Hi, the internet literature about this is certainly horrible. Trying to piece it together: they iterate a linear transformation f : V -> V of a finite-dimensional real vector space V, and for p in V they apply a positive definite quadratic form q to each term of the sequence p, fp, f^2 p, f^3 p, ....
The difference of two consecutive terms, q(f^{k+1} p) - q(f^k p), is the quadratic form q(fy) - q(y) evaluated at y = f^k p, and if this form is negative semidefinite as a function of y, they claim that the original sequence always has to tend to zero [I don't understand this step. We get that the squared distance from the origin is non-increasing, hence convergent, but semidefiniteness alone cannot force the limit to be zero: for f the identity, q(fy) - q(y) = 0 is negative semidefinite, yet the sequence is constant. Presumably the definite case, or some extra structure, is being used].
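To see both behaviors concretely, here is a small numpy sketch; the particular F, Q, and starting point p are my own toy choices, not taken from any of the sources:

```python
import numpy as np

# Toy illustration of the iteration p, Fp, F^2 p, ... under q(x) = x'Qx.
# F, Q, p are my own choices, purely for illustration.

def is_neg_semidef(A, tol=1e-10):
    S = (A + A.T) / 2                  # only the symmetric part matters
    return np.all(np.linalg.eigvalsh(S) <= tol)

# Contractive case: F is 0.9 times a rotation, q(x) = |x|^2 (Q = I).
t = 0.7
F = 0.9 * np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
Q = np.eye(2)
assert is_neg_semidef(F.T @ Q @ F - Q)

x = np.array([1.0, 2.0])               # the starting point p
vals = []
for _ in range(10):
    vals.append(x @ Q @ x)             # q(p), q(Fp), q(F^2 p), ...
    x = F @ x
print(np.round(vals, 4))               # non-increasing, and here -> 0

# But semidefiniteness alone does not force decay: F = I also gives
# F'QF - Q = 0 (negative semidefinite), yet q(F^k p) = q(p) for all k.
```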
Applying this to the case when y is an eigenvector (for complex eigenvalues, pass to the complexification and the Hermitian extension of q), we can also deduce that all eigenvalues of f have magnitude at most 1: if fy = cy, then q(fy) - q(y) = (|c|^2 - 1)q(y) <= 0 while q(y) > 0. If q(fy) - q(y) is actually negative definite, the inequality is strict. If you let Q be the symmetric matrix with q(x) = x'Qx, and F be the matrix of f, you are looking at y'F'QFy - y'Qy = y'(F'QF-Q)y, and so we have proved a little lemma about real square matrices,
**Lemma:** if F is a real square matrix, and there is a positive definite symmetric Q so that F'QF-Q is negative semidefinite, then all eigenvalues of F have magnitude at most 1; if F'QF-Q is negative definite, then all eigenvalues of F have magnitude strictly less than 1.
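The lemma is easy to test numerically. A quick sketch using scipy (the random F, the dimension, and the right-hand side -I are all my choices; scipy.linalg.solve_discrete_lyapunov(A, R) returns the X with A X A' - X = -R, so taking A = F' gives exactly the equation in the lemma):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Sanity check of the lemma on a random matrix.
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 4))
F *= 0.95 / np.max(np.abs(np.linalg.eigvals(F)))   # force spectral radius 0.95

Q = solve_discrete_lyapunov(F.T, np.eye(4))        # F'QF - Q = -I (neg. definite)
assert np.all(np.linalg.eigvalsh(Q) > 0)           # Q is positive definite

# The eigenvector step, over the complexification: for Fx = c x,
# x*(F'QF - Q)x = (|c|^2 - 1) x*Qx, and the left side is <= 0.
cs, X = np.linalg.eig(F)
for c, x in zip(cs, X.T):
    lhs = np.real(np.conj(x) @ (F.T @ Q @ F - Q) @ x)
    rhs = (abs(c) ** 2 - 1) * np.real(np.conj(x) @ Q @ x)
    assert lhs <= 1e-10 and abs(lhs - rhs) < 1e-6
print(np.max(np.abs(cs)))              # < 1, as the definite case predicts
```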
In the case when F is the matrix which represents multiplication by T mod h(T) on R[T]/(h(T)), where h is a real polynomial with coefficients a_i (so F is the companion matrix of h), you can solve the equation F'QF-Q = -gg', where g is the column vector (a_0, a_1, ..., a_{n-1}) - a_n (a_n, a_{n-1}, ..., a_1) and Q is a particular symmetric matrix which you can write down explicitly. This matrix Q is called the Schur-Cohn matrix of h.
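Here is that construction as I read it, as a hedged sketch: I am assuming h is monic (a_n = 1), that the first entry of g is a_0, and that the right-hand side is the rank-one matrix gg':

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# h(T) = a_0 + a_1 T + ... + a_{n-1} T^{n-1} + T^n; here T^2 + 0.5 T + 0.25.
a = np.array([0.25, 0.5, 1.0])
n = len(a) - 1

# F = multiplication by T on R[T]/(h(T)) in the basis 1, T, ..., T^{n-1}:
# T * T^j = T^{j+1} for j < n-1, and T * T^{n-1} = T^n = -(a_0 + ... + a_{n-1} T^{n-1}).
F = np.zeros((n, n))
F[1:, :-1] = np.eye(n - 1)
F[:, -1] = -a[:n]

g = a[:n] - a[n] * a[:0:-1]            # (a_0,...,a_{n-1}) - a_n (a_n,...,a_1)
Q = solve_discrete_lyapunov(F.T, np.outer(g, g))   # solves F'QF - Q = -gg'

assert np.allclose(F.T @ Q @ F - Q, -np.outer(g, g))
assert np.allclose(Q, Q.T)
print(np.round(Q, 4))                  # symmetric, and here positive definite
```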
The lemma then says that if the Schur-Cohn matrix is positive definite, all eigenvalues of F have magnitude at most 1. The classical Schur-Cohn theory sharpens this to strictly less than 1: for an eigenvector x with eigenvalue of magnitude exactly 1, the identity F'QF-Q = -gg' would force g'x = 0, which the classical arguments rule out. And the eigenvalues of F are just the roots of the polynomial h in any case. Thus
**Theorem**. If the Schur-Cohn matrix of h is positive definite, all roots of h have magnitude less than 1.
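As a sanity check on the theorem as stated, here is the construction above packaged as a test and compared against numpy's root finder on a few sample polynomials (again under my own normalizations, so take the details with a grain of salt):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def schur_cohn_Q(a):
    """a = (a_0, ..., a_n), coefficients of h, assumed monic (a_n = 1)."""
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    F = np.zeros((n, n))                # companion matrix of h, as above
    F[1:, :-1] = np.eye(n - 1)
    F[:, -1] = -a[:n]
    g = a[:n] - a[n] * a[:0:-1]
    return solve_discrete_lyapunov(F.T, np.outer(g, g))

def roots_inside(a):
    return np.all(np.abs(np.roots(np.asarray(a)[::-1])) < 1)  # highest degree first

for a in [(0.25, 0.5, 1.0),            # roots of modulus 0.5
          (0.24, -0.38, -0.7, 1.0),    # roots 0.5, -0.6, 0.8
          (1.5, 0.5, 1.0)]:            # roots of modulus > 1
    Q = schur_cohn_Q(a)
    pd = np.all(np.linalg.eigvalsh((Q + Q.T) / 2) > 0)
    print(a, "| Q positive definite:", pd, "| roots inside unit disk:", roots_inside(a))
```

For these examples the two printed columns should agree, which is at least consistent with the classical statement being an if and only if.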