
## The QR Method

For reasonably-sized matrices, this is the most efficient and most widely used method. There is very good code in EISPACK on NETLIB to do this. For large matrices, people are turning to Arnoldi methods (see Lehoucq). The method appeared in 1961 (Francis). Here is an elementary introduction. Given $A \in \mathbb{R}^{n \times n}$, there is a factorization

$$A = QR,$$

where $R$ is upper triangular and $Q$ is orthogonal. With $A$ real, both are chosen real. The motivation is that we will construct an iterative technique to find the eigenvalues using similarity transformations. Orthogonal matrices accomplish the transformations in such a way that they will not worsen the condition number or stability of the eigenvalues of a non-symmetric matrix. A special class of orthogonal matrices is known as "Householder matrices," and we will concentrate on these.

Note: The term "orthogonal matrix" should be restricted to real matrices. However, usage of the term has been extended to complex matrices. In the complex case the matrices should be understood as being "unitary."

Let $w \in \mathbb{R}^n$ with $w^T w = 1$. Define

$$U = I - 2ww^T; \qquad (86)$$

this is the general form of a "Householder matrix."

Example) For any $w$ with $w^T w = 1$, $U$ is orthogonal: because $U^T = U$ and the associative law applies, then

$$U^T U = (I - 2ww^T)(I - 2ww^T) = I - 4ww^T + 4w(w^T w)w^T = I.$$
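As a quick numerical check of this identity (a sketch using NumPy; the unit vector `w` here is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a Householder matrix U = I - 2 w w^T from a unit vector w.
w = rng.standard_normal(5)
w /= np.linalg.norm(w)                   # enforce w^T w = 1
U = np.eye(5) - 2.0 * np.outer(w, w)

# U is symmetric, and U^T U = U^2 = I, so U is orthogonal.
print(np.allclose(U, U.T))               # True
print(np.allclose(U.T @ U, np.eye(5)))   # True
```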

The QR Factorization of a Matrix

Let $A \in \mathbb{R}^{n \times n}$. We will construct Householder matrices $P_1, \ldots, P_{n-1}$ with

$$P_k = I - 2 w^{(k)} (w^{(k)})^T \qquad (87)$$

such that $P_{n-1} \cdots P_1 A = R$ is upper triangular. Writing $A$ in terms of its columns we have $A = [a_{*1}, a_{*2}, \ldots, a_{*n}]$.

Pick $P_1$ and $w^{(1)}$ using the following construction: we want to use the Householder matrix to transform a nonzero vector into a new vector containing mainly zeros. Let $b \in \mathbb{R}^n$ be given, and suppose we want to produce $U = I - 2ww^T$ of the form (86) such that $Ub$ contains zeros in positions $r+1$ through $n$, for some $r \geq 1$. Choose $w$ as in (88). Then the first $r-1$ elements of $b$ and $Ub$ are the same.

Let's write

$$w = [\underbrace{0, \ldots, 0}_{r-1}, w_r, \ldots, w_n]^T \qquad (88)$$

$$b = [b_1, \ldots, b_{r-1}, b_r, \ldots, b_n]^T \qquad (89)$$

and $w_r^2 + \cdots + w_n^2 = 1$. Then our restriction on the form of $Ub$ requires the first $r-1$ components of $b$ and $Ub$ to be the same, and

$$Ub = [b_1, \ldots, b_{r-1}, -\alpha, 0, \ldots, 0]^T \qquad (90)$$

for some $\alpha$. Since $U$ is orthogonal, the length of $b$ is preserved:

$$\|Ub\|_2 = \|b\|_2. \qquad (91)$$

Let

$$p = w^T b = w_r b_r + \cdots + w_n b_n;$$

from (91),

$$\alpha = s\sqrt{b_r^2 + \cdots + b_n^2}, \qquad s = \pm 1. \qquad (92)$$

Multiplying out $Ub = b - 2(w^T b)w = b - 2pw$ and comparing with (90) implies

$$b_r - 2p w_r = -\alpha, \qquad b_j - 2p w_j = 0, \quad j = r+1, \ldots, n. \qquad (93)$$

Solving (93) for the $w_j$ and substituting these into $p = w^T b$, using the first component of (93), gives

$$p = \sqrt{\tfrac{1}{2}\left(\alpha^2 + \alpha b_r\right)}. \qquad (94)$$

Choose the sign $s$ in (92) by

$$s = \operatorname{sign}(b_r)$$

(take $s = 1$ if $b_r = 0$). This choice maximizes $p^2$ and avoids loss-of-significance errors in the calculation of $w_r$. Having $\alpha$, obtain $p$ from (94). Return to (93) and then, using its components,

$$w_r = \frac{b_r + \alpha}{2p}, \qquad w_j = \frac{b_j}{2p}, \quad j = r+1, \ldots, n.$$

Once $w$ is obtained, $U$ and $Ub$ are obtained. The operation count can be shown to be proportional to $n - r$.
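The construction of $w$ can be sketched in NumPy as follows. The function name `householder_vector` and the test vector are my own; the equation numbers in the comments refer to (92)–(94) above.

```python
import numpy as np

def householder_vector(b, r):
    """Unit vector w, zero in its first r-1 slots, such that
    U = I - 2 w w^T maps b to (b_1, ..., b_{r-1}, -alpha, 0, ..., 0).
    Indexing follows the notes: r is 1-based.  Assumes b_r, ..., b_n
    are not all zero (otherwise one simply takes U = I)."""
    b = np.asarray(b, dtype=float)
    k = r - 1                                       # 0-based index of b_r
    s = 1.0 if b[k] >= 0 else -1.0                  # sign choice s = sign(b_r)
    alpha = s * np.linalg.norm(b[k:])               # eq. (92)
    p = np.sqrt(0.5 * (alpha**2 + alpha * b[k]))    # eq. (94)
    w = np.zeros(b.size)
    w[k] = (b[k] + alpha) / (2.0 * p)               # first component of (93)
    w[k+1:] = b[k+1:] / (2.0 * p)                   # remaining components of (93)
    return w

b = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
w = householder_vector(b, r=2)                      # annihilate positions 3..5
U = np.eye(5) - 2.0 * np.outer(w, w)
print(np.round(U @ b, 10))                          # b_1 kept, tail zeroed
```

Here $Ub = [3, -\sqrt{43}, 0, 0, 0]^T$, since $\alpha = \sqrt{1^2 + 4^2 + 1^2 + 5^2} = \sqrt{43}$ and $b_1$ is untouched.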

Now $A_2 = P_1 A$ contains zeros below the diagonal in its first column. Choose $P_2$ similarly, so that $A_3 = P_2 A_2$ contains zeros in its second column below the diagonal. Note that $A_2$ and $A_3$ have the same elements in row 1 and column 1. So to get $P_2$ and $w^{(2)}$, replace $b$ in (90) by the second column of $A_2$ and take $r = 2$. Continue the procedure until we obtain an upper triangular matrix

$$R = A_n = P_{n-1} \cdots P_1 A.$$

If at the $k$-th step of the procedure all elements below the diagonal of column $k$ are zero, then choose $P_k = I$ and go to the next step. To complete the construction let

$$Q = (P_{n-1} \cdots P_1)^T = P_1 P_2 \cdots P_{n-1},$$

which is orthogonal. Then $A = QR$.
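Putting the pieces together, here is a minimal, unoptimized sketch of the full factorization in NumPy (the function name `householder_qr` is my own; each pass applies one reflector $P_k$ to zero column $k$ below the diagonal):

```python
import numpy as np

def householder_qr(A):
    """QR factorization by the P_{n-1} ... P_1 A = R construction.
    Returns orthogonal Q and upper-triangular R with A = Q R."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = A.copy()
    Q = np.eye(n)
    for k in range(n - 1):
        x = R[k:, k]
        alpha = np.linalg.norm(x)
        if alpha == 0.0:
            continue                      # column is zero from the diagonal down: P_k = I
        s = 1.0 if x[0] >= 0 else -1.0    # sign choice avoiding cancellation
        v = x.copy()
        v[0] += s * alpha                 # v is proportional to w; only its direction matters
        v /= np.linalg.norm(v)
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])   # apply P_k on the left
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)   # accumulate Q = P_1 P_2 ... P_{n-1}
    return Q, R

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A))             # True
print(np.allclose(Q.T @ Q, np.eye(3)))   # True
```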

Now that we know how to obtain a $QR$ factorization we return to getting the eigenvalues:

Assume $A$ is real, so $Q$ and $R$ can be found to be real. Let $A_1 = A$ and define a sequence

$$A_m = Q_m R_m, \qquad A_{m+1} = R_m Q_m, \qquad m = 1, 2, \ldots$$

Since

$$A_{m+1} = R_m Q_m = Q_m^T (Q_m R_m) Q_m = Q_m^T A_m Q_m,$$

$A_{m+1}$ is orthogonally similar to $A_m$, and hence to $A$. The sequence $\{A_m\}$ will converge either to a triangular matrix with the eigenvalues of $A$ on its diagonal or to a near-triangular matrix from which the eigenvalues can easily be calculated. This is slow, but a "shifting" technique accelerates things considerably (not discussed here).
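A bare-bones version of the iteration, with no shifting and no preparation of the matrix, just the definition above applied with NumPy's built-in QR (the test matrix is my own):

```python
import numpy as np

# Unshifted QR iteration: factor A_m = Q_m R_m, then set A_{m+1} = R_m Q_m.
# Every A_m is orthogonally similar to A, so the eigenvalues are preserved.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
Am = A.copy()
for _ in range(200):
    Qm, Rm = np.linalg.qr(Am)
    Am = Rm @ Qm

# For this symmetric example the iterates approach a diagonal matrix
# holding the eigenvalues of A.
print(np.round(np.sort(np.diag(Am)), 6))
print(np.round(np.linalg.eigvalsh(A), 6))   # reference values, ascending
```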

Remark: The $QR$ iteration is very expensive even with shifting. So the first step is to "prepare" the matrix by making it simpler. If $A$ is symmetric, it can be reduced to a similar symmetric tridiagonal matrix. If not symmetric, it can be reduced to its equivalent "upper Hessenberg" matrix by unitary similarity transformations. An "upper Hessenberg" matrix has the form

$$h_{ij} = 0 \quad \text{for } i > j + 1,$$

i.e. all entries below the first subdiagonal are zero.

Note that if $A$ is tridiagonal it is also Hessenberg. However, if $A$ is tridiagonal, an efficient way to calculate its eigenvalues is by using "Sturm sequences." (See Golub and Van Loan.)
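The Hessenberg reduction mentioned in the remark can be sketched with the same Householder machinery, applied on both sides so the result stays similar to $A$. The function `hessenberg` below is my own minimal version; production code would use LAPACK (e.g. via `scipy.linalg.hessenberg`).

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form H = Q^T A Q via Householder
    similarity transformations (zeros below the first subdiagonal)."""
    H = np.array(A, dtype=float, copy=True)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k+1:, k]
        alpha = np.linalg.norm(x)
        if alpha == 0.0:
            continue                      # column already in Hessenberg form
        s = 1.0 if x[0] >= 0 else -1.0
        v = x.copy()
        v[0] += s * alpha
        v /= np.linalg.norm(v)
        # Apply P_k = I - 2 v v^T on both sides so H stays similar to A.
        H[k+1:, :] -= 2.0 * np.outer(v, v @ H[k+1:, :])
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
    return H

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [4.0, 5.0, 6.0, 7.0],
              [2.0, 1.0, 0.0, 3.0],
              [1.0, 2.0, 2.0, 1.0]])
H = hessenberg(A)
print(np.allclose(np.tril(H, -2), 0.0))    # True: Hessenberg pattern
print(np.allclose(np.poly(H), np.poly(A))) # same characteristic polynomial
```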

Convergence of QR: Let $A$ be real with eigenvalues satisfying

$$|\lambda_1| > |\lambda_2| > \cdots > |\lambda_n|; \qquad (95)$$

then the iterates $A_m$ of the method will converge to an upper triangular matrix with $\lambda_1, \ldots, \lambda_n$ as diagonal elements. If $A$ is symmetric, the sequence converges to a diagonal matrix. For the rate of convergence, the entries below the diagonal of $A_m$ go to zero like

$$\left|\frac{\lambda_{i+1}}{\lambda_i}\right|^m.$$

For matrices with eigenvalues not satisfying (95), the iterates may not converge to a triangular matrix. For symmetric $A$ the sequence converges to a block diagonal matrix

$$D = \begin{bmatrix} B_1 & & \\ & \ddots & \\ & & B_r \end{bmatrix},$$

in which the $B_i$ are single elements or $2 \times 2$ matrices. Thus the eigenvalues of $A$ can be easily computed from those of the $B_i$. For $A$ non-symmetric, the situation is more complicated but still acceptable.
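To see the block behavior, here is a small experiment of my own with a non-symmetric matrix whose eigenvalues are $3$ and the complex pair $\pm i$, so (95) fails. The limit is block upper triangular rather than triangular, but the eigenvalues are still read off the $1 \times 1$ and $2 \times 2$ diagonal blocks.

```python
import numpy as np

# Eigenvalues are 3 and the pair +-i: their magnitudes are not all
# distinct, so unshifted QR cannot drive the matrix to triangular form.
A = np.array([[0.0, -1.0, 1.0],
              [1.0,  0.0, 2.0],
              [0.0,  0.0, 3.0]])
Am = A.copy()
for _ in range(100):
    Qm, Rm = np.linalg.qr(Am)
    Am = Rm @ Qm

# A persistent 2x2 diagonal block remains; its eigenvalues are the
# complex pair +-i, while the 1x1 block carries the eigenvalue 3.
print(np.round(Am, 6))
print(np.round(np.linalg.eigvals(Am[:2, :2]), 6))
```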

Juan Restrepo 2003-04-12