
Session 9: Approximating Eigenvalues
Ernesto Gutierrez-Miravete
Fall, 200

1 Linear Algebra and Eigenvalues

Recall that for any matrix $A$, the zeros of the characteristic polynomial $p(\lambda) = \det(A - \lambda I)$ are the eigenvalues of $A$, and a vector $x$ which solves $(A - \lambda I)x = 0$ is the eigenvector of $A$ associated with $\lambda$.

Definition of Linearly Independent Set of Vectors. The set of vectors $\{v^{(i)}\}_{i=1}^{n}$ is linearly independent (LI) if whenever $\sum_{i=1}^{n} \alpha_i v^{(i)} = 0$ then $\alpha_i = 0$ for all $i = 1, 2, \ldots, n$.

Theorem of Representation. If $\{v^{(i)}\}_{i=1}^{n} \subset \mathbb{R}^n$ is a LI set of vectors, then any vector $x$ in $\mathbb{R}^n$ can be expressed uniquely as $x = \sum_{i=1}^{n} \alpha_i v^{(i)}$, where the $\alpha_i$'s are numbers.

Theorem of Linear Independence of Eigenvectors. If $A$ is a matrix and $\{\lambda_i\}$ for $i = 1, 2, \ldots, n$ are distinct eigenvalues of $A$ with associated eigenvectors $\{x^{(i)}\}_{i=1}^{n}$, then the eigenvector set is LI.

Definition of Orthogonality and Orthonormality. The set of vectors $\{v^{(i)}\}_{i=1}^{n}$ is orthogonal if $(v^{(i)})^t v^{(j)} = 0$ for all $i \neq j$; it is also orthonormal if, in addition, $(v^{(i)})^t v^{(i)} = \|v^{(i)}\|_2^2 = 1$ for each $i = 1, 2, \ldots, n$. Example 2.
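
As a quick numerical illustration of these definitions (not part of the original notes; the test matrix is arbitrary), the following Python/NumPy sketch checks that each eigenpair satisfies $(A - \lambda I)x = 0$ up to roundoff and that the eigenvectors of a symmetric matrix form an orthonormal set:

import numpy as np

# Arbitrary symmetric test matrix (for illustration only).
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])

# Eigenvalues (ascending) and eigenvectors (columns of V) of a symmetric matrix.
lam, V = np.linalg.eigh(A)

for i in range(A.shape[0]):
    # Each eigenpair satisfies (A - lambda_i I) v^(i) = 0 up to roundoff.
    r = (A - lam[i] * np.eye(A.shape[0])) @ V[:, i]
    print(f"lambda_{i+1} = {lam[i]: .6f}   residual = {np.linalg.norm(r):.1e}")

# For a symmetric matrix the eigenvectors form an orthonormal set,
# so V^t V is (numerically) the identity matrix.
print(np.round(V.T @ V, 12))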

Theorem of Linear Independence of Orthogonal Sets of Vectors. An orthogonal set of vectors not containing $0$ is LI.

Definition of Orthogonal Matrix. A matrix $A$ is orthogonal iff $A^{-1} = A^t$.

Definition of Similar Matrices. Two matrices $A$ and $B$ are similar if there is a non-singular matrix $S$ such that $A = S^{-1} B S$.

Theorem of Eigenvectors of Similar Matrices. If $A$ and $B$ are similar and $\lambda$, $x$ are an eigenvalue and eigenvector of $A$, then $\lambda$ is also an eigenvalue of $B$ and $Sx$ is its associated eigenvector.

For a triangular matrix the eigenvalues are the solutions to
$$0 = \det(A - \lambda I) = \prod_{i=1}^{n} (a_{ii} - \lambda).$$

Theorem of Schur (Schur Transformation). For any matrix $A$ there is always a nonsingular matrix $U$ such that $T = U^{-1} A U$, where $T$ is an upper-triangular matrix containing the eigenvalues of $A$ along its main diagonal. The matrix $U$ is unitary since it satisfies $\|Ux\|_2 = \|x\|_2$ for any $x$.

Theorem of Symmetric Matrices. If $A$ is symmetric and $D$ is diagonal containing the eigenvalues of $A$, a matrix $P$ exists such that $D = P^{-1} A P$. Further, if $A$ is $n \times n$, its eigenvalues are real numbers and its associated $n$ eigenvectors are an orthonormal set.

Theorem of Positive Definite Matrix. If $A$ is symmetric and all its eigenvalues are positive, then it is positive definite.

Theorem of Gerschgorin. If $A$ is $n \times n$ and
$$R_i = \Big\{ z \in \mathbb{C} : |z - a_{ii}| \le \sum_{j=1, j \neq i}^{n} |a_{ij}| \Big\}$$
is the circle in the complex plane centered at $a_{ii}$ with radius $\sum_{j=1, j \neq i}^{n} |a_{ij}|$, then the eigenvalues of $A$ are contained within $R = \bigcup_{i=1}^{n} R_i$. Example 4.
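
The Gerschgorin circles are easy to compute directly. The following sketch (illustrative only; the test matrix is arbitrary and not from the notes) forms each center $a_{ii}$ and radius $\sum_{j \neq i} |a_{ij}|$ and reports which circles contain each eigenvalue:

import numpy as np

# Arbitrary test matrix (for illustration only).
A = np.array([[ 4.0,  1.0,  0.0],
              [ 1.0,  2.0, -1.0],
              [ 0.0, -1.0, -3.0]])

centers = np.diag(A)                                  # a_ii
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # sum of |a_ij| over j != i

# Every eigenvalue must lie in at least one Gerschgorin circle R_i.
for lam in np.linalg.eigvals(A):
    inside = np.where(np.abs(lam - centers) <= radii)[0] + 1
    print(f"lambda = {lam: .4f} lies in circle(s) {inside.tolist()}")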

2 The Power Method

Let $A$ be $n \times n$, $\{\lambda_i\}_{i=1}^{n}$ its eigenvalues and $\{v^{(i)}\}_{i=1}^{n}$ its associated LI eigenvectors. Let the eigenvalues be ordered
$$|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n|.$$
Any vector $x$ can be written as
$$x = \sum_{j=1}^{n} \alpha_j v^{(j)},$$
where the $\alpha_j$'s are numbers. If $A$ is applied on both sides one gets
$$Ax = \sum_{j=1}^{n} \alpha_j A v^{(j)} = \sum_{j=1}^{n} \alpha_j \lambda_j v^{(j)}.$$
Applying $A$ $k$ times one gets
$$A^k x = \sum_{j=1}^{n} \alpha_j A^k v^{(j)} = \sum_{j=1}^{n} \alpha_j \lambda_j^k v^{(j)}.$$
Factoring $\lambda_1^k$ out, one gets
$$A^k x = \lambda_1^k \sum_{j=1}^{n} \alpha_j \left( \frac{\lambda_j}{\lambda_1} \right)^k v^{(j)}.$$
Since $\lambda_1$ is the largest eigenvalue,
$$\lim_{k \to \infty} A^k x = \lim_{k \to \infty} \lambda_1^k \alpha_1 v^{(1)}.$$
This looks promising as a way to get $\lambda_1$, but scaling is needed.

Starting with a scaling vector $x^{(m)}$ with unit $l_\infty$ norm (the first guess for the eigenvector associated with the dominant eigenvalue), a vector $y^{(m+1)} = A x^{(m)}$ is then produced. Next, the component of $y^{(m+1)}$ with smallest index whose magnitude is equal to $\|y^{(m+1)}\|_\infty$ is identified and called $y^{(m+1)}_{p_{m+1}}$. This is the first approximation to the dominant eigenvalue. An (improved) guess for the eigenvector is then obtained as $x^{(m+1)} = y^{(m+1)} / y^{(m+1)}_{p_{m+1}}$, and the process is repeated.

Specifically, select a vector $x^{(0)}$ with unit $l_\infty$ norm and a unit component $x^{(0)}_{p_0}$ of $x^{(0)}$. Now let $y^{(1)} = A x^{(0)}$ and $\lambda^{(1)} = y^{(1)}_{p_0}$. Therefore
$$\lambda^{(1)} = y^{(1)}_{p_0} = \frac{y^{(1)}_{p_0}}{x^{(0)}_{p_0}} = \lambda_1 \, \frac{\alpha_1 v^{(1)}_{p_0} + \sum_{j=2}^{n} \alpha_j (\lambda_j / \lambda_1) v^{(j)}_{p_0}}{\alpha_1 v^{(1)}_{p_0} + \sum_{j=2}^{n} \alpha_j v^{(j)}_{p_0}}.$$
Then let $p_1$ be the least integer such that $|y^{(1)}_{p_1}| = \|y^{(1)}\|_\infty$ and define $x^{(1)} = y^{(1)} / y^{(1)}_{p_1} = A x^{(0)} / y^{(1)}_{p_1}$, which has unit $l_\infty$ norm. If now $y^{(2)} = A x^{(1)} = A^2 x^{(0)} / y^{(1)}_{p_1}$, then
$$\lambda^{(2)} = y^{(2)}_{p_1} = \frac{y^{(2)}_{p_1}}{x^{(1)}_{p_1}} = \lambda_1 \, \frac{\alpha_1 v^{(1)}_{p_1} + \sum_{j=2}^{n} \alpha_j (\lambda_j / \lambda_1)^2 v^{(j)}_{p_1}}{\alpha_1 v^{(1)}_{p_1} + \sum_{j=2}^{n} \alpha_j (\lambda_j / \lambda_1) v^{(j)}_{p_1}}.$$
Thus, sequences of vectors $\{x^{(m)}\}_{m=0}^{\infty}$ and $\{y^{(m)}\}_{m=1}^{\infty}$ and of scalars $\{\lambda^{(m)}\}_{m=1}^{\infty}$ can be produced by
$$y^{(m)} = A x^{(m-1)},$$
$$\lambda^{(m)} = y^{(m)}_{p_{m-1}} = \lambda_1 \, \frac{\alpha_1 v^{(1)}_{p_{m-1}} + \sum_{j=2}^{n} \alpha_j (\lambda_j / \lambda_1)^m v^{(j)}_{p_{m-1}}}{\alpha_1 v^{(1)}_{p_{m-1}} + \sum_{j=2}^{n} \alpha_j (\lambda_j / \lambda_1)^{m-1} v^{(j)}_{p_{m-1}}},$$
$$x^{(m)} = \frac{y^{(m)}}{y^{(m)}_{p_m}},$$
where at each step $p_m$ is the smallest integer for which $|y^{(m)}_{p_m}| = \|y^{(m)}\|_\infty$. Because of the dominance of the first eigenvalue, $\lim_{m \to \infty} \lambda^{(m)} = \lambda_1$ and $x^{(m)}$ converges to the eigenvector associated with $\lambda_1$ that has unit $l_\infty$ norm. Aitken's $\Delta^2$ procedure can be applied to speed convergence. Algorithm 9.1 (p. 562) Power method. Example 1.

If $A$ is symmetric, convergence can be improved by use of the $l_2$ norm in the definition of the scaling parameters. Algorithm 9.2 (pp. 565) Symmetric Power Method. Example 2.
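
A minimal Python/NumPy sketch of this iteration (in the spirit of Algorithm 9.1 but not the textbook's pseudocode; the function name is ad hoc, no Aitken acceleration is used, and the stopping test simply compares successive scaled vectors). For the symmetric variant of Algorithm 9.2 the $l_\infty$ scaling would be replaced by the $l_2$ norm:

import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=500):
    """Approximate the dominant eigenvalue of A and an eigenvector of unit
    l-infinity norm by the scaled power iteration described above."""
    x = x0 / np.linalg.norm(x0, np.inf)      # unit l-infinity norm
    p = np.argmax(np.abs(x))                 # smallest index of a largest component
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam = y[p]                           # lambda^(m) = y^(m)_{p_{m-1}}
        p = np.argmax(np.abs(y))             # p_m with |y_{p_m}| = ||y||_inf
        if y[p] == 0.0:
            raise ValueError("A has the eigenvalue 0; choose a different x0")
        x_new = y / y[p]                     # x^(m), rescaled to unit l-infinity norm
        if np.linalg.norm(x - x_new, np.inf) < tol:
            return lam, x_new
        x = x_new
    return lam, x

# Example use with a 3 x 3 matrix A:
# lam1, v1 = power_method(A, np.ones(3))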

Theorem of Bound on Eigenvalues. If $A$ is symmetric with eigenvalues $\lambda_i$ and $\|Ax - \lambda x\|_2 < \epsilon$ for some vector $x$ with $\|x\|_2 = 1$ and some real $\lambda$, then
$$\min_{1 \le j \le n} |\lambda_j - \lambda| < \epsilon.$$

Algorithm 9.3 (p. 568) Inverse Power method. If the matrix $A$ has eigenvalues $\lambda_i$, the matrix $(A - qI)^{-1}$ with $q \neq \lambda_i$ has eigenvalues $(\lambda_i - q)^{-1}$. Application of the Power method to $(A - qI)^{-1}$ gives the Inverse Power method. In this case, the vector $y^{(m)}$ is obtained by solving the following system, usually by Gaussian elimination with pivoting:
$$(A - qI) y^{(m)} = x^{(m-1)}.$$
The initial choice of the number $q$ can come from Gerschgorin's theorem or otherwise. One possible choice is given by the following formula:
$$q = \frac{x^{(0)t} A x^{(0)}}{x^{(0)t} x^{(0)}}.$$
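
A corresponding sketch of the inverse power method (function name ad hoc). The system $(A - qI) y^{(m)} = x^{(m-1)}$ is solved here with a dense solver, np.linalg.solve, in place of explicit Gaussian elimination with pivoting; $q$ is assumed not to be an eigenvalue of $A$, and the eigenvalue of $A$ closest to $q$ is recovered from the dominant eigenvalue $\mu$ of $(A - qI)^{-1}$ as $\lambda = q + 1/\mu$:

import numpy as np

def inverse_power_method(A, x0, q, tol=1e-10, max_iter=500):
    """Inverse power method: scaled power iteration applied to (A - qI)^{-1}."""
    n = A.shape[0]
    M = A - q * np.eye(n)                    # q must not be an eigenvalue of A
    x = x0 / np.linalg.norm(x0, np.inf)
    p = np.argmax(np.abs(x))
    mu = 1.0
    for _ in range(max_iter):
        y = np.linalg.solve(M, x)            # solve (A - qI) y = x
        mu = y[p]                            # dominant eigenvalue estimate of (A - qI)^{-1}
        p = np.argmax(np.abs(y))
        x_new = y / y[p]
        if np.linalg.norm(x - x_new, np.inf) < tol:
            return q + 1.0 / mu, x_new       # eigenvalue of A nearest q
        x = x_new
    return q + 1.0 / mu, x

# A natural starting shift, as in the notes, is the Rayleigh quotient of x0:
# q = (x0 @ A @ x0) / (x0 @ x0)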

Once the dominant eigenvalue has been determined, subsequent eigenvalues can be computed in principle by deflation techniques.

Theorem of Eigenvalue Relationships. If $A$ has eigenvalues $\lambda_i$ and associated eigenvectors $v^{(i)}$ with $i = 1, 2, \ldots, n$, and if there is a vector $x$ such that $x^t v^{(1)} = 1$, the matrix $B = A - \lambda_1 v^{(1)} x^t$ has eigenvalues $0, \lambda_2, \lambda_3, \ldots, \lambda_n$ with associated eigenvectors $v^{(1)}, w^{(2)}, w^{(3)}, \ldots, w^{(n)}$, where
$$v^{(i)} = (\lambda_i - \lambda_1) w^{(i)} + \lambda_1 (x^t w^{(i)}) v^{(1)}.$$

Wielandt deflation selects $x$ as follows:
$$x = \frac{1}{\lambda_1 v^{(1)}_i} \begin{bmatrix} a_{i1} \\ a_{i2} \\ \vdots \\ a_{in} \end{bmatrix},$$
where $v^{(1)}_i$ is a non-zero component of $v^{(1)}$. This gives
$$x^t v^{(1)} = \frac{1}{\lambda_1 v^{(1)}_i} \sum_{j=1}^{n} a_{ij} v^{(1)}_j = 1.$$
Wielandt deflation is not adequate for the determination of all the eigenvalues of $A$ because the process rapidly accumulates roundoff error. Example 4. Algorithm 9.4 (p. 56) Wielandt Deflation.
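
A short sketch of Wielandt deflation as described above (illustrative only; the function name is ad hoc). After deflating, the power method sketch given earlier can be applied to $B$ to approximate $\lambda_2$, keeping in mind the roundoff warning:

import numpy as np

def wielandt_deflation(A, lam1, v1):
    """Return B = A - lam1 * v1 x^t with x chosen as in Wielandt deflation,
    so that B has eigenvalues 0, lam2, ..., lamn."""
    i = np.argmax(np.abs(v1))                # index of a (largest) nonzero component of v1
    x = A[i, :] / (lam1 * v1[i])             # x^t = (row i of A) / (lam1 * v1_i)
    return A - lam1 * np.outer(v1, x)        # then x^t v1 = 1 by construction

# Example use together with the earlier power method sketch:
# lam1, v1 = power_method(A, np.ones(3))
# B = wielandt_deflation(A, lam1, v1)
# lam2, w2 = power_method(B, np.ones(3))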

3 Householder's Transformation

Householder's transformation produces a symmetric tridiagonal matrix $B$ which is similar to a given symmetric matrix $A$. This is done by sequentially and selectively zeroing columns of the matrix. A key feature of the transformations is that they are very stable against roundoff error.

Definition of Householder Transformation. The $n \times n$ matrix
$$P = I - 2ww^t,$$
where $w^t w = 1$, is called the Householder transformation.

Theorem of Symmetry and Orthogonality of the HT. If $P$ is a Householder transformation, it is symmetric and orthogonal, i.e. $P^{-1} = P^t = P$.

The idea is to produce a sequence of transformed matrices $A^{(2)} = P^{(1)} A P^{(1)}$, $A^{(3)} = P^{(2)} A^{(2)} P^{(2)}$, ..., $A^{(n-1)} = P^{(n-2)} A^{(n-2)} P^{(n-2)}$ such that $A^{(n-1)}$ is symmetric tridiagonal. Specifically, to determine $A^{(2)}$, $P^{(1)}$ is found such that $a^{(2)}_{11} = a_{11}$ and $a^{(2)}_{j1} = 0$ for each $j = 3, 4, \ldots, n$. The required components of $w$ are
$$w_1 = 0, \qquad w_2 = \frac{a_{21} - \alpha}{2r}, \qquad w_j = \frac{a_{j1}}{2r} \quad \text{for each } j = 3, 4, \ldots, n,$$
where
$$\alpha = -\big(\mathrm{sign}(a_{21})\big) \Big( \sum_{j=2}^{n} a_{j1}^2 \Big)^{1/2}, \qquad r = \Big( \tfrac{1}{2}\alpha^2 - \tfrac{1}{2} a_{21} \alpha \Big)^{1/2}.$$
The process is then repeated to determine $A^{(3)}$. Example 1. Algorithm 9.5 (pp. 583) Householder Transformation. If $A$ is not symmetric, $A^{(n-1)}$ will not be tridiagonal but it will contain only zero entries below the lower subdiagonal (upper Hessenberg matrix).
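
A sketch of the Householder reduction to tridiagonal form (function name ad hoc). Unlike Algorithm 9.5, which works with the individual entries in place, this version builds each full reflector $P = I - 2ww^t$ explicitly and applies the similarity transformation with dense matrix products, which is simpler to read but less efficient:

import numpy as np

def householder_tridiagonalize(A):
    """Return a tridiagonal matrix similar to the symmetric matrix A,
    obtained by successive Householder similarity transformations."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]                              # entries to be reduced in column k
        norm_x = np.linalg.norm(x)
        if norm_x == 0.0:
            continue                                  # column already has the desired zeros
        alpha = -norm_x if x[0] >= 0 else norm_x      # alpha = -sign(x_1) * ||x||_2
        v = x.copy()
        v[0] -= alpha                                 # v = x - alpha * e_1
        w = np.zeros(n)
        w[k + 1:] = v / np.linalg.norm(v)             # unit vector, so P is orthogonal
        P = np.eye(n) - 2.0 * np.outer(w, w)          # Householder transformation
        A = P @ A @ P                                 # similarity step A^(k+1) = P A^(k) P
    return A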

4 The QR Method

The QR method simultaneously determines all eigenvalues of a tridiagonal symmetric matrix. The method consists in transforming the original matrix into a simpler one, but with the same eigenvalues, by means of similarity transformations. If $A$ is symmetric but not tridiagonal, it can first be transformed using Householder's method. If the eigenvalues have distinct moduli and are ordered, the method converges.

The QR method makes use of the factoring matrices $Q^{(i)}$ and $R^{(i)}$ to generate a sequence of matrices $A^{(i+1)}$. First $A = A^{(1)}$ is factored as a product of the two factoring matrices (i.e. $A^{(1)} = Q^{(1)} R^{(1)}$). Then the next member of the sequence, $A^{(2)}$, is generated by performing the product in the reverse direction (i.e. $A^{(2)} = R^{(1)} Q^{(1)}$). The process is repeated until $A^{(i+1)}$ has the desired upper triangular structure. The eigenvalues then appear along the main diagonal.

If $A$ is tridiagonal and symmetric, let $a_1, a_2, \ldots, a_n$ be the entries along the diagonal and $b_2, b_3, \ldots, b_n$ the entries along the subdiagonals. If $b_2$ or $b_n$ is zero, then $a_1$ or $a_n$ is an eigenvalue of $A$. If none of the $b_j$'s are zero, a sequence of matrices $A^{(1)}, A^{(2)}, A^{(3)}, \ldots$ is produced as follows:
$$A^{(1)} = A = Q^{(1)} R^{(1)}, \qquad A^{(2)} = R^{(1)} Q^{(1)}.$$
In general,
$$A^{(i)} = Q^{(i)} R^{(i)}, \qquad A^{(i+1)} = R^{(i)} Q^{(i)} = Q^{(i)t} A^{(i)} Q^{(i)},$$
where $Q^{(i)}$ is an orthogonal matrix and $R^{(i)}$ is an upper triangular matrix. As the process is repeated, $A^{(i+1)}$ tends to a diagonal matrix with the same eigenvalues as $A$ located along the main diagonal.

Definition of Rotation Matrix. A rotation matrix $P$ is an orthogonal matrix differing from $I$ in at most four elements, which are of the form $p_{ii} = p_{jj} = \cos\theta$ and $p_{ij} = -p_{ji} = \sin\theta$, the angle $\theta$ being chosen so that the entry $(PA)_{ij} = 0$. $n - 1$ rotation matrices are used to construct
$$R^{(1)} = P_n P_{n-1} \cdots P_2 A^{(1)}, \qquad Q^{(1)} = P_2^t P_3^t \cdots P_n^t.$$
Example 1. Algorithm 9.6 (pp. 592) QR Method.
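
Finally, a minimal sketch of the basic QR iteration (function name ad hoc). The factorization here uses np.linalg.qr instead of accumulating the rotation matrices $P_2, \ldots, P_n$ described above, and no shifting or deflation is applied, so convergence may be slow; it is meant only to illustrate the similarity iteration $A^{(i+1)} = R^{(i)} Q^{(i)}$:

import numpy as np

def qr_method(A, tol=1e-10, max_iter=1000):
    """Approximate all eigenvalues of a symmetric (ideally tridiagonal) matrix
    by the unshifted QR iteration A^(i+1) = R^(i) Q^(i)."""
    A = np.array(A, dtype=float)
    for _ in range(max_iter):
        Q, R = np.linalg.qr(A)                 # A^(i) = Q^(i) R^(i)
        A = R @ Q                              # A^(i+1), similar to A^(i)
        off = np.linalg.norm(np.tril(A, -1))   # size of the below-diagonal part
        if off < tol:
            break
    return np.diag(A)                          # eigenvalue approximations

# Typical use: reduce a symmetric matrix to tridiagonal form first, then iterate.
# eigs = qr_method(householder_tridiagonalize(A))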