Similar matrices and Jordan form




We've nearly covered the entire heart of linear algebra: once we've finished singular value decompositions, we'll have seen all of the most central topics.

$A^T A$ is positive definite

A matrix is positive definite if $x^T A x > 0$ for all $x \neq 0$. This is a very important class of matrices; positive definite matrices appear in the form $A^T A$ when computing least squares solutions. In many situations, a rectangular matrix is multiplied by its transpose to get a square matrix.

Given a symmetric positive definite matrix $A$, is its inverse also symmetric and positive definite? Yes: $A^{-1}$ is symmetric because $(A^{-1})^T = (A^T)^{-1} = A^{-1}$, and if the (positive) eigenvalues of $A$ are $\lambda_1, \lambda_2, \ldots, \lambda_n$, then the eigenvalues $1/\lambda_1, 1/\lambda_2, \ldots, 1/\lambda_n$ of $A^{-1}$ are also positive.

If $A$ and $B$ are positive definite, is $A + B$ positive definite? We don't know much about the eigenvalues of $A + B$, but we can use the properties $x^T A x > 0$ and $x^T B x > 0$ to show that $x^T (A + B) x > 0$ for $x \neq 0$, and so $A + B$ is also positive definite.

Now suppose $A$ is a rectangular ($m$ by $n$) matrix. $A$ is almost certainly not symmetric, but $A^T A$ is square and symmetric. Is $A^T A$ positive definite? We'd rather not try to find the eigenvalues or the pivots of this matrix, so we ask when $x^T A^T A x$ is positive. Simplifying $x^T A^T A x$ is just a matter of moving parentheses:

$$x^T (A^T A) x = (Ax)^T (Ax) = \|Ax\|^2 \geq 0.$$

The only remaining question is whether $Ax = 0$ is possible for a nonzero $x$. If $A$ has rank $n$ (independent columns), then $Ax = 0$ only if $x = 0$, so $x^T (A^T A) x > 0$ for all $x \neq 0$ and $A^T A$ is positive definite.

Another nice feature of positive definite matrices is that you never have to do row exchanges when row reducing: there are never 0's or unsuitably small numbers in their pivot positions.

Similar matrices $A$ and $B = M^{-1} A M$

Two square matrices $A$ and $B$ are similar if $B = M^{-1} A M$ for some invertible matrix $M$. This allows us to put matrices into families in which all the matrices in a family are similar to each other. Then each family can be represented by a diagonal (or nearly diagonal) matrix.

Distinct eigenvalues

If $A$ has a full set of eigenvectors, we can create its eigenvector matrix $S$ and write $S^{-1} A S = \Lambda$. So $A$ is similar to $\Lambda$ (choosing $M$ to be this $S$).
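Both parts of this discussion are easy to check numerically. Here is a minimal sketch (assuming NumPy; the rectangular matrix's entries are made up for illustration) verifying that $A^T A$ is positive definite when $A$ has independent columns, and that $S^{-1} A S = \Lambda$ when $A$ has a full set of eigenvectors:

    import numpy as np

    # A rectangular (3 by 2) matrix with independent columns; entries are arbitrary.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 7.0]])
    G = A.T @ A                        # square and symmetric
    print(np.linalg.eigvalsh(G))       # both eigenvalues positive: A^T A is positive definite

    # x^T (A^T A) x equals ||Ax||^2 for any x.
    x = np.array([1.0, -2.0])
    print(x @ G @ x, np.linalg.norm(A @ x) ** 2)   # the two numbers agree (115.0)

    # Diagonalization: S^{-1} A S = Lambda when A has a full set of eigenvectors.
    A2 = np.array([[2.0, 1.0],
                   [1.0, 2.0]])        # the matrix used in the next example
    lam, S = np.linalg.eig(A2)         # columns of S are eigenvectors of A2
    print(np.linalg.inv(S) @ A2 @ S)   # diagonal, with eigenvalues 3 and 1 on the diagonal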

If $A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ then $\Lambda = \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}$, and so $A$ is similar to $\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}$. But $A$ is also similar to

$$B = M^{-1} A M = \begin{bmatrix} 1 & -4 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} 1 & 4 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -4 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 9 \\ 1 & 6 \end{bmatrix} = \begin{bmatrix} -2 & -15 \\ 1 & 6 \end{bmatrix}.$$

In addition, $B$ is similar to $\Lambda$. All these similar matrices have the same eigenvalues, 3 and 1; we can check this by computing the trace (4) and determinant (3) of $A$ and $B$. Similar matrices have the same eigenvalues!

In fact, the matrices similar to $A$ are all the 2 by 2 matrices with eigenvalues 3 and 1. Some other members of this family are $\begin{bmatrix} 3 & 7 \\ 0 & 1 \end{bmatrix}$ and $\begin{bmatrix} 1 & 7 \\ 0 & 3 \end{bmatrix}$. To prove that similar matrices have the same eigenvalues, suppose $Ax = \lambda x$. We modify this equation to include $B = M^{-1} A M$:

$$A M M^{-1} x = \lambda x$$
$$M^{-1} A M M^{-1} x = \lambda M^{-1} x$$
$$B M^{-1} x = \lambda M^{-1} x.$$

The matrix $B$ has the same $\lambda$ as an eigenvalue, and $M^{-1} x$ is the corresponding eigenvector. If two matrices are similar, they have the same eigenvalues and the same number of independent eigenvectors (but probably not the same eigenvectors). When we diagonalize $A$, we're finding a diagonal matrix $\Lambda$ that is similar to $A$. If two matrices have the same $n$ distinct eigenvalues, they'll be similar to the same diagonal matrix.

Repeated eigenvalues

If two eigenvalues of $A$ are the same, it may not be possible to diagonalize $A$. Suppose $\lambda_1 = \lambda_2 = 4$. One family of matrices with eigenvalues 4 and 4 contains only the matrix $\begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$; the matrix $\begin{bmatrix} 4 & 1 \\ 0 & 4 \end{bmatrix}$ is not in this family.

There are two families of similar matrices with eigenvalues 4 and 4. The larger family includes $\begin{bmatrix} 4 & 1 \\ 0 & 4 \end{bmatrix}$; each member of this family has only one independent eigenvector. The matrix $\begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$ is the only member of the other family, because

$$M^{-1} \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix} M = 4 M^{-1} M = \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$$

for any invertible matrix $M$.
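A quick numerical check of the worked example above (a sketch, again assuming NumPy): $B = M^{-1} A M$ has the same trace, determinant, and eigenvalues as $A$, while the scalar matrix with the repeated eigenvalue is fixed by every similarity transformation.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    M = np.array([[1.0, 4.0],
                  [0.0, 1.0]])
    B = np.linalg.inv(M) @ A @ M
    print(B)                                           # [[-2, -15], [1, 6]]
    print(np.trace(A), np.trace(B))                    # both 4
    print(np.linalg.det(A), np.linalg.det(B))          # both 3
    print(np.linalg.eigvals(A), np.linalg.eigvals(B))  # both give 3 and 1

    # The matrix 4I is alone in its family: M^{-1} (4I) M = 4I for every invertible M.
    print(np.linalg.inv(M) @ (4 * np.eye(2)) @ M)      # [[4, 0], [0, 4]]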

Jordan form

Camille Jordan found a way to choose a "most diagonal" representative from each family of similar matrices; this representative is said to be in Jordan normal form. For example, both $\begin{bmatrix} 4 & 1 \\ 0 & 4 \end{bmatrix}$ and $\begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$ are in Jordan form. This form used to be the climax of linear algebra, but not any more; numerical applications rarely need it.

We can find more members of the family represented by $\begin{bmatrix} 4 & 1 \\ 0 & 4 \end{bmatrix}$ by choosing diagonal entries to get a trace of 8, then choosing off-diagonal entries to get a determinant of 16:

$$\begin{bmatrix} 5 & 1 \\ -1 & 3 \end{bmatrix}, \qquad \begin{bmatrix} 4 & 0 \\ 17 & 4 \end{bmatrix}, \qquad \begin{bmatrix} a & b \\ (8a - a^2 - 16)/b & 8 - a \end{bmatrix}.$$

(None of these are diagonalizable, because if they were they would be similar to $\begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$, and that matrix is only similar to itself.)

What about this one?

$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

Its eigenvalues are four zeros. Its rank is 2, so the dimension of its nullspace is $4 - 2 = 2$. It will have two independent eigenvectors and two "missing" eigenvectors.

When we look instead at

$$B = \begin{bmatrix} 0 & 1 & 7 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

its rank and the dimension of its nullspace are still 2, but it's not as nice as $A$. In fact $B$ is similar to $A$, which is the Jordan normal form representative of this family: $A$ has a 1 above the diagonal for every missing eigenvector, and the rest of its entries are 0.

Now consider

$$C = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$

Again it has rank 2 and its nullspace has dimension 2, and its four eigenvalues are 0. Surprisingly, it is not similar to $A$.
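These claims can be checked symbolically with SymPy, whose jordan_form method returns an invertible $P$ and a Jordan matrix $J$ with the original matrix equal to $P J P^{-1}$. A minimal sketch (assuming SymPy is available; the ordering of the blocks may differ): the Jordan form of $B$ comes out equal to $A$, while $C$ produces a different block pattern, explained below.

    from sympy import Matrix

    B = Matrix([[0, 1, 7, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
    C = Matrix([[0, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
    P, JB = B.jordan_form()   # B == P * JB * P**(-1)
    _, JC = C.jordan_form()
    print(JB)   # a 3 by 3 Jordan block plus a 1 by 1 block: the matrix A above
    print(JC)   # two 2 by 2 Jordan blocks, a different block pattern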

We can see this by breaking the matrices into their Jordan blocks:

$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$

$A$ splits into a 3 by 3 block and a 1 by 1 block, while $C$ splits into two 2 by 2 blocks. A Jordan block $J_i$ has a repeated eigenvalue $\lambda_i$ on the diagonal, zeros below the diagonal and in the upper right hand corner, and ones just above the diagonal:

$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}.$$

Two matrices may have the same eigenvalues and the same number of eigenvectors, but if their Jordan blocks are different sizes, those matrices cannot be similar.

Jordan's theorem says that every square matrix $A$ is similar to a Jordan matrix $J$, with Jordan blocks on the diagonal:

$$J = \begin{bmatrix} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_d \end{bmatrix}.$$

In a Jordan matrix, the eigenvalues are on the diagonal and there may be ones above the diagonal; the rest of the entries are zero. The number of blocks is the number of eigenvectors: there is one eigenvector per block.

To summarize: if $A$ has $n$ distinct eigenvalues, it is diagonalizable and its Jordan matrix is the diagonal matrix $J = \Lambda$. If $A$ has repeated eigenvalues and missing eigenvectors, then its Jordan matrix will have $n - d$ ones above the diagonal, where $d$ is the number of Jordan blocks (equivalently, the number of independent eigenvectors).

We have not learned to compute the Jordan matrix of a matrix which is missing eigenvectors, but we do know how to diagonalize a matrix which has $n$ distinct eigenvalues.
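There is also a quick computational way to confirm that $A$ and $C$ above cannot be similar (a standard argument, though not one made in the lecture): if $C = M^{-1} A M$ then $C^2 = M^{-1} A^2 M$, so similar matrices have powers of equal rank. A sketch, assuming NumPy:

    import numpy as np

    A = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
    C = np.array([[0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
    # Same rank, same eigenvalues (four zeros), same number of eigenvectors...
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(C))          # 2 2
    # ...but A's 3 by 3 block survives squaring while C's 2 by 2 blocks do not.
    print(np.linalg.matrix_rank(A @ A), np.linalg.matrix_rank(C @ C))  # 1 0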

MIT OpenCourseWare
http://ocw.mit.edu

18.06SC Linear Algebra
Fall 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.