Eigenvalues & Eigenvectors

Example. Suppose $A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$. Then $A\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -x \\ y \end{pmatrix}$. So, geometrically, multiplying a vector in $\mathbb{R}^2$ by the matrix A results in a vector which is a reflection of the given vector about the y-axis. We observe that $A\begin{pmatrix} x \\ 0 \end{pmatrix} = \begin{pmatrix} -x \\ 0 \end{pmatrix} = -1\begin{pmatrix} x \\ 0 \end{pmatrix}$ and $A\begin{pmatrix} 0 \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ y \end{pmatrix} = 1\begin{pmatrix} 0 \\ y \end{pmatrix}$. Thus, vectors on the coordinate axes get mapped to vectors on the same coordinate axis. That is,

for vectors on the coordinate axes we see that $\mathbf{x}$ and $A\mathbf{x}$ are parallel or, equivalently, for vectors on the coordinate axes there exists a scalar $\lambda$ so that $A\mathbf{x} = \lambda\mathbf{x}$. In particular, $\lambda = -1$ for vectors on the x-axis and $\lambda = 1$ for vectors on the y-axis. Given the geometric properties of A, we see that $A\mathbf{x} = \lambda\mathbf{x}$ has nonzero solutions only when $\mathbf{x}$ lies on one of the coordinate axes.

Definition. Let A be an $n \times n$ matrix. We call a scalar $\lambda$ an eigenvalue of A provided there exists a nonzero n-vector x so that $A\mathbf{x} = \lambda\mathbf{x}$. In this case, we call the n-vector x an eigenvector of A corresponding to $\lambda$.

We note that $A\mathbf{x} = \lambda\mathbf{x}$ is true for every $\lambda$ in the case that $\mathbf{x} = \mathbf{0}$ and, hence, that case is not particularly interesting; we do, however, allow for the possibility that $\lambda = 0$. Eigenvalues are also called proper values ("eigen" is German for "own" or "proper"), characteristic values, or latent values. Eigenvalues were initially used by Leonhard Euler in 1743 in connection with the solution of an nth-order linear differential equation with constant coefficients. Geometrically, the equation $A\mathbf{x} = \lambda\mathbf{x}$ says that the n-vectors $A\mathbf{x}$ and $\mathbf{x}$ are parallel.
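To make the definition concrete, here is a minimal NumPy sketch, assuming the y-axis reflection matrix described above; it checks that vectors on the coordinate axes satisfy $A\mathbf{x} = \lambda\mathbf{x}$ with $\lambda = -1$ and $\lambda = 1$.

```python
import numpy as np

# Assumed matrix: reflection about the y-axis in R^2, consistent with the
# geometric description above.
A = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])

x_vec = np.array([3.0, 0.0])   # a vector on the x-axis
y_vec = np.array([0.0, 5.0])   # a vector on the y-axis

print(A @ x_vec)   # [-3.  0.]  ->  A x = (-1) x, so lambda = -1
print(A @ y_vec)   # [ 0.  5.]  ->  A y = ( 1) y, so lambda =  1
```

Any other vector is knocked off its own line by the reflection, which is why only vectors on the coordinate axes are eigenvectors of this particular A.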

Example. For the matrix A of this example, direct computation shows that the given vector is an eigenvector for A corresponding to the stated eigenvalue, since it satisfies $A\mathbf{x} = \lambda\mathbf{x}$. In fact, by direct computation, any nonzero scalar multiple of that vector is an eigenvector for A corresponding to the same eigenvalue. We also see that the second given vector is an eigenvector for A corresponding to the other eigenvalue, since it too satisfies $A\mathbf{x} = \lambda\mathbf{x}$.

Suppose A is an $n \times n$ matrix and $\lambda$ is an eigenvalue of A. If x is an eigenvector of A corresponding to $\lambda$ and k is any scalar, then $A(k\mathbf{x}) = k(A\mathbf{x}) = k(\lambda\mathbf{x}) = \lambda(k\mathbf{x})$. So, any scalar multiple of an eigenvector is also an eigenvector for the given eigenvalue. Now, if $\mathbf{x}_1$ and $\mathbf{x}_2$ are both eigenvectors of A corresponding to $\lambda$, then $A(\mathbf{x}_1 + \mathbf{x}_2) = A\mathbf{x}_1 + A\mathbf{x}_2 = \lambda\mathbf{x}_1 + \lambda\mathbf{x}_2 = \lambda(\mathbf{x}_1 + \mathbf{x}_2)$. Thus, the set of all eigenvectors of A corresponding to a given eigenvalue is closed under scalar multiplication and vector addition. This proves the following result:

Theorem. If A is an $n \times n$ matrix and $\lambda$ is an eigenvalue of A, then the set of all eigenvectors of A corresponding to $\lambda$, together with the zero vector, forms a subspace of $\mathbb{R}^n$. We call this subspace the eigenspace of $\lambda$.

Example. Find the eigenvalues and the corresponding eigenspaces for the matrix A of this example.

Solution. We first seek all scalars $\lambda$ so that $A\mathbf{x} = \lambda\mathbf{x}$, that is, so that $(A - \lambda I)\mathbf{x} = \mathbf{0}$ has a nontrivial solution. The above has nontrivial solutions precisely when $A - \lambda I$ is singular; that is, the above matrix equation has nontrivial solutions when $\det(A - \lambda I) = 0$.

Thus, the eigenvalues for A are the roots of this equation. Since $A\mathbf{x} = \lambda\mathbf{x}$ is equivalent to $(A - \lambda I)\mathbf{x} = \mathbf{0}$, the eigenspace of A corresponding to the first eigenvalue $\lambda_1$ is the null space of $A - \lambda_1 I$. Row reducing $A - \lambda_1 I$, we see that its null space is spanned by a single vector. In a similar manner, the eigenspace for the second eigenvalue $\lambda_2$ is the null space of $A - \lambda_2 I$, which is likewise given by the span of a single vector.
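The same recipe (eigenvalues from the singularity condition, eigenspaces as null spaces) can be mirrored symbolically. The SymPy sketch below uses a hypothetical 2×2 matrix, since the matrix of the example is not reproduced here.

```python
from sympy import Matrix, eye

# Hypothetical 2x2 matrix (the matrix of the example is not reproduced here).
A = Matrix([[4, 2],
            [1, 3]])

# The eigenvalues are the roots of det(A - lambda*I) = 0 ...
print(A.eigenvals())                      # {2: 1, 5: 1}

# ... and each eigenspace is the null space of A - lambda*I.
for ev in A.eigenvals():
    basis = (A - ev * eye(2)).nullspace()
    print(ev, [list(v) for v in basis])   # a basis vector for each eigenspace
```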

Finding Eigenvalues. Let A be an $n \times n$ matrix. Then

$A\mathbf{x} = \lambda\mathbf{x}$ for some nonzero x
$\iff (A - \lambda I)\mathbf{x} = \mathbf{0}$ has a nontrivial solution
$\iff A - \lambda I$ is singular.

It follows that $\lambda$ is an eigenvalue of A if and only if $A - \lambda I$ is singular, if and only if $\det(A - \lambda I) = 0$.

Theorem. Let A be an $n \times n$ matrix. Then $\lambda$ is an eigenvalue of A if and only if $\det(A - \lambda I) = 0$. Further, $\det(A - \lambda I)$ is a polynomial in $\lambda$ of degree n, called the characteristic polynomial of A. We call $\det(A - \lambda I) = 0$ the characteristic equation of A.
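As an illustration of the theorem, here is a short SymPy sketch that computes the characteristic polynomial and its roots for a hypothetical 2×2 matrix (chosen only to show the recipe, not taken from the notes).

```python
from sympy import Matrix, symbols, eye, factor

lam = symbols('lambda')

# Hypothetical 2x2 matrix; the recipe, not these particular numbers, is the point.
A = Matrix([[1, 2],
            [4, 3]])

# Characteristic polynomial p(lambda) = det(A - lambda*I), a polynomial of degree n.
p = (A - lam * eye(2)).det()
print(factor(p))          # e.g. (lambda - 5)*(lambda + 1)

# Its roots (the solutions of the characteristic equation) are the eigenvalues of A.
print(A.eigenvals())      # {5: 1, -1: 1}
```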

Examples.

1. Find the eigenvalues and the corresponding eigenspaces of the matrix A of this example.

Solution. Here the characteristic polynomial has two distinct roots, one of which is $\lambda_1 = 0$, and so the eigenvalues are these two roots. The eigenspace corresponding to $\lambda_1 = 0$ is just the null space of the given matrix A itself, which is spanned by a single vector. The eigenspace corresponding to the second eigenvalue $\lambda_2$ is the null space of $A - \lambda_2 I$, which is likewise spanned by a single vector.

Note: Here we have two distinct eigenvalues and two linearly independent eigenvectors (as neither eigenvector is a multiple of the other). We also see that the determinant of A equals the product of the eigenvalues.

2. Find the eigenvalues and the corresponding eigenspaces of the matrix A of this example.

Solution. Here the characteristic polynomial has no real roots, and so the eigenvalues are a pair of complex conjugates. (This example illustrates that a matrix with real entries may have complex eigenvalues.) To find the eigenspace corresponding to the first eigenvalue $\lambda$, we must solve $(A - \lambda I)\mathbf{x} = \mathbf{0}$. As always, we set up an appropriate augmented matrix and row reduce, recalling how to simplify entries involving complex numbers. Hence, the components of x are related by a single equation, and so the eigenvectors are the vectors $t\mathbf{v}$ for all scalars t, where v is a fixed complex vector.

To find the eigenspace corresponding to the second eigenvalue $\bar{\lambda}$, we must solve $(A - \bar{\lambda} I)\mathbf{x} = \mathbf{0}$. We again set up an appropriate augmented matrix and row reduce. Hence, the components of x are again related by a single equation, and so the eigenvectors are the vectors $t\bar{\mathbf{v}}$ for all scalars t.

Note: Again, we have two distinct eigenvalues with linearly independent eigenvectors. We also see that the determinant of A equals the product of the eigenvalues.

Fact: Let A be an $n \times n$ matrix with real entries. If $\lambda$ is an eigenvalue of A with associated eigenvector v, then $\bar{\lambda}$ is also an eigenvalue of A with associated eigenvector $\bar{\mathbf{v}}$.
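A short NumPy check of this fact, using an assumed rotation matrix rather than the matrix from the example above:

```python
import numpy as np

# Assumed real 2x2 matrix with no real eigenvalues (a 90-degree rotation).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                          # [0.+1.j 0.-1.j] -- a complex conjugate pair

# If (lambda, v) is an eigenpair of a real matrix, so is (conj(lambda), conj(v)).
lam, v = vals[0], vecs[:, 0]
print(np.allclose(A @ v.conj(), lam.conjugate() * v.conj()))   # True
```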

3. Find the eigenvalues and the corresponding eigenspaces of the $3 \times 3$ matrix A of this example.

Solution. Here the characteristic polynomial is a cubic, which we factor with the help of the following.

Recall (Rational Root Theorem): Let $p(x) = a_n x^n + \dots + a_1 x + a_0$ be a polynomial of degree n with integer coefficients. If r and s are relatively prime integers and $p(r/s) = 0$, then r divides $a_0$ and s divides $a_n$.

For the first eigenvalue, we obtain a homogeneous system that can be rewritten in several equivalent forms. Solving it, the eigenspace corresponding to this eigenvalue is given by the span of the resulting basis vectors.

For the second eigenvalue, we obtain another homogeneous system. Row reducing, we solve for the free variables, and so the eigenspace corresponding to this eigenvalue is given by the span of the resulting basis vectors.
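A SymPy sketch mirroring this workflow (expand the cubic characteristic polynomial, factor it, then compute each eigenspace), with a hypothetical symmetric 3×3 matrix standing in for the one in the example:

```python
from sympy import Matrix, symbols, eye, factor

lam = symbols('lambda')

# Hypothetical 3x3 matrix with one simple and one repeated eigenvalue.
A = Matrix([[2, 1, 1],
            [1, 2, 1],
            [1, 1, 2]])

p = (A - lam * eye(3)).det()
print(factor(p))                     # e.g. -(lambda - 4)*(lambda - 1)**2

for ev, alg in A.eigenvals().items():
    basis = (A - ev * eye(3)).nullspace()
    print(ev, alg, len(basis))       # eigenvalue, algebraic mult., eigenspace dim.
```

For this hypothetical matrix the repeated eigenvalue has a two-dimensional eigenspace, so two distinct eigenvalues still yield three linearly independent eigenvectors, as in the example.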

Note: In this example we have two distinct eigenvalues with three linearly independent eigenvectors. We see that the determinant of A equals the product of the eigenvalues, counted with multiplicity.

Examples (details left to the student).

1. Find the eigenvalues and corresponding eigenspaces for the matrix A of this example.

Solution. Here the characteristic polynomial has a single repeated root. The eigenspace corresponding to the lone eigenvalue is given by the span of a single vector.

Note: Here we have one eigenvalue and only one linearly independent eigenvector. Once again the determinant of A equals the product of the eigenvalues, counted with multiplicity.
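A minimal SymPy illustration of this last situation (one repeated eigenvalue whose eigenspace is only one-dimensional), with a hypothetical 2×2 matrix:

```python
from sympy import Matrix, eye

# Hypothetical 2x2 matrix with a single repeated eigenvalue and only a
# one-dimensional eigenspace.
A = Matrix([[5, 1],
            [0, 5]])

print(A.eigenvals())                 # {5: 2} -- one eigenvalue, repeated twice
print((A - 5 * eye(2)).nullspace())  # a single basis vector: Matrix([[1], [0]])
```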

2. Find the eigenvalues and the corresponding eigenspaces for the matrix A of this example.

Solution. Here the eigenvalues are 2 (repeated) and 3. The eigenspace corresponding to the eigenvalue 2 is given by the span of two linearly independent vectors, and the eigenspace corresponding to the eigenvalue 3 is given by the span of a single vector.

Note: Here we have two distinct eigenvalues and three linearly independent eigenvectors. Yet again the determinant of A equals the product of the eigenvalues, counted with multiplicity.

Theorem. If A is an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$ (listed according to multiplicity), then $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$.

We note that in the above example the eigenvalues for the matrix are (formally) 2, 2, 2, and 3, the elements along the main diagonal. This is no accident.

Theorem. If A is an upper (or lower) triangular matrix, then the eigenvalues of A are the entries on its main diagonal.
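A quick numerical illustration of both theorems, using an assumed upper-triangular 4×4 matrix with diagonal entries 2, 2, 2, 3 (not the matrix from the notes):

```python
import numpy as np

# Assumed upper-triangular 4x4 matrix with diagonal entries 2, 2, 2, 3.
A = np.array([[2.0, 1.0, 4.0, 0.0],
              [0.0, 2.0, 3.0, 1.0],
              [0.0, 0.0, 2.0, 5.0],
              [0.0, 0.0, 0.0, 3.0]])

vals = np.linalg.eigvals(A)
print(np.sort(vals))                    # [2. 2. 2. 3.] -- the diagonal entries

# The product of the eigenvalues matches det(A) (here 2*2*2*3 = 24).
print(np.prod(vals), np.linalg.det(A))  # both approximately 24.0
```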

Definition. Let A be an $n \times n$ matrix and let $\det(A - \lambda I) = (\lambda_1 - \lambda)^{m_1}(\lambda_2 - \lambda)^{m_2} \cdots (\lambda_k - \lambda)^{m_k}$.

(1) The numbers $m_1, m_2, \dots, m_k$ are the algebraic multiplicities of the eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_k$, respectively.

(2) The geometric multiplicity of the eigenvalue $\lambda_i$ is the dimension of the null space of $A - \lambda_i I$.

Examples.

1. The table below gives the algebraic and geometric multiplicity for each eigenvalue of the matrix of this example:

Eigenvalue   Algebraic Multiplicity   Geometric Multiplicity
0            1                        1
4            1                        1

2. The table below gives the algebraic and geometric multiplicity for each eigenvalue of the matrix of this example:

Eigenvalue   Algebraic Multiplicity   Geometric Multiplicity
1            2                        2
10           1                        1

3. The table below gives the algebraic and geometric multiplicity for each eigenvalue of the matrix of this example:

Eigenvalue   Algebraic Multiplicity   Geometric Multiplicity
1            3                        1

4. The table below gives the algebraic and geometric multiplicity for each eigenvalue of the matrix of this example:

Eigenvalue   Algebraic Multiplicity   Geometric Multiplicity
2            3                        2
3            1                        1

The above examples suggest the following theorem:

Theorem. Let A be an $n \times n$ matrix with eigenvalue $\lambda$. Then the geometric multiplicity of $\lambda$ is less than or equal to the algebraic multiplicity of $\lambda$.
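A small SymPy sketch of this comparison, using a hypothetical 3×3 matrix whose repeated eigenvalue has a deficient eigenspace:

```python
from sympy import Matrix, eye

# Hypothetical 3x3 matrix: the eigenvalue 2 has algebraic multiplicity 2 but a
# one-dimensional eigenspace, while the eigenvalue 5 has both multiplicities 1.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])

for ev, alg in A.eigenvals().items():          # {2: 2, 5: 1}
    geo = len((A - ev * eye(3)).nullspace())   # dimension of the eigenspace
    print(ev, alg, geo)                        # geometric mult. <= algebraic mult.
```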