The Characteristic Polynomial


Physics 116A, Winter 2011

1. Coefficients of the characteristic polynomial

Consider the eigenvalue problem for an $n \times n$ matrix $A$,

$$A\vec{v} = \lambda\vec{v}, \qquad \vec{v} \neq 0. \tag{1}$$

The solution to this problem consists of identifying all possible values of $\lambda$ (called the eigenvalues), and the corresponding non-zero vectors $\vec{v}$ (called the eigenvectors) that satisfy eq. (1). Noting that $I\vec{v} = \vec{v}$, one can rewrite eq. (1) as

$$(A - \lambda I)\vec{v} = 0. \tag{2}$$

This is a set of $n$ homogeneous equations. If $A - \lambda I$ is an invertible matrix, then one can simply multiply both sides of eq. (2) by $(A - \lambda I)^{-1}$ to conclude that $\vec{v} = 0$ is the unique solution. By definition, the zero vector is not an eigenvector. Thus, in order to find non-trivial solutions to eq. (2), one must demand that $A - \lambda I$ is not invertible, or equivalently,

$$p(\lambda) \equiv \det(A - \lambda I) = 0. \tag{3}$$

Eq. (3) is called the characteristic equation. Evaluating the determinant yields an $n$th order polynomial in $\lambda$, called the characteristic polynomial, which we have denoted above by $p(\lambda)$. The determinant in eq. (3) can be evaluated by the usual methods. It takes the form

$$p(\lambda) = \det(A - \lambda I) =
\begin{vmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda
\end{vmatrix}
= (-1)^n \bigl[\lambda^n + c_1\lambda^{n-1} + c_2\lambda^{n-2} + \cdots + c_{n-1}\lambda + c_n\bigr], \tag{4}$$

where $A = [a_{ij}]$. The coefficients $c_i$ are to be computed by evaluating the determinant. Note that we have identified the coefficient of $\lambda^n$ to be $(-1)^n$. This arises from the one term in the determinant that is given by the product of the diagonal elements. It is easy to show that this is the only possible source of the $\lambda^n$ term in the characteristic polynomial. It is then convenient to factor out the $(-1)^n$ before defining the coefficients $c_i$.
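For $n = 2$, the determinant in eq. (4) can be expanded by hand: $\det(A - \lambda I) = (a_{11}-\lambda)(a_{22}-\lambda) - a_{12}a_{21} = \lambda^2 - (a_{11}+a_{22})\lambda + (a_{11}a_{22} - a_{12}a_{21})$. A minimal pure-Python sketch of this expansion (the matrix entries are arbitrary illustrative values, not taken from these notes):

```python
# Characteristic polynomial of a 2x2 matrix A = [[a, b], [c, d]]:
# p(lam) = det(A - lam*I) = (a - lam)(d - lam) - b*c
#        = (-1)^2 [lam^2 + c1*lam + c2].

def charpoly_2x2(A):
    """Return (c1, c2) such that det(A - lam*I) = lam**2 + c1*lam + c2."""
    (a, b), (c, d) = A
    return (-(a + d), a * d - b * c)

A = [[2.0, 1.0], [4.0, 3.0]]
c1, c2 = charpoly_2x2(A)
print(c1, c2)   # -> -5.0 2.0

# Spot-check: p(lam) evaluated both ways must agree at any sample lam.
lam = 0.7
direct = (A[0][0] - lam) * (A[1][1] - lam) - A[0][1] * A[1][0]
assert abs(direct - (lam**2 + c1 * lam + c2)) < 1e-12
```

For this matrix, $c_1 = -5$ is minus the sum of the diagonal elements and $c_2 = 2$ is the determinant, anticipating the identifications worked out below.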

Two of the coefficients are easy to obtain. Note that eq. (4) is valid for any value of $\lambda$. If we set $\lambda = 0$, then eq. (4) yields

$$p(0) = \det A = (-1)^n c_n.$$

Noting that $(-1)^n(-1)^n = (-1)^{2n} = +1$ for any integer $n$, it follows that

$$c_n = (-1)^n \det A.$$

One can also easily work out $c_1$, by evaluating the determinant in eq. (4) using the cofactor expansion.* This yields a characteristic polynomial of the form

$$p(\lambda) = \det(A - \lambda I) = (a_{11} - \lambda)(a_{22} - \lambda)\cdots(a_{nn} - \lambda) + c_2'\lambda^{n-2} + c_3'\lambda^{n-3} + \cdots + c_n'. \tag{5}$$

The term $(a_{11}-\lambda)(a_{22}-\lambda)\cdots(a_{nn}-\lambda)$ is the product of the diagonal elements of $A - \lambda I$. It is easy to see that none of the remaining terms that arise in the cofactor expansion [denoted by $c_2'\lambda^{n-2} + c_3'\lambda^{n-3} + \cdots + c_n'$ in eq. (5)] are proportional to $\lambda^n$ or $\lambda^{n-1}$. Moreover,

$$(a_{11}-\lambda)(a_{22}-\lambda)\cdots(a_{nn}-\lambda) = (-\lambda)^n + (-\lambda)^{n-1}\bigl[a_{11} + a_{22} + \cdots + a_{nn}\bigr] + \cdots = (-1)^n\bigl[\lambda^n - \lambda^{n-1}(\operatorname{Tr} A) + \cdots\bigr],$$

where $\cdots$ contains terms that are proportional to $\lambda^p$ with $p \le n-2$. This means that the terms in the characteristic polynomial that are proportional to $\lambda^n$ and $\lambda^{n-1}$ arise solely from the term $(a_{11}-\lambda)(a_{22}-\lambda)\cdots(a_{nn}-\lambda)$. The quantity multiplying $(-1)^n\lambda^{n-1}$ involves the trace of $A$, which is defined to be equal to the sum of the diagonal elements of $A$. Comparing eqs. (4) and (5), it follows that

$$c_1 = -\operatorname{Tr} A.$$

Expressions for $c_2, c_3, \ldots, c_{n-1}$ are more complicated. For example, eqs. (4) and (5) yield

$$c_2 = \sum_{\substack{i,j=1 \\ i<j}}^{n} a_{ii}a_{jj} + c_2'.$$

For the moment, I will not explicitly evaluate $c_2, c_3, \ldots, c_{n-1}$. In the Appendix to these notes, I will provide explicit expressions for these coefficients in terms of traces of powers of $A$. It follows that the general form for the characteristic polynomial is

$$p(\lambda) = \det(A - \lambda I) = (-1)^n\bigl[\lambda^n - \lambda^{n-1}\operatorname{Tr} A + c_2\lambda^{n-2} + \cdots + c_{n-1}\lambda + (-1)^n\det A\bigr]. \tag{6}$$

*In computing the cofactor of the $ij$ element, one crosses out row $i$ and column $j$ of the $ij$ element and evaluates the determinant of the remaining matrix [multiplied by the sign factor $(-1)^{i+j}$]. Except for the product of diagonal elements, there is always one factor of $\lambda$ in each of the rows and columns that is crossed out. This implies that the maximal power one can achieve outside of the product of diagonal elements is $\lambda^{n-2}$.
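The identifications $c_1 = -\operatorname{Tr} A$ and $p(0) = \det A$ can be checked mechanically for a $3 \times 3$ example. The sketch below (pure Python; the helper names and the sample matrix are mine, chosen for illustration) represents each entry of $A - \lambda I$ as a polynomial in $\lambda$ and expands the determinant by the rule of Sarrus:

```python
# Polynomials in lam are coefficient lists [a0, a1, a2, ...].

def pmul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def charpoly_3x3(A):
    """Coefficients [p0, p1, p2, p3] of p(lam) = det(A - lam*I), by Sarrus."""
    # Entry (i, j) of A - lam*I as a polynomial in lam.
    M = [[[A[i][j]] if i != j else [A[i][j], -1.0] for j in range(3)]
         for i in range(3)]
    terms = [
        pmul(pmul(M[0][0], M[1][1]), M[2][2]),
        pmul(pmul(M[0][1], M[1][2]), M[2][0]),
        pmul(pmul(M[0][2], M[1][0]), M[2][1]),
        pmul([-1.0], pmul(pmul(M[0][2], M[1][1]), M[2][0])),
        pmul([-1.0], pmul(pmul(M[0][0], M[1][2]), M[2][1])),
        pmul([-1.0], pmul(pmul(M[0][1], M[1][0]), M[2][2])),
    ]
    p = [0.0]
    for t in terms:
        p = padd(p, t)
    return p

A = [[1.0, 2.0, 0.0], [0.0, 3.0, 1.0], [2.0, 0.0, 1.0]]
p0, p1, p2, p3 = charpoly_3x3(A)
# For n = 3: leading coefficient p3 = (-1)^3 = -1; the lam^2 coefficient
# p2 = -c1 = Tr A; the constant term p0 = p(0) = det A.
print(p3, p2, p0)   # -> -1.0 5.0 7.0
```

Here $\operatorname{Tr} A = 5$ and $\det A = 7$, matching the $\lambda^2$ coefficient and the constant term of the expanded determinant.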

By the fundamental theorem of algebra, an $n$th order polynomial equation of the form $p(\lambda) = 0$ possesses precisely $n$ roots. Thus, the solution to $p(\lambda) = 0$ has $n$ potentially complex roots, which are denoted by $\lambda_1, \lambda_2, \ldots, \lambda_n$. These are the eigenvalues of $A$. If a root is non-degenerate (i.e., only one root has a particular numerical value), then we say that the root has multiplicity one; it is called a simple root. If a root is degenerate (i.e., more than one root has a particular numerical value), then we say that the root has multiplicity $p$, where $p$ is the number of roots with that same value; such a root is called a multiple root. For example, a double root (as its name implies) arises when precisely two of the roots of $p(\lambda)$ are equal. In the counting of the $n$ roots of $p(\lambda)$, multiple roots are counted according to their multiplicity.

In principle, one can always factor a polynomial in terms of its roots.* Thus, eq. (4) implies that

$$p(\lambda) = (-1)^n(\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n),$$

where multiple roots appear according to their multiplicity. Multiplying out the $n$ factors above yields

$$p(\lambda) = (-1)^n\Biggl[\lambda^n - \lambda^{n-1}\sum_{i=1}^n \lambda_i + \lambda^{n-2}\sum_{\substack{i,j=1\\ i<j}}^n \lambda_i\lambda_j + \cdots + (-1)^k\lambda^{n-k}\underbrace{\sum_{\substack{i_1,i_2,\ldots,i_k=1\\ i_1<i_2<\cdots<i_k}}^n \lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}}_{k~\text{factors}} + \cdots + (-1)^n\lambda_1\lambda_2\cdots\lambda_n\Biggr]. \tag{7}$$

Comparing with eq. (6), it immediately follows that

$$\operatorname{Tr} A = \sum_{i=1}^n \lambda_i = \lambda_1 + \lambda_2 + \cdots + \lambda_n, \qquad \det A = \lambda_1\lambda_2\lambda_3\cdots\lambda_n.$$

The coefficients $c_2, c_3, \ldots, c_{n-1}$ are also determined by the eigenvalues. In general,

$$c_k = (-1)^k \underbrace{\sum_{\substack{i_1,i_2,\ldots,i_k=1\\ i_1<i_2<\cdots<i_k}}^n \lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}}_{k~\text{factors}}, \qquad k = 1, 2, \ldots, n. \tag{8}$$

*I say "in principle," since in practice it may not be possible to explicitly determine the roots algebraically. According to a famous theorem of algebra, no general formula exists (like the famous solution to the quadratic equation) for an arbitrary polynomial of fifth order or above. Of course, one can always determine the roots numerically.
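The trace and determinant relations above are easy to spot-check on a $2 \times 2$ matrix, whose eigenvalues follow from the quadratic formula applied to $\lambda^2 - (\operatorname{Tr} A)\lambda + \det A = 0$. A small sketch (illustrative entries, chosen so that both roots are real and simple):

```python
import math

# Eigenvalues of a 2x2 matrix from p(lam) = lam^2 - (Tr A) lam + det A = 0.
A = [[4.0, 1.0], [2.0, 3.0]]
tr = A[0][0] + A[1][1]                       # Tr A = 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # det A = 10
disc = tr * tr - 4.0 * det                   # 9 > 0: two real simple roots
lam1 = (tr + math.sqrt(disc)) / 2.0
lam2 = (tr - math.sqrt(disc)) / 2.0
print(lam1, lam2)                            # -> 5.0 2.0
assert abs((lam1 + lam2) - tr) < 1e-12       # Tr A  = lam1 + lam2
assert abs(lam1 * lam2 - det) < 1e-12        # det A = lam1 * lam2
```

The eigenvalue sum reproduces the trace and the product reproduces the determinant, as required.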

2. The Cayley–Hamilton Theorem

Theorem: Given an $n \times n$ matrix $A$ whose characteristic polynomial is defined by

$$p(\lambda) = \det(A - \lambda I) = (-1)^n\bigl[\lambda^n + c_1\lambda^{n-1} + c_2\lambda^{n-2} + \cdots + c_{n-1}\lambda + c_n\bigr],$$

it follows that

$$p(A) = (-1)^n\bigl[A^n + c_1A^{n-1} + c_2A^{n-2} + \cdots + c_{n-1}A + c_nI\bigr] = 0, \tag{9}$$

where $A^0 \equiv I$ is the $n \times n$ identity matrix and $0$ is the $n \times n$ zero matrix.*

False proof: The characteristic polynomial is $p(\lambda) = \det(A - \lambda I)$. Setting $\lambda = A$, we get

$$p(A) = \det(A - AI) = \det(A - A) = \det(0) = 0.$$

This "proof" does not make any sense. In particular, $p(A)$ is an $n \times n$ matrix, but in this false proof we obtained $p(A) = 0$, where $0$ is a number.

Correct proof: Recall that the classical adjoint of $M$, denoted by $\operatorname{adj} M$, is the transpose of the matrix of cofactors. In class, we showed that the cofactor expansion of the determinant is equivalent to the equation

$$M \operatorname{adj} M = I \det M. \tag{10}$$

In particular, setting $M = A - \lambda I$, it follows that**

$$(A - \lambda I)\operatorname{adj}(A - \lambda I) = p(\lambda)I, \tag{11}$$

where $p(\lambda) = \det(A - \lambda I)$ is the characteristic polynomial. Since $p(\lambda)$ is an $n$th-order polynomial, it follows from eq. (11) that $\operatorname{adj}(A - \lambda I)$ is a matrix polynomial of order $n-1$. Thus, we can write

$$\operatorname{adj}(A - \lambda I) = B_0 + B_1\lambda + B_2\lambda^2 + \cdots + B_{n-1}\lambda^{n-1},$$

where $B_0, B_1, \ldots, B_{n-1}$ are $n \times n$ matrices (whose explicit forms are not required in these notes). Inserting the above result into eq. (11) and using eq. (4), one obtains

$$(A - \lambda I)(B_0 + B_1\lambda + B_2\lambda^2 + \cdots + B_{n-1}\lambda^{n-1}) = (-1)^n\bigl[\lambda^n + c_1\lambda^{n-1} + \cdots + c_{n-1}\lambda + c_n\bigr]I. \tag{12}$$

Eq. (12) is true for any value of $\lambda$. Consequently, the coefficient of $\lambda^k$ on the left-hand side of eq. (12) must equal the coefficient of $\lambda^k$ on the right-hand side of eq. (12), for $k = 0, 1, 2, \ldots, n$. This yields the following $n+1$ equations:

$$AB_0 = (-1)^n c_n I, \tag{13}$$
$$-B_{k-1} + AB_k = (-1)^n c_{n-k} I, \qquad k = 1, 2, \ldots, n-1, \tag{14}$$
$$-B_{n-1} = (-1)^n I. \tag{15}$$

*In the expression for $p(\lambda)$, we interpret $c_n$ to mean $c_n\lambda^0$. Thus, when evaluating $p(A)$, the coefficient $c_n$ multiplies $A^0 \equiv I$.

**If $\det M \neq 0$, then we may divide both sides of eq. (10) by the determinant and identify $M^{-1} = \operatorname{adj} M/\det M$, since the inverse satisfies $MM^{-1} = I$.
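Before completing the proof, the claim of eq. (9) itself can be spot-checked numerically. For $n = 2$ the bracket in eq. (9) reads $A^2 + c_1 A + c_2 I$ with $c_1 = -\operatorname{Tr} A$ and $c_2 = \det A$; a minimal sketch with illustrative entries:

```python
# Spot-check p(A) = (-1)^2 [A^2 + c1*A + c2*I] = 0 for a 2x2 matrix,
# with c1 = -Tr A and c2 = det A.
A = [[1.0, 2.0], [3.0, 4.0]]
c1 = -(A[0][0] + A[1][1])                     # -Tr A = -5
c2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # det A = -2
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
I = [[1.0, 0.0], [0.0, 1.0]]
pA = [[A2[i][j] + c1 * A[i][j] + c2 * I[i][j] for j in range(2)]
      for i in range(2)]
print(pA)   # -> [[0.0, 0.0], [0.0, 0.0]], the 2x2 zero matrix
```

Every entry of $p(A)$ vanishes, as the theorem asserts; of course, a numerical check for one matrix is no substitute for the proof that follows.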

Using eqs. (13)–(15), we can evaluate the matrix polynomial $p(A)$. Replacing $(-1)^n c_n I$, $(-1)^n c_{n-k} I$ and $(-1)^n I$ by the left-hand sides of eqs. (13)–(15), and left-multiplying the $\lambda^k$ term by $A^k$,

$$\begin{aligned}
p(A) &= (-1)^n\bigl[A^n + c_1A^{n-1} + c_2A^{n-2} + \cdots + c_{n-1}A + c_nI\bigr] \\
&= AB_0 + A(-B_0 + AB_1) + A^2(-B_1 + AB_2) + \cdots + A^{n-1}(-B_{n-2} + AB_{n-1}) - A^nB_{n-1} \\
&= A(B_0 - B_0) + A^2(B_1 - B_1) + A^3(B_2 - B_2) + \cdots + A^{n-1}(B_{n-2} - B_{n-2}) + A^n(B_{n-1} - B_{n-1}) \\
&= 0,
\end{aligned}$$

which completes the proof of the Cayley–Hamilton theorem.

It is instructive to illustrate the Cayley–Hamilton theorem for $2 \times 2$ matrices. In this case,

$$p(\lambda) = \lambda^2 - \lambda\operatorname{Tr} A + \det A.$$

Hence, by the Cayley–Hamilton theorem,

$$p(A) = A^2 - A\operatorname{Tr} A + I\det A = 0.$$

Let us take the trace of this equation. Since $\operatorname{Tr} I = 2$ for the $2 \times 2$ identity matrix,

$$\operatorname{Tr}(A^2) - (\operatorname{Tr} A)^2 + 2\det A = 0.$$

It follows that

$$\det A = \tfrac{1}{2}\bigl[(\operatorname{Tr} A)^2 - \operatorname{Tr}(A^2)\bigr],$$

for any $2 \times 2$ matrix. You can easily verify this formula for any $2 \times 2$ matrix.

Appendix: Identifying the coefficients of the characteristic polynomial in terms of traces

The characteristic polynomial of an $n \times n$ matrix $A$ is given by

$$p(\lambda) = \det(A - \lambda I) = (-1)^n\bigl[\lambda^n + c_1\lambda^{n-1} + c_2\lambda^{n-2} + \cdots + c_{n-1}\lambda + c_n\bigr].$$

In Section 1, we identified

$$c_1 = -\operatorname{Tr} A, \qquad c_n = (-1)^n\det A. \tag{16}$$

One can also derive expressions for $c_2, c_3, \ldots, c_{n-1}$ in terms of traces of powers of $A$. In this appendix, I will exhibit the relevant results without proofs (which can be found in the references at the end of these notes). Let us introduce the notation $t_k \equiv \operatorname{Tr}(A^k)$. Then, the following set of recursive equations can be proven:

$$t_1 + c_1 = 0 \qquad\text{and}\qquad t_k + c_1 t_{k-1} + \cdots + c_{k-1}t_1 + kc_k = 0, \qquad k = 2, 3, \ldots, n. \tag{17}$$
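The recursion in eq. (17) is short to implement. The sketch below (pure Python; the helper names and the $3 \times 3$ sample matrix are mine, for illustration) builds $t_k = \operatorname{Tr}(A^k)$, solves for the $c_k$ step by step, and confirms $\det A = (-1)^3 c_3$ against a direct cofactor expansion:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

def newton_coeffs(A):
    """c_1..c_n from t_k + c_1 t_{k-1} + ... + c_{k-1} t_1 + k c_k = 0."""
    n = len(A)
    P, t = A, []
    for _ in range(n):
        t.append(trace(P))      # t_k = Tr(A^k)
        P = matmul(P, A)
    c = []
    for k in range(1, n + 1):
        s = t[k - 1] + sum(c[i] * t[k - 2 - i] for i in range(k - 1))
        c.append(-s / k)
    return c

A = [[1.0, 2.0, 0.0], [0.0, 3.0, 1.0], [2.0, 0.0, 1.0]]
c = newton_coeffs(A)
print(c)   # -> [-5.0, 7.0, -7.0]

# Direct 3x3 determinant for comparison.
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
assert abs(c[0] + trace(A)) < 1e-12         # c_1 = -Tr A
assert abs((-1) ** 3 * c[2] - det) < 1e-12  # det A = (-1)^n c_n for n = 3
```

For this matrix $t_1 = 5$, $t_2 = 11$, $t_3 = 41$, giving $c_1 = -5$, $c_2 = 7$, $c_3 = -7$ and hence $\det A = -c_3 = 7$.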

These equations are called Newton's identities. A nice proof of these identities can be found in ref. [1]. The equations exhibited in eq. (17) are called recursive, since one can solve for the $c_k$ in terms of the traces $t_1, t_2, \ldots, t_k$ iteratively, by starting with $c_1 = -t_1$ and then proceeding step by step by solving the equations with $k = 2, 3, \ldots, n$ in successive order. This recursive procedure yields:

$$c_1 = -t_1, \qquad c_2 = \tfrac{1}{2}\bigl(t_1^2 - t_2\bigr), \qquad c_3 = -\tfrac{1}{6}t_1^3 + \tfrac{1}{2}t_1t_2 - \tfrac{1}{3}t_3, \qquad c_4 = \tfrac{1}{24}t_1^4 - \tfrac{1}{4}t_1^2t_2 + \tfrac{1}{3}t_1t_3 + \tfrac{1}{8}t_2^2 - \tfrac{1}{4}t_4,$$

etc. The results above can be summarized by the following equation [2]:

$$c_m = -\frac{t_m}{m} + \frac{1}{2!}\sum_{\substack{i,j=1\\ i+j=m}}^{m-1}\frac{t_it_j}{ij} - \frac{1}{3!}\sum_{\substack{i,j,k=1\\ i+j+k=m}}^{m-2}\frac{t_it_jt_k}{ijk} + \cdots + (-1)^m\frac{t_1^m}{m!}, \qquad m = 1, 2, \ldots, n.$$

Note that by using $c_n = (-1)^n\det A$, one obtains a general expression for the determinant in terms of traces of powers of $A$:

$$\det A = (-1)^n c_n = (-1)^n\Biggl[-\frac{t_n}{n} + \frac{1}{2!}\sum_{\substack{i,j=1\\ i+j=n}}^{n-1}\frac{t_it_j}{ij} - \frac{1}{3!}\sum_{\substack{i,j,k=1\\ i+j+k=n}}^{n-2}\frac{t_it_jt_k}{ijk} + \cdots + (-1)^n\frac{t_1^n}{n!}\Biggr],$$

where $t_k \equiv \operatorname{Tr}(A^k)$. One can verify that

$$\det A = \tfrac{1}{2}\bigl[(\operatorname{Tr} A)^2 - \operatorname{Tr}(A^2)\bigr] \quad\text{for any } 2\times 2 \text{ matrix},$$
$$\det A = \tfrac{1}{6}\bigl[(\operatorname{Tr} A)^3 - 3\operatorname{Tr} A\,\operatorname{Tr}(A^2) + 2\operatorname{Tr}(A^3)\bigr] \quad\text{for any } 3\times 3 \text{ matrix},$$

etc. The coefficients of the characteristic polynomial, $c_k$, can also be expressed directly in terms of the eigenvalues of $A$, as shown in eq. (8).

BONUS MATERIAL: One can derive another closed-form expression for the $c_k$. To see how to do this, let us write out the Newton identities explicitly.

Eq. (17) for $k = 1, 2, \ldots, n$ yields:

$$\begin{aligned}
c_1 &= -t_1, \\
t_1c_1 + 2c_2 &= -t_2, \\
t_2c_1 + t_1c_2 + 3c_3 &= -t_3, \\
&\;\;\vdots \\
t_{k-1}c_1 + t_{k-2}c_2 + \cdots + t_1c_{k-1} + kc_k &= -t_k, \\
&\;\;\vdots \\
t_{n-1}c_1 + t_{n-2}c_2 + \cdots + t_1c_{n-1} + nc_n &= -t_n.
\end{aligned}$$

Consider the first $k$ equations above (for any value of $k = 1, 2, \ldots, n$). This is a system of linear equations for $c_1, c_2, \ldots, c_k$, which can be written in matrix form:

$$\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
t_1 & 2 & 0 & \cdots & 0 & 0 \\
t_2 & t_1 & 3 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
t_{k-2} & t_{k-3} & t_{k-4} & \cdots & k-1 & 0 \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & t_1 & k
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{k-1} \\ c_k \end{pmatrix}
= -\begin{pmatrix} t_1 \\ t_2 \\ t_3 \\ \vdots \\ t_{k-1} \\ t_k \end{pmatrix}.$$

Applying Cramer's rule, we can solve for $c_k$ in terms of $t_1, t_2, \ldots, t_k$ [3]:

$$c_k = \frac{\begin{vmatrix}
1 & 0 & 0 & \cdots & 0 & -t_1 \\
t_1 & 2 & 0 & \cdots & 0 & -t_2 \\
t_2 & t_1 & 3 & \cdots & 0 & -t_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
t_{k-2} & t_{k-3} & t_{k-4} & \cdots & k-1 & -t_{k-1} \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & t_1 & -t_k
\end{vmatrix}}{\begin{vmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
t_1 & 2 & 0 & \cdots & 0 & 0 \\
t_2 & t_1 & 3 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
t_{k-2} & t_{k-3} & t_{k-4} & \cdots & k-1 & 0 \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & t_1 & k
\end{vmatrix}}.$$

The denominator is the determinant of a lower triangular matrix, which is equal to the product of its diagonal elements. Hence,

$$c_k = \frac{1}{k!}\begin{vmatrix}
1 & 0 & 0 & \cdots & 0 & -t_1 \\
t_1 & 2 & 0 & \cdots & 0 & -t_2 \\
t_2 & t_1 & 3 & \cdots & 0 & -t_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
t_{k-2} & t_{k-3} & t_{k-4} & \cdots & k-1 & -t_{k-1} \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & t_1 & -t_k
\end{vmatrix}.$$

It is convenient to multiply the $k$th column by $-1$, and then move the $k$th column over to the first column (which requires a series of $k-1$ interchanges of adjacent columns). These operations multiply the determinant by $(-1)$ and $(-1)^{k-1}$, respectively, leading to an overall sign change of $(-1)^k$. Hence, our final result is:

$$c_k = \frac{(-1)^k}{k!}\begin{vmatrix}
t_1 & 1 & 0 & 0 & \cdots & 0 \\
t_2 & t_1 & 2 & 0 & \cdots & 0 \\
t_3 & t_2 & t_1 & 3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & t_1 & k-1 \\
t_k & t_{k-1} & t_{k-2} & \cdots & t_2 & t_1
\end{vmatrix}, \qquad k = 1, 2, \ldots, n.$$

We can test this formula by evaluating the first three cases, $k = 1, 2, 3$:

$$c_1 = -t_1, \qquad
c_2 = \frac{1}{2!}\begin{vmatrix} t_1 & 1 \\ t_2 & t_1 \end{vmatrix} = \tfrac{1}{2}\bigl(t_1^2 - t_2\bigr), \qquad
c_3 = -\frac{1}{3!}\begin{vmatrix} t_1 & 1 & 0 \\ t_2 & t_1 & 2 \\ t_3 & t_2 & t_1 \end{vmatrix} = -\tfrac{1}{6}\bigl[t_1^3 - 3t_1t_2 + 2t_3\bigr],$$

which coincide with the previously stated results. Finally, setting $k = n$ yields the determinant of the $n \times n$ matrix $A$, $\det A = (-1)^n c_n$, in terms of traces of powers of $A$:

$$\det A = \frac{1}{n!}\begin{vmatrix}
t_1 & 1 & 0 & 0 & \cdots & 0 \\
t_2 & t_1 & 2 & 0 & \cdots & 0 \\
t_3 & t_2 & t_1 & 3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
t_{n-1} & t_{n-2} & t_{n-3} & \cdots & t_1 & n-1 \\
t_n & t_{n-1} & t_{n-2} & \cdots & t_2 & t_1
\end{vmatrix},$$

where $t_k \equiv \operatorname{Tr}(A^k)$. Indeed, one can check that our previous results for the determinants of a $2 \times 2$ matrix and a $3 \times 3$ matrix are recovered.

REFERENCES

1. Dan Kalman, "A Matrix Proof of Newton's Identities," Mathematics Magazine 73 (2000).
2. H.K. Krishnapriyan, "On Evaluating the Characteristic Polynomial through Symmetric Functions," J. Chem. Inf. Comput. Sci. 35 (1995).
3. V.V. Prasolov, Problems and Theorems in Linear Algebra (American Mathematical Society, Providence, RI, 1994).

Note: The Cramer's-rule result above is derived in Section 4.1 on p. 20 of ref. [3]. However, the determinantal expression given in ref. [3] for $\sigma_k \equiv (-1)^k c_k$ contains a typographical error: the diagonal series of integers $1, 1, 1, \ldots, 1$ appearing just above the main diagonal of $\sigma_k$ should be replaced by $1, 2, 3, \ldots, k-1$.
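As a final numerical sanity check of the Cramer's-rule expression for the $c_k$ derived in the bonus material above, the sketch below (pure Python; the determinant helper and the sample trace values $t_1 = 5$, $t_2 = 11$, $t_3 = 41$ are mine, for illustration) builds the $k \times k$ matrix with the replaced column $(-t_1, \ldots, -t_k)$ in the last position and divides by $k!$:

```python
def det(M):
    """Determinant by cofactor expansion along the first row (small k only)."""
    k = len(M)
    if k == 1:
        return M[0][0]
    total = 0.0
    for j in range(k):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def c_from_cramer(t):
    """c_k = (1/k!) * det of the triangular Newton system with its
    k-th column replaced by (-t_1, ..., -t_k)."""
    k = len(t)
    M = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i):            # subdiagonal entry (i, j) = t_{i-j}
            M[i][j] = t[i - j - 1]
        if i < k - 1:
            M[i][i] = float(i + 1)    # diagonal 1, 2, ..., k-1
        M[i][k - 1] = -t[i]           # replaced last column
    fact = 1.0
    for m in range(2, k + 1):
        fact *= m
    return det(M) / fact

t1, t2, t3 = 5.0, 11.0, 41.0          # sample traces t_k = Tr(A^k)
print(c_from_cramer([t1]))            # c_1 = -t_1           -> -5.0
print(c_from_cramer([t1, t2]))        # c_2 = (t1^2 - t2)/2  ->  7.0
print(c_from_cramer([t1, t2, t3]))    # c_3                  -> -7.0
```

The three outputs reproduce the closed-form values $c_1 = -t_1$, $c_2 = \tfrac{1}{2}(t_1^2 - t_2)$ and $c_3 = -\tfrac{1}{6}(t_1^3 - 3t_1t_2 + 2t_3)$ for these trace values.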


Matrix Algebra, Class Notes (part 1) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved.

Matrix Algebra, Class Notes (part 1) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. 1 Sum, Product and Transpose of Matrices. If a ij with

equations Karl Lundengård December 3, 2012 MAA704: Matrix functions and matrix equations Matrix functions Matrix equations Matrix equations, cont d

and and Karl Lundengård December 3, 2012 Solving General, Contents of todays lecture and (Kroenecker product) Solving General, Some useful from calculus and : f (x) = x n, x C, n Z + : f (x) = n x, x R,

Various Symmetries in Matrix Theory with Application to Modeling Dynamic Systems

Various Symmetries in Matrix Theory with Application to Modeling Dynamic Systems A Aghili Ashtiani a, P Raja b, S K Y Nikravesh a a Department of Electrical Engineering, Amirkabir University of Technology,

Helpsheet. Giblin Eunson Library MATRIX ALGEBRA. library.unimelb.edu.au/libraries/bee. Use this sheet to help you:

Helpsheet Giblin Eunson Library ATRIX ALGEBRA Use this sheet to help you: Understand the basic concepts and definitions of matrix algebra Express a set of linear equations in matrix notation Evaluate determinants

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

Chapter 5. Matrices. 5.1 Inverses, Part 1

Chapter 5 Matrices The classification result at the end of the previous chapter says that any finite-dimensional vector space looks like a space of column vectors. In the next couple of chapters we re

CHARACTERISTIC ROOTS AND VECTORS

CHARACTERISTIC ROOTS AND VECTORS 1 DEFINITION OF CHARACTERISTIC ROOTS AND VECTORS 11 Statement of the characteristic root problem Find values of a scalar λ for which there exist vectors x 0 satisfying

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

University of Warwick, EC9A0: Pre-sessional Advanced Mathematics Course Peter J. Hammond & Pablo F. Beker 1 of 55 EC9A0: Pre-sessional Advanced Mathematics Course Slides 1: Matrix Algebra Peter J. Hammond

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

Direct Methods for Solving Linear Systems. Linear Systems of Equations

Direct Methods for Solving Linear Systems Linear Systems of Equations Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

PROVING STATEMENTS IN LINEAR ALGEBRA

Mathematics V2010y Linear Algebra Spring 2007 PROVING STATEMENTS IN LINEAR ALGEBRA Linear algebra is different from calculus: you cannot understand it properly without some simple proofs. Knowing statements

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

Using the Singular Value Decomposition

Using the Singular Value Decomposition Emmett J. Ientilucci Chester F. Carlson Center for Imaging Science Rochester Institute of Technology emmett@cis.rit.edu May 9, 003 Abstract This report introduces

26. Determinants I. 1. Prehistory

26. Determinants I 26.1 Prehistory 26.2 Definitions 26.3 Uniqueness and other properties 26.4 Existence Both as a careful review of a more pedestrian viewpoint, and as a transition to a coordinate-independent

Advanced Techniques for Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Advanced Techniques for Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional

Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are