Chapter 3
Diagonalisation

Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation of symmetric matrices.

Reading

As in the previous chapter, there is no specific essential reading for this chapter. It is essential that you do some reading, but the topics discussed in this chapter are adequately covered in many texts on linear algebra. The list below gives examples of relevant reading. (For full publication details, see Chapter 1.)

Ostaszewski, A. Mathematics in Economics. Chapter 7, Sections 7.1, 7.4, 7.6.
Ostaszewski, A. Advanced Mathematical Methods. Chapter 5, Sections 5.1 and 5.2.
Leon, S.J. Linear Algebra with Applications. Chapter 6, Sections 6.1 and 6.3.
Simon, C.P. and Blume, L. Mathematics for Economists. Chapter 23, Sections 23.1, 23.7.

Introduction

One of the most useful techniques in applications of matrices and linear algebra is diagonalisation. Before discussing this, we have to look at the topic of eigenvalues and eigenvectors. We shall explore a number of applications of diagonalisation in the next chapter of the guide.

Eigenvalues and eigenvectors

Definitions

Suppose that A is a square matrix. The number λ is said to be an eigenvalue of A if for some non-zero vector x,

Ax = λx.

Any non-zero vector x for which this equation holds is called an eigenvector for eigenvalue λ, or an eigenvector of A corresponding to eigenvalue λ.
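The defining equation is easy to test numerically. As a small illustration (a sketch only, not part of the guide; it assumes the NumPy library is available), we can confirm that a given vector satisfies Ax = λx:

```python
import numpy as np

# A matrix whose eigenpairs can be seen by inspection.
A = np.array([[2.0, 0.0],
              [0.0, 5.0]])

# For this diagonal matrix the standard basis vectors are eigenvectors.
x = np.array([1.0, 0.0])
lam = 2.0

# Check the defining equation Ax = lambda * x.
print(np.allclose(A @ x, lam * x))   # True: x is an eigenvector for eigenvalue 2
```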

Finding eigenvalues and eigenvectors

To determine whether λ is an eigenvalue of A, we need to determine whether there are any non-zero solutions to the matrix equation Ax = λx. Note that the matrix equation Ax = λx is not of the standard form, since the right-hand side is not a fixed vector b, but depends explicitly on x. However, we can rewrite it in standard form. Note that λx = λIx, where I is, as usual, the identity matrix. So the equation is equivalent to Ax = λIx, or Ax − λIx = 0, which is equivalent to

(A − λI)x = 0.

Now, a square linear system Bx = 0 has solutions other than x = 0 precisely when |B| = 0. Therefore, taking B = A − λI, λ is an eigenvalue if and only if the determinant of the matrix A − λI is zero. This determinant,

p(λ) = |A − λI|,

is known as the characteristic polynomial of A, since it is a polynomial in the variable λ. To find the eigenvalues, we solve the equation |A − λI| = 0. Let us illustrate with a very simple example.

Example: Let

A = ( 1 1 )
    ( 2 2 ).

Then

A − λI = ( 1−λ   1  )
         (  2   2−λ )

and the characteristic polynomial is

|A − λI| = (1 − λ)(2 − λ) − 2 = λ² − 3λ + 2 − 2 = λ² − 3λ.

So the eigenvalues are the solutions of λ² − 3λ = 0. To solve this, one could use either the formula for the solutions to a quadratic, or simply observe that the equation is λ(λ − 3) = 0, with solutions λ = 0 and λ = 3. Hence the eigenvalues of A are 0 and 3.

To find an eigenvector for eigenvalue λ, we have to find a solution to (A − λI)x = 0, other than the zero vector. (I stress the fact that eigenvectors cannot be the zero vector because this is a mistake many students make.) This is easy, since for a particular value of λ, all we need to do is solve a simple linear system. We illustrate by finding the eigenvectors for the matrix of the example just given.

Example: We find eigenvectors of

A = ( 1 1 )
    ( 2 2 ).

We have seen that the eigenvalues are 0 and 3. To find an eigenvector for eigenvalue 0 we solve the system (A − 0I)x = 0: that is, Ax = 0, or

( 1 1 ) ( x₁ )   ( 0 )
( 2 2 ) ( x₂ ) = ( 0 ).

This could be solved using row operations. (Note that it cannot be solved by using inverse matrices, since A is not invertible. In fact, inverse matrix techniques and Cramer's rule will never be of use here, since λ being an eigenvalue means that A − λI is not invertible.)
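As a quick numerical cross-check of this example (a NumPy sketch, using the matrix A above):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# The characteristic polynomial lambda^2 - 3 lambda has coefficients (1, -3, 0).
print(np.roots([1.0, -3.0, 0.0]))   # [3. 0.]: the eigenvalues 3 and 0

# np.linalg.eig computes the same eigenvalues directly (in some order).
evals, evecs = np.linalg.eig(A)
print(evals)
```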

However, we can solve this fairly directly just by looking at the equations. We have to solve

x₁ + x₂ = 0,
2x₁ + 2x₂ = 0.

Clearly both equations are equivalent. From either one, we obtain x₁ = −x₂. We can choose x₂ to be any number we like. Let's take x₂ = 1; then we need x₁ = −x₂ = −1. It follows that an eigenvector for 0 is

x = ( −1 )
    (  1 ).

The choice x₂ = 1 was arbitrary; we could have chosen any non-zero number, so, for example, the following are also eigenvectors for 0:

( −5 )   ( −0.5 )
(  5 ),  (  0.5 ).

There are infinitely many eigenvectors for 0: for each α ≠ 0,

( −α )
(  α )

is an eigenvector for 0. But be careful not to think that you can choose α = 0; for then x becomes the zero vector, and this is never an eigenvector, simply by definition.

To find an eigenvector for 3, we solve (A − 3I)x = 0, which is

( −2  1 ) ( x₁ )   ( 0 )
(  2 −1 ) ( x₂ ) = ( 0 ).

This is equivalent to the equations

−2x₁ + x₂ = 0,
2x₁ − x₂ = 0,

which are together equivalent to the single equation x₂ = 2x₁. If we choose x₁ = 1, we obtain the eigenvector

x = ( 1 )
    ( 2 ).

(Again, any non-zero scalar multiple of this vector is also an eigenvector for eigenvalue 3.)

We illustrated with a 2 × 2 example just for simplicity, but you should be able to work with 3 × 3 matrices. We give three such examples.

Example: Suppose that

A = ( 4 0 4 )
    ( 0 4 4 )
    ( 4 4 8 ).

Find the eigenvalues of A and obtain one eigenvector for each eigenvalue. To find the eigenvalues we solve |A − λI| = 0. Now,

|A − λI| = | 4−λ   0    4  |
           |  0   4−λ   4  |
           |  4    4   8−λ |

= (4 − λ)((4 − λ)(8 − λ) − 16) + 4(−4(4 − λ))
= (4 − λ)((4 − λ)(8 − λ) − 16) − 16(4 − λ).

Now, we notice that each of the two terms in this expression has 4 − λ as a factor, so instead of expanding everything, we take 4 − λ out as a common factor, obtaining

|A − λI| = (4 − λ)((4 − λ)(8 − λ) − 16 − 16)
         = (4 − λ)(32 − 12λ + λ² − 32)
         = (4 − λ)(λ² − 12λ)
         = (4 − λ)λ(λ − 12).

It follows that the eigenvalues are 4, 0, 12. (The characteristic polynomial will not always factorise so easily. Here it was simple because of the common factor (4 − λ). The next example is more difficult.) To find an eigenvector for 4, we have to solve the equation (A − 4I)x = 0, that is,

( 0 0 4 ) ( x₁ )   ( 0 )
( 0 0 4 ) ( x₂ ) = ( 0 )
( 4 4 4 ) ( x₃ )   ( 0 ).

Of course, we could use row operations, but the system is simple enough to solve straight away. The equations are

4x₃ = 0
4x₃ = 0
4x₁ + 4x₂ + 4x₃ = 0,

so x₃ = 0 and x₁ = −x₂. Choosing x₁ = 1, we get the eigenvector

(  1 )
( −1 )
(  0 ).

(Again, we can choose x₁ to be any non-zero number. So the eigenvectors for eigenvalue 4 are all non-zero multiples of this vector.)

Activity 3.1 Determine eigenvectors for 0 and 12. You should find that for λ = 0, your eigenvector is a non-zero multiple of

(  1 )
(  1 )
( −1 )

and that for λ = 12 your eigenvector is a non-zero multiple of

( 1 )
( 1 )
( 2 ).
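The eigenvalues and the eigenvector just found can be confirmed numerically (a NumPy sketch using the matrix of this example):

```python
import numpy as np

A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])

print(np.sort(np.linalg.eigvals(A)))   # approximately [ 0.  4. 12.]

# Check the eigenvector found above for eigenvalue 4.
v = np.array([1.0, -1.0, 0.0])
print(np.allclose(A @ v, 4 * v))        # True
```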

Example: Let A be a 3 × 3 matrix whose characteristic polynomial we now calculate. Given that −1 is an eigenvalue of A, find all the eigenvalues of A.

We calculate the characteristic polynomial of A: expanding the determinant along the first row,

|A − λI| = (−3 − λ)(λ² + λ) + (−λ − 1) − (1 + λ) = −λ³ − 4λ² − 5λ − 2.

Now, the fact that −1 is an eigenvalue means that −1 is a solution of the equation |A − λI| = 0, which means that (λ − (−1)), that is, (λ + 1), is a factor of the characteristic polynomial |A − λI|. So this characteristic polynomial can be written in the form

(λ + 1)(aλ² + bλ + c).

Clearly we must have a = −1 and c = −2 to obtain the correct λ³ term and the correct constant. Given this, b = −3. In other words, the characteristic polynomial is

(λ + 1)(−λ² − 3λ − 2) = −(λ + 1)(λ² + 3λ + 2) = −(λ + 1)(λ + 1)(λ + 2).

That is,

|A − λI| = −(λ + 1)²(λ + 2).

The eigenvalues are the solutions to |A − λI| = 0, so they are λ = −1 and λ = −2. Note that in this case, there are only two eigenvalues (or, the eigenvalue −1 is repeated, or has multiplicity 2, as it is sometimes said).
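The factor argument can be checked with NumPy's polynomial helpers (a sketch; np.roots finds all roots, and np.polydiv divides out the known factor λ + 1):

```python
import numpy as np

# Coefficients of -lambda^3 - 4 lambda^2 - 5 lambda - 2, highest power first.
p = [-1.0, -4.0, -5.0, -2.0]
print(np.roots(p))   # approximately [-2., -1., -1.]

# Dividing by (lambda + 1) leaves the quadratic found above.
quotient, remainder = np.polydiv(p, [1.0, 1.0])
print(quotient)      # [-1. -3. -2.], i.e. -lambda^2 - 3 lambda - 2
print(remainder)     # [0.]: -1 really is a root
```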

Example: Let

A = ( 3 −1  1 )
    ( 0  2  0 )
    ( 1 −1  3 ).

Then (check this!) the characteristic polynomial is −λ³ + 8λ² − 20λ + 16. This factorises (check!) as

−(λ − 2)(λ − 2)(λ − 4),

so the eigenvalues are 2 and 4. There are only two eigenvalues in this case. (We sometimes say that the eigenvalue 2 is repeated, or has multiplicity 2, because (λ − 2)² is a factor of the characteristic polynomial.) To find an eigenvector for λ = 4, we have to solve the equation (A − 4I)x = 0, that is,

( −1 −1  1 ) ( x₁ )   ( 0 )
(  0 −2  0 ) ( x₂ ) = ( 0 )
(  1 −1 −1 ) ( x₃ )   ( 0 ).

The equations are

−x₁ − x₂ + x₃ = 0
−2x₂ = 0
x₁ − x₂ − x₃ = 0,

so x₂ = 0 and x₁ = x₃. Choosing x₃ = 1, we get the eigenvector

( 1 )
( 0 )
( 1 ).

For λ = 2, we have to solve the equation (A − 2I)x = 0, that is,

( 1 −1  1 ) ( x₁ )   ( 0 )
( 0  0  0 ) ( x₂ ) = ( 0 )
( 1 −1  1 ) ( x₃ )   ( 0 ).

This system is equivalent to the single equation x₁ − x₂ + x₃ = 0. (Convince yourself!) Choosing x₃ = 0 and x₂ = 1, we have x₁ = 1, so we obtain the eigenvector

( 1 )
( 1 )
( 0 ).

Complex eigenvalues

Although we shall only deal in this subject with real matrices (that is, matrices whose entries are real numbers), it is possible for such real matrices to have complex eigenvalues. This is not something you have to spend much time on, but you have to be aware of it. We briefly describe complex numbers. The complex numbers are based on the complex number i, which is defined to be the square root of −1. (Of course, no such real number exists.) Any complex number z can be written in the form z = a + bi, where a, b are real numbers. We call a the real part and b the imaginary part of z. Of course, any real number is a complex number, since a = a + 0i. (For more discussion of complex numbers, see Appendix A3 of Simon and Blume.) The following example shows a matrix with complex eigenvalues, and it also demonstrates how to deal with complex numbers.

Example: Consider the matrix

A = (  1  1 )
    ( −9  1 ).

We shall see that it has complex eigenvalues. First, the characteristic polynomial |A − λI| is (1 − λ)² + 9 = λ² − 2λ + 10. (Check this!) Using the formula for the roots of a quadratic equation, the eigenvalues are

(2 ± √(−36)) / 2.

Now √(−36) = √((36)(−1)) = √36 √(−1) = 6i. So the eigenvalues are the complex numbers 1 + 3i and 1 − 3i. Let's proceed with finding eigenvectors. To find an eigenvector for 1 + 3i, we solve (A − (1 + 3i)I)x = 0, which is

( −3i   1  ) ( x₁ )   ( 0 )
( −9   −3i ) ( x₂ ) = ( 0 ).

This is equivalent to the equations

−3ix₁ + x₂ = 0,
−9x₁ − 3ix₂ = 0.

But the second equation is just −3i times the first, so both are equivalent. Taking the second, we see that x₁ = −(i/3)x₂. So an eigenvector is (taking x₂ = 3) (−i, 3)ᵀ. For λ = 1 − 3i, we end up solving the system

3ix₁ + x₂ = 0,
−9x₁ + 3ix₂ = 0,

a solution of which is (i, 3)ᵀ.

You should be aware, then, that even though we are not dealing with matrices that have complex numbers as their entries, the possibility still exists that eigenvalues (and eigenvectors) will involve complex numbers. However, if a matrix is symmetric (that is, it equals its transpose), then it certainly has real eigenvalues. This useful fact, which we shall prove later, is important when we consider quadratic forms in the next chapter.
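NumPy handles the complex case automatically, which gives a quick check of this example (a sketch, with A as above):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-9.0, 1.0]])

evals, evecs = np.linalg.eig(A)
print(evals)   # approximately [1.+3.j 1.-3.j]

# (-i, 3)^T should be an eigenvector for the eigenvalue 1 + 3i.
v = np.array([-1j, 3.0])
print(np.allclose(A @ v, (1 + 3j) * v))   # True
```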

Diagonalisation of a square matrix

Square matrices A and B are similar if there is an invertible matrix P such that P⁻¹AP = B. The matrix A is diagonalisable if it is similar to a diagonal matrix; in other words, if there is a diagonal matrix D and an invertible matrix P such that P⁻¹AP = D. Suppose that the n × n matrix A is diagonalisable, and that P⁻¹AP = D, where D is the diagonal matrix

D = diag(λ₁, λ₂, ..., λₙ) = ( λ₁  0  ...  0 )
                           (  0  λ₂ ...  0 )
                           (  .   .  .   . )
                           (  0   0 ... λₙ ).

(Note the useful notation for describing the diagonal matrix D.) Then AP = PD. If the columns of P are the vectors v₁, v₂, ..., vₙ, then

AP = A (v₁ ... vₙ) = (Av₁ ... Avₙ),

and

PD = (v₁ ... vₙ) diag(λ₁, ..., λₙ) = (λ₁v₁ ... λₙvₙ).

So this means that

Av₁ = λ₁v₁, Av₂ = λ₂v₂, ..., Avₙ = λₙvₙ.

The fact that P⁻¹ exists means that none of the vectors vᵢ is the zero vector. So this means that (for i = 1, 2, ..., n) λᵢ is an eigenvalue of A and vᵢ is a corresponding eigenvector. Since P has an inverse, these eigenvectors are linearly independent. Therefore, A has n linearly independent eigenvectors. Conversely, if A has n linearly independent eigenvectors, then the matrix P whose columns are these eigenvectors will be invertible, and we will have P⁻¹AP = D, where D is a diagonal matrix with entries equal to the eigenvalues of A. We have therefore established the following result.

Theorem 3.1 An n × n matrix A is diagonalisable if and only if it has n linearly independent eigenvectors. Suppose that this is the case, and let v₁, ..., vₙ be n linearly independent eigenvectors, where vᵢ is an eigenvector for eigenvalue λᵢ. Then the matrix P = (v₁ ... vₙ) is such that P⁻¹ exists, and P⁻¹AP = D, where D = diag(λ₁, ..., λₙ).

There is a more sophisticated way to think about this result, in terms of change of basis and matrix representations of linear transformations. Suppose that T is the linear transformation corresponding to A, so that T(x) = Ax for all x. Suppose that A has a set of n linearly independent eigenvectors B = {x₁, x₂, ..., xₙ}, corresponding (respectively) to the eigenvalues λ₁, ..., λₙ. Since this is a linearly independent set of size n in Rⁿ, B is a basis for Rⁿ. By Theorem 2.8, the matrix of T with respect to B is

A_T[B, B] = ([T(x₁)]_B ... [T(xₙ)]_B).

But T(xᵢ) = Axᵢ = λᵢxᵢ, so the coordinate vector of T(xᵢ) with respect to B is

[T(xᵢ)]_B = (0, 0, ..., 0, λᵢ, 0, ..., 0)ᵀ,

which has λᵢ in entry i and all other entries zero. Therefore

A_T[B, B] = diag(λ₁, ..., λₙ) = D.

But, by the change-of-basis result of Chapter 2, A_T[B, B] = P⁻¹ A_T P, where P = (x₁ ... xₙ) and A_T is the matrix representing T, which in this case is simply A itself. We therefore see that P⁻¹AP = A_T[B, B] = D, and so the matrix P diagonalises A.
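Theorem 3.1 translates directly into a short computational sketch (assuming NumPy; the near-zero-determinant test below is only a rough numerical proxy for "the eigenvectors are linearly dependent", not a robust criterion):

```python
import numpy as np

def diagonalise(A, tol=1e-10):
    # Return (P, D) with inv(P) @ A @ P = D, when A has n independent eigenvectors.
    evals, P = np.linalg.eig(A)          # columns of P are eigenvectors
    if abs(np.linalg.det(P)) < tol:      # dependent columns: not diagonalisable
        raise ValueError("matrix appears not to be diagonalisable")
    return P, np.diag(evals)

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
P, D = diagonalise(A)
print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True
```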

Example: Consider again the matrix

A = ( 4 0 4 )
    ( 0 4 4 )
    ( 4 4 8 ).

We have seen that it has three distinct eigenvalues 4, 0, 12, and that eigenvectors corresponding to eigenvalues 4, 0, 12 are (in that order)

(  1 )  (  1 )  ( 1 )
( −1 ), (  1 ), ( 1 )
(  0 )  ( −1 )  ( 2 ).

We now form the matrix P whose columns are these eigenvectors:

P = (  1  1  1 )
    ( −1  1  1 )
    (  0 −1  2 ).

Then, according to the theory, P should have an inverse, and we should have P⁻¹AP = D = diag(4, 0, 12). To check that this is true, we could calculate P⁻¹ and evaluate the product. The inverse may be calculated using either elementary row operations or determinants. (Matrix inversion is not part of this subject: however, it is part of the prerequisite subject Mathematics for economists. You should therefore know how to invert a matrix.)

Activity 3.2 Calculate P⁻¹ and verify that P⁻¹AP = D.

Not all n × n matrices have n linearly independent eigenvectors, as the following example shows.

Example: The matrix

A = (  4 1 )
    ( −1 2 )

has characteristic polynomial λ² − 6λ + 9 = (λ − 3)², so there is only one eigenvalue, λ = 3. The eigenvectors are the non-zero solutions to (A − 3I)x = 0: that is,

(  1  1 ) ( x₁ )   ( 0 )
( −1 −1 ) ( x₂ ) = ( 0 ).

This is equivalent to the single equation x₁ + x₂ = 0, with general solution x₂ = −x₁. Setting x₁ = r, we see that the solution set of the system consists of all vectors of the form

(  r )
( −r )

as r runs through all real numbers. So the eigenvectors are precisely the non-zero scalar multiples of the fixed vector

(  1 )
( −1 ).

Any two eigenvectors are therefore multiples of each other and hence form a linearly dependent set. In other words, there are not two linearly independent eigenvectors, and the matrix is not diagonalisable.
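Numerically, the failure shows up as (near-)parallel eigenvector columns (a sketch; floating-point rounding means the computed quantities are only approximately zero):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [-1.0, 2.0]])

evals, evecs = np.linalg.eig(A)
print(evals)   # both approximately 3: the repeated eigenvalue

# The two eigenvector columns are (numerically) parallel, so one singular
# value of the eigenvector matrix is essentially zero.
print(np.linalg.svd(evecs, compute_uv=False))
```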

The following result is useful. It shows that if a matrix has n different eigenvalues then it is diagonalisable.

Theorem 3.2 Eigenvectors corresponding to different eigenvalues are linearly independent. So if an n × n matrix has n different eigenvalues, then it has a set of n linearly independent eigenvectors and is therefore diagonalisable.

For a proof, see Ostaszewski, Mathematics in Economics, Section 7.4. It is not, however, necessary for the eigenvalues to be distinct. What is needed for diagonalisation is a set of n linearly independent eigenvectors, and this can happen even when there is a repeated eigenvalue (that is, when there are fewer than n different eigenvalues). The following example illustrates this.

Example: We considered the matrix

A = ( 3 −1  1 )
    ( 0  2  0 )
    ( 1 −1  3 )

above, and we saw that it has only two eigenvalues, 4 and 2. If we want to diagonalise it, we need to find three linearly independent eigenvectors. We found that an eigenvector corresponding to λ = 4 is (1, 0, 1)ᵀ, and that, for λ = 2, the eigenvectors are given by the non-zero-vector solutions to the system consisting of just the single equation

x₁ − x₂ + x₃ = 0.

Above, we simply wanted to find an eigenvector, but now we want to find two which, together with the eigenvector for λ = 4, form a linearly independent set. Now, the system for the eigenvectors corresponding to λ = 2 has just one equation and is therefore of rank 1; it follows that the solution set is two-dimensional. Let's see exactly what the general solution looks like. We have x₁ = x₂ − x₃, and x₂, x₃ can be chosen independently of each other. Setting x₃ = r and x₂ = s, we see that the general solution is

( x₁ )   ( s − r )     ( 1 )     ( −1 )
( x₂ ) = (   s   ) = s ( 1 ) + r (  0 )     (r, s ∈ R).
( x₃ )   (   r   )     ( 0 )     (  1 )

This shows that the solution space (the eigenspace, as it is called in this instance) is spanned by the two linearly independent vectors

( 1 )   ( −1 )
( 1 ),  (  0 )
( 0 )   (  1 ).

Now, each of these is an eigenvector corresponding to eigenvalue 2 and, together with our eigenvector for λ = 4, the three form a linearly independent set. So there are three linearly independent eigenvectors, even though two of them correspond to the same eigenvalue. The matrix is therefore diagonalisable. We may take

P = ( 1  1 −1 )
    ( 0  1  0 )
    ( 1  0  1 ).

Then (check!)

P⁻¹AP = D = diag(4, 2, 2).
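Again this can be verified numerically (a NumPy sketch using the P just constructed):

```python
import numpy as np

A = np.array([[3.0, -1.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, -1.0, 3.0]])

# Columns: the eigenvector for 4, then two independent eigenvectors for 2.
P = np.array([[1.0, 1.0, -1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

print(np.round(np.linalg.inv(P) @ A @ P, 10))   # diag(4, 2, 2)
```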

Orthogonal diagonalisation of symmetric matrices

The matrix we considered above,

A = ( 4 0 4 )
    ( 0 4 4 )
    ( 4 4 8 ),

is symmetric: that is, its transpose Aᵀ is equal to A itself. It turns out that such matrices are always diagonalisable. They are, furthermore, diagonalisable in a special way. A matrix P is orthogonal if PᵀP = PPᵀ = I: that is, if P has inverse Pᵀ. A matrix A is said to be orthogonally diagonalisable if there is an orthogonal matrix P such that PᵀAP = D, where D is a diagonal matrix. Note that Pᵀ = P⁻¹, so PᵀAP = P⁻¹AP. The argument given above shows that the columns of P must be n linearly independent eigenvectors of A. But the condition that PᵀP = I means something else, as we now discuss. Suppose that the columns of P are x₁, x₂, ..., xₙ, so that P = (x₁ x₂ ... xₙ). Then the rows of the transpose Pᵀ are x₁ᵀ, ..., xₙᵀ, so

Pᵀ = ( x₁ᵀ )
     ( x₂ᵀ )
     (  .  )
     ( xₙᵀ ).

Calculating the matrix product PᵀP, we find that the (i, j)-entry of PᵀP is xᵢᵀxⱼ. But, since PᵀP = I, we must have

xᵢᵀxᵢ = 1   (i = 1, 2, ..., n),
xᵢᵀxⱼ = 0   (i ≠ j).

We say that vectors x, y are orthogonal if the matrix product xᵀy is 0. (Orthogonality will be discussed in more detail later.) So, any two of the eigenvectors x₁, ..., xₙ must be orthogonal. Furthermore, for i = 1, 2, ..., n, xᵢᵀxᵢ = 1. The length of a vector x is

‖x‖ = √(x₁² + x₂² + ... + xₙ²) = √(xᵀx).

So, not only must any two of these eigenvectors be orthogonal, but each must have length 1. We shall discuss orthogonality in more detail in the next chapter. For the moment, we have the following result.

Theorem 3.3 If the matrix A is symmetric (Aᵀ = A) then eigenvectors corresponding to different eigenvalues are orthogonal.

Proof. Suppose that λ and µ are any two different eigenvalues of A and that x, y are corresponding eigenvectors. Then Ax = λx and Ay = µy. The trick in this proof is to find two different expressions for the product xᵀAy (which then must, of course, be equal to each other). Note that the matrix product xᵀAy is a 1 × 1 matrix or, equivalently, a number. First, since Ay = µy, we have

xᵀAy = xᵀ(µy) = µxᵀy.

But also, since Ax = λx, we have (Ax)ᵀ = (λx)ᵀ = λxᵀ. Now, for any matrices M, N, (MN)ᵀ = NᵀMᵀ, so (Ax)ᵀ = xᵀAᵀ. But Aᵀ = A (because A is symmetric), so xᵀA = λxᵀ and hence

xᵀAy = λxᵀy.

We therefore have two different expressions for xᵀAy: it equals µxᵀy and λxᵀy. Hence,

µxᵀy = λxᵀy, or (µ − λ)xᵀy = 0.

But since λ ≠ µ (they are different eigenvalues), we have µ − λ ≠ 0. We deduce, therefore, that xᵀy = 0. But this says precisely that x and y are orthogonal, which is exactly what we wanted to prove.

This is quite a sneaky proof: the trick is to remember to consider xᵀAy. The result just presented shows that if an n × n symmetric matrix has exactly n different eigenvalues then any n corresponding eigenvectors are orthogonal to one another. Since we may take the eigenvectors to have length 1, this shows that the matrix is orthogonally diagonalisable. The following result makes this precise.

Theorem 3.4 Suppose that the symmetric matrix A has n different eigenvalues. Take n corresponding eigenvectors, each of length 1. (Recall that the length of a vector x is just ‖x‖ = √(x₁² + ... + xₙ²).) Form the matrix P which has these eigenvectors as its columns. Then P⁻¹ = Pᵀ (that is, P is an orthogonal matrix) and PᵀAP = D, the diagonal matrix whose entries are the eigenvalues of A.

(Note that we have only shown here that symmetric matrices with n different eigenvalues are orthogonally diagonalisable, but it turns out that all symmetric matrices are orthogonally diagonalisable.)

Example: We work with the same matrix we used earlier,

A = ( 4 0 4 )
    ( 0 4 4 )
    ( 4 4 8 ).

As we have already observed, this is symmetric. We have seen that it has three distinct eigenvalues 4, 0, 12. Earlier, we found that eigenvectors for eigenvalues 4, 0, 12 are (in that order)

(  1 )  (  1 )  ( 1 )
( −1 ), (  1 ), ( 1 )
(  0 )  ( −1 )  ( 2 ).

Activity 3.3 Convince yourself that any two of these three eigenvectors are orthogonal.
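Activity 3.3 can be checked in one line per pair: the dot products xᵀy all vanish, as Theorem 3.3 predicts (a NumPy sketch):

```python
import numpy as np

# Eigenvectors of the symmetric matrix above, for eigenvalues 4, 0, 12.
v1 = np.array([1.0, -1.0, 0.0])
v2 = np.array([1.0, 1.0, -1.0])
v3 = np.array([1.0, 1.0, 2.0])

print(v1 @ v2, v1 @ v3, v2 @ v3)   # 0.0 0.0 0.0
```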

Now, these eigenvectors are not of length 1. For example, the first one has length

√(1² + (−1)² + 0²) = √2.

If we divide each entry of it by √2, we will indeed obtain an eigenvector of length 1:

(  1/√2 )
( −1/√2 )
(   0   ).

We can similarly normalise the other two vectors, obtaining

(  1/√3 )   ( 1/√6 )
(  1/√3 ),  ( 1/√6 )
( −1/√3 )   ( 2/√6 ).

Activity 3.4 Make sure you understand this normalisation.

We now form the matrix P whose columns are these normalised eigenvectors:

P = (  1/√2  1/√3  1/√6 )
    ( −1/√2  1/√3  1/√6 )
    (   0   −1/√3  2/√6 ).

Then P is orthogonal and PᵀAP = D = diag(4, 0, 12).

Activity 3.5 Check that P is orthogonal by calculating PᵀP.

Example: Let

A = ( 7 0 9 )
    ( 0 2 0 )
    ( 9 0 7 ).

Note that A is symmetric. We find an orthogonal matrix P such that PᵀAP is a diagonal matrix. The characteristic polynomial of A is

|A − λI| = | 7−λ   0    9  |
           |  0   2−λ   0  |
           |  9    0   7−λ |

= (2 − λ)[(7 − λ)(7 − λ) − 81] = (2 − λ)(λ² − 14λ − 32) = (2 − λ)(λ − 16)(λ + 2),

where we have expanded the determinant using the middle row. So the eigenvalues are 2, 16, −2. An eigenvector for λ = 2 is given by

5x + 9z = 0, 9x + 5z = 0.

This means x = z = 0. So we may take (0, 1, 0)ᵀ. This already has length 1, so there is no need to normalise it. (Recall that we need three eigenvectors which are of length 1.) For λ = −2 we find that an eigenvector is (1, 0, −1)ᵀ (or some multiple of this). To normalise it (that is, to make it of length 1), we divide by its length, which is √2, obtaining (1/√2)(1, 0, −1)ᵀ. For λ = 16, we find a normalised eigenvector is (1/√2)(1, 0, 1)ᵀ. It follows that if we let

P = ( 0   1/√2  1/√2 )
    ( 1     0     0  )
    ( 0  −1/√2  1/√2 ),

then P is orthogonal and PᵀAP = D = diag(2, −2, 16). Check this!
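For symmetric matrices, NumPy's eigh routine returns real eigenvalues and an orthogonal matrix of unit eigenvectors directly, which makes the whole example easy to check (a sketch):

```python
import numpy as np

A = np.array([[7.0, 0.0, 9.0],
              [0.0, 2.0, 0.0],
              [9.0, 0.0, 7.0]])

# eigh is designed for symmetric matrices; eigenvalues come back in
# ascending order and the eigenvector columns have length 1.
evals, P = np.linalg.eigh(A)
print(evals)                                       # [-2.  2. 16.]
print(np.allclose(P.T @ P, np.eye(3)))             # True: P is orthogonal
print(np.allclose(P.T @ A @ P, np.diag(evals)))    # True
```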

Learning outcomes

This chapter has discussed eigenvalues and eigenvectors and the very important technique of diagonalisation. We shall see in the next chapter how useful a technique diagonalisation is. At the end of this chapter and the relevant reading, you should be able to:

explain what is meant by eigenvectors and eigenvalues, and by diagonalisation
find eigenvalues and corresponding eigenvectors for a square matrix
diagonalise a diagonalisable matrix
recognise what diagonalisation says in terms of change of basis and matrix representation of linear transformations
perform orthogonal diagonalisation on a symmetric matrix that has distinct eigenvalues.

Sample examination questions

The following are typical exam questions, or parts of questions.

Question 3.1 Find the eigenvalues of the matrix A given in the question, and find an eigenvector for each eigenvalue. Hence find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Question 3.2 Prove that the given 2 × 2 matrix is not diagonalisable.

Question 3.3 Let A be any (real) n × n matrix and suppose λ is an eigenvalue of A. Show that {x : Ax = λx}, the set of eigenvectors for eigenvalue λ together with the zero vector, is a subspace of Rⁿ.

Question 3.4 Let A be the 3 × 3 matrix given in the question, and x the given vector. Show that x is an eigenvector of A. What is the corresponding eigenvalue? Find the other eigenvalues of A, and an eigenvector for each of them. Find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Question 3.5 Let A be the 3 × 3 matrix given in the question. Find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Question 3.6 Let

A = ( 2 1 1 )
    ( 1 1 0 )
    ( 1 0 1 ).

Find the eigenvalues of A, and an eigenvector for each of them. Find an orthogonal matrix P and a diagonal matrix D such that PᵀAP = D.

Question 3.7 Suppose that A is a real diagonalisable matrix and that all the eigenvalues of A are non-negative. Prove that there is a matrix B such that B² = A.

Sketch answers or comments on selected questions

Question 3.1 The characteristic polynomial is −λ³ + 14λ² − 48λ, which is easily factorised as −λ(λ − 6)(λ − 8). So the eigenvalues are 0, 6, 8. Corresponding eigenvectors (each determined up to a non-zero multiple) are calculated in the usual way. We may therefore take P to be the matrix with these eigenvectors as its columns, and D = diag(0, 6, 8).

Question 3.2 You can check that there is only one eigenvalue, and that the corresponding eigenvectors are all the scalar multiples of a single fixed vector. So there cannot be two linearly independent eigenvectors, and hence the matrix is not diagonalisable.

Question 3.3 Denote the set described by W. First, 0 ∈ W. Suppose now that x, y are in W and that α ∈ R. We need to show that x + y and αx are also in W. We know that Ax = λx and Ay = λy, so

A(x + y) = Ax + Ay = λx + λy = λ(x + y)

and

A(αx) = α(Ax) = α(λx) = λ(αx),

and so x + y and αx are indeed in W.

Question 3.4 It is given that x is an eigenvector. To determine the corresponding eigenvalue we work out Ax. This should be λx, where λ is the required eigenvalue. Performing the calculation, we see that Ax = x, and so λ = 1.

The characteristic polynomial of A is p(λ) = −λ³ + 2λ² + λ − 2. Since λ = 1 is a root, we know that (λ − 1) is a factor. Factorising, we obtain

p(λ) = (λ − 1)(−λ² + λ + 2) = −(λ − 1)(λ − 2)(λ + 1),

so the other eigenvalues are λ = 2, −1. Corresponding eigenvectors can be found in the usual way, and we may take P to have the three eigenvectors as its columns, with D = diag(1, 2, −1).

Question 3.5 This is slightly more complicated, since there are not 3 distinct eigenvalues: one of the two eigenvalues occurs twice. An eigenvector for the non-repeated eigenvalue is found as usual. We need to find a set of 3 linearly independent eigenvectors, so we need another two coming from the eigenspace corresponding to the repeated eigenvalue. You should find that this eigenspace is two-dimensional, with a basis of two vectors which, together with the first eigenvector, do indeed form a linearly independent set. Therefore we may take P to have these three eigenvectors as its columns, and D the diagonal matrix of the corresponding eigenvalues.

Question 3.6 The characteristic polynomial turns out to be p(λ) = −λ³ + 4λ² − 3λ = −λ(λ − 3)(λ − 1), so the eigenvalues are 0, 1, 3. Corresponding eigenvectors are, respectively, (−1, 1, 1)ᵀ, (0, 1, −1)ᵀ, (2, 1, 1)ᵀ. To perform orthogonal diagonalisation (rather than simply diagonalisation) we need to normalise these (that is, make them of length 1 by dividing each by its length). The lengths of the vectors are (respectively) √3, √2, √6, so the normalised eigenvectors are

(−1/√3, 1/√3, 1/√3)ᵀ, (0, 1/√2, −1/√2)ᵀ, (2/√6, 1/√6, 1/√6)ᵀ.

If we take

P = ( −1/√3    0    2/√6 )
    (  1/√3  1/√2   1/√6 )
    (  1/√3 −1/√2   1/√6 ),

then

PᵀAP = diag(0, 1, 3) = D.

Question 3.7 Since A can be diagonalised, we have P⁻¹AP = D for some P, where D = diag(λ₁, ..., λₙ), these entries being the eigenvalues of A. It is given that all λᵢ ≥ 0. We have A = PDP⁻¹. Let

B = P diag(√λ₁, √λ₂, ..., √λₙ) P⁻¹.

Then

B² = P diag(√λ₁, ..., √λₙ) P⁻¹ P diag(√λ₁, ..., √λₙ) P⁻¹ = P diag(λ₁, λ₂, ..., λₙ) P⁻¹ = PDP⁻¹ = A,

and we are done.
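The construction in the answer to Question 3.7 can be sketched in NumPy (a sketch under the question's assumptions: A is diagonalisable with non-negative real eigenvalues; tiny negative rounding errors in the computed eigenvalues are clipped to 0 before taking square roots):

```python
import numpy as np

def sqrt_via_diagonalisation(A):
    # B with B @ B = A, following the argument of Question 3.7.
    evals, P = np.linalg.eig(A)
    # Clip tiny negative values caused by rounding before taking square roots.
    root_D = np.diag(np.sqrt(np.maximum(evals, 0.0)))
    return P @ root_D @ np.linalg.inv(P)

A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])   # eigenvalues 0, 4, 12: all non-negative

B = sqrt_via_diagonalisation(A)
print(np.allclose(B @ B, A))      # True: B^2 = A
```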

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

1 Eigenvalues and Eigenvectors

1 Eigenvalues and Eigenvectors Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x

More information

MATH 551 - APPLIED MATRIX THEORY

MATH 551 - APPLIED MATRIX THEORY MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

More information

Section 2.1. Section 2.2. Exercise 6: We have to compute the product AB in two ways, where , B =. 2 1 3 5 A =

Section 2.1. Section 2.2. Exercise 6: We have to compute the product AB in two ways, where , B =. 2 1 3 5 A = Section 2.1 Exercise 6: We have to compute the product AB in two ways, where 4 2 A = 3 0 1 3, B =. 2 1 3 5 Solution 1. Let b 1 = (1, 2) and b 2 = (3, 1) be the columns of B. Then Ab 1 = (0, 3, 13) and

More information

Eigenvalues and eigenvectors of a matrix

Eigenvalues and eigenvectors of a matrix Eigenvalues and eigenvectors of a matrix Definition: If A is an n n matrix and there exists a real number λ and a non-zero column vector V such that AV = λv then λ is called an eigenvalue of A and V is

More information

Linear Dependence Tests

Linear Dependence Tests Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

More information

MAT 242 Test 2 SOLUTIONS, FORM T

MAT 242 Test 2 SOLUTIONS, FORM T MAT 242 Test 2 SOLUTIONS, FORM T 5 3 5 3 3 3 3. Let v =, v 5 2 =, v 3 =, and v 5 4 =. 3 3 7 3 a. [ points] The set { v, v 2, v 3, v 4 } is linearly dependent. Find a nontrivial linear combination of these

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION 4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Solution based on matrix technique Rewrite. ) = 8x 2 1 4x 1x 2 + 5x x1 2x 2 2x 1 + 5x 2

Solution based on matrix technique Rewrite. ) = 8x 2 1 4x 1x 2 + 5x x1 2x 2 2x 1 + 5x 2 8.2 Quadratic Forms Example 1 Consider the function q(x 1, x 2 ) = 8x 2 1 4x 1x 2 + 5x 2 2 Determine whether q(0, 0) is the global minimum. Solution based on matrix technique Rewrite q( x1 x 2 = x1 ) =

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013 Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

Inner products on R n, and more

Inner products on R n, and more Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

[1] Diagonal factorization

[1] Diagonal factorization 8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)

1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

More information

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns L. Vandenberghe EE133A (Spring 2016) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Chapter 20. Vector Spaces and Bases

Chapter 20. Vector Spaces and Bases Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

4.6 Null Space, Column Space, Row Space

4.6 Null Space, Column Space, Row Space NULL SPACE, COLUMN SPACE, ROW SPACE Null Space, Column Space, Row Space In applications of linear algebra, subspaces of R n typically arise in one of two situations: ) as the set of solutions of a linear

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure

UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure UNIT 2 MATRICES - I Matrices - I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress

More information

Linear Least Squares

Linear Least Squares Linear Least Squares Suppose we are given a set of data points {(x i,f i )}, i = 1,...,n. These could be measurements from an experiment or obtained simply by evaluating a function at some points. One

More information

Inverses and powers: Rules of Matrix Arithmetic

Inverses and powers: Rules of Matrix Arithmetic Contents 1 Inverses and powers: Rules of Matrix Arithmetic 1.1 What about division of matrices? 1.2 Properties of the Inverse of a Matrix 1.2.1 Theorem (Uniqueness of Inverse) 1.2.2 Inverse Test 1.2.3

More information

(a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.

(a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. Theorem.7.: (Properties of Triangular Matrices) (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

University of Ottawa

University of Ottawa University of Ottawa Department of Mathematics and Statistics MAT 1302A: Mathematical Methods II Instructor: Alistair Savage Final Exam April 2013 Surname First Name Student # Seat # Instructions: (a)

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

MATH 240 Fall, Chapter 1: Linear Equations and Matrices

MATH 240 Fall, Chapter 1: Linear Equations and Matrices MATH 240 Fall, 2007 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 9th Ed. written by Prof. J. Beachy Sections 1.1 1.5, 2.1 2.3, 4.2 4.9, 3.1 3.5, 5.3 5.5, 6.1 6.3, 6.5, 7.1 7.3 DEFINITIONS

More information

Similar matrices and Jordan form

Similar matrices and Jordan form Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

More information

The Characteristic Polynomial

The Characteristic Polynomial Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

More information

INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL

INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL Eighth Edition, Prentice Hall, 2005. Dr. Grigore CĂLUGĂREANU Department of Mathematics

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

More information

MATH10212 Linear Algebra B Homework 7

MATH10212 Linear Algebra B Homework 7 MATH22 Linear Algebra B Homework 7 Students are strongly advised to acquire a copy of the Textbook: D C Lay, Linear Algebra and its Applications Pearson, 26 (or other editions) Normally, homework assignments

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

DETERMINANTS. b 2. x 2

DETERMINANTS. b 2. x 2 DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

Row and column operations

Row and column operations Row and column operations It is often very useful to apply row and column operations to a matrix. Let us list what operations we re going to be using. 3 We ll illustrate these using the example matrix

More information

Examination paper for TMA4115 Matematikk 3

Examination paper for TMA4115 Matematikk 3 Department of Mathematical Sciences Examination paper for TMA45 Matematikk 3 Academic contact during examination: Antoine Julien a, Alexander Schmeding b, Gereon Quick c Phone: a 73 59 77 82, b 40 53 99

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course

More information

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t)

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t) Solutions HW 9.4.2 Write the given system in matrix form x = Ax + f r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + We write this as ( ) r (t) θ (t) = ( ) ( ) 2 r(t) θ(t) + ( ) sin(t) 9.4.4 Write the given system

More information

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i. Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

More information

Diagonal, Symmetric and Triangular Matrices

Diagonal, Symmetric and Triangular Matrices Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by

More information

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes Solving Polynomial Equations 3.3 Introduction Linear and quadratic equations, dealt within Sections 3.1 and 3.2, are members of a class of equations, called polynomial equations. These have the general

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

some algebra prelim solutions

some algebra prelim solutions some algebra prelim solutions David Morawski August 19, 2012 Problem (Spring 2008, #5). Show that f(x) = x p x + a is irreducible over F p whenever a F p is not zero. Proof. First, note that f(x) has no

More information

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they

More information

Lecture 6. Inverse of Matrix

Lecture 6. Inverse of Matrix Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that

More information

Facts About Eigenvalues

Facts About Eigenvalues Facts About Eigenvalues By Dr David Butler Definitions Suppose A is an n n matrix An eigenvalue of A is a number λ such that Av = λv for some nonzero vector v An eigenvector of A is a nonzero vector v

More information

Presentation 3: Eigenvalues and Eigenvectors of a Matrix

Presentation 3: Eigenvalues and Eigenvectors of a Matrix Colleen Kirksey, Beth Van Schoyck, Dennis Bowers MATH 280: Problem Solving November 18, 2011 Presentation 3: Eigenvalues and Eigenvectors of a Matrix Order of Presentation: 1. Definitions of Eigenvalues

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 205/6 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Practice Math 110 Final. Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16.

Practice Math 110 Final. Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16. Practice Math 110 Final Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16. 1. Let A = 3 1 1 3 3 2. 6 6 5 a. Use Gauss elimination to reduce A to an upper triangular

More information

Matrices, Determinants and Linear Systems

Matrices, Determinants and Linear Systems September 21, 2014 Matrices A matrix A m n is an array of numbers in rows and columns a 11 a 12 a 1n r 1 a 21 a 22 a 2n r 2....... a m1 a m2 a mn r m c 1 c 2 c n We say that the dimension of A is m n (we

More information

Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Pearson Education, Inc.

Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Pearson Education, Inc. 2 Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Theorem 8: Let A be a square matrix. Then the following statements are equivalent. That is, for a given A, the statements are either all true

More information

Topic 1: Matrices and Systems of Linear Equations.

Topic 1: Matrices and Systems of Linear Equations. Topic 1: Matrices and Systems of Linear Equations Let us start with a review of some linear algebra concepts we have already learned, such as matrices, determinants, etc Also, we shall review the method

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8. Matrices II: inverses. 8.1 What is an inverse? Chapter 8 Matrices II: inverses We have learnt how to add subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we

More information

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two

More information

MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix.

MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix. MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix. Matrices Definition. An m-by-n matrix is a rectangular array of numbers that has m rows and n columns: a 11

More information

Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0.

Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0. Basic Terminology for Systems of Equations in a Nutshell E L Lady A system of linear equations is something like the following: x 7x +4x =0 5x +8x x = Note that the number of equations is not required

More information

(January 14, 2009) End k (V ) End k (V/W )

(January 14, 2009) End k (V ) End k (V/W ) (January 14, 29) [16.1] Let p be the smallest prime dividing the order of a finite group G. Show that a subgroup H of G of index p is necessarily normal. Let G act on cosets gh of H by left multiplication.

More information

MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix.

MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix. MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix. Inverse matrix Definition. Let A be an n n matrix. The inverse of A is an n n matrix, denoted

More information

The Hadamard Product

The Hadamard Product The Hadamard Product Elizabeth Million April 12, 2007 1 Introduction and Basic Results As inexperienced mathematicians we may have once thought that the natural definition for matrix multiplication would

More information

Brief Introduction to Vectors and Matrices

Brief Introduction to Vectors and Matrices CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued

More information

STUDY GUIDE LINEAR ALGEBRA. David C. Lay University of Maryland College Park AND ITS APPLICATIONS THIRD EDITION UPDATE

STUDY GUIDE LINEAR ALGEBRA. David C. Lay University of Maryland College Park AND ITS APPLICATIONS THIRD EDITION UPDATE STUDY GUIDE LINEAR ALGEBRA AND ITS APPLICATIONS THIRD EDITION UPDATE David C. Lay University of Maryland College Park Copyright 2006 Pearson Addison-Wesley. All rights reserved. Reproduced by Pearson Addison-Wesley

More information

Section 4.4 Inner Product Spaces

Section 4.4 Inner Product Spaces Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer

More information

Notes on Jordan Canonical Form

Notes on Jordan Canonical Form Notes on Jordan Canonical Form Eric Klavins University of Washington 8 Jordan blocks and Jordan form A Jordan Block of size m and value λ is a matrix J m (λ) having the value λ repeated along the main

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Matrix Algebra and Applications

Matrix Algebra and Applications Matrix Algebra and Applications Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Matrix Algebra and Applications 1 / 49 EC2040 Topic 2 - Matrices and Matrix Algebra Reading 1 Chapters

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

Lecture 5: Singular Value Decomposition SVD (1)

Lecture 5: Singular Value Decomposition SVD (1) EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25-Sep-02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system

More information

Matrix Algebra, Class Notes (part 1) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved.

Matrix Algebra, Class Notes (part 1) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. Matrix Algebra, Class Notes (part 1) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. 1 Sum, Product and Transpose of Matrices. If a ij with

More information

More than you wanted to know about quadratic forms

More than you wanted to know about quadratic forms CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences More than you wanted to know about quadratic forms KC Border Contents 1 Quadratic forms 1 1.1 Quadratic forms on the unit

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

Equations, Inequalities & Partial Fractions

Equations, Inequalities & Partial Fractions Contents Equations, Inequalities & Partial Fractions.1 Solving Linear Equations 2.2 Solving Quadratic Equations 1. Solving Polynomial Equations 1.4 Solving Simultaneous Linear Equations 42.5 Solving Inequalities

More information

Inverses. Stephen Boyd. EE103 Stanford University. October 27, 2015

Inverses. Stephen Boyd. EE103 Stanford University. October 27, 2015 Inverses Stephen Boyd EE103 Stanford University October 27, 2015 Outline Left and right inverses Inverse Solving linear equations Examples Pseudo-inverse Left and right inverses 2 Left inverses a number

More information

Additional Topics in Linear Algebra Supplementary Material for Math 540. Joseph H. Silverman

Additional Topics in Linear Algebra Supplementary Material for Math 540. Joseph H. Silverman Additional Topics in Linear Algebra Supplementary Material for Math 540 Joseph H Silverman E-mail address: jhs@mathbrownedu Mathematics Department, Box 1917 Brown University, Providence, RI 02912 USA Contents

More information

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n. ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

More information

CHARACTERISTIC ROOTS AND VECTORS

CHARACTERISTIC ROOTS AND VECTORS CHARACTERISTIC ROOTS AND VECTORS 1 DEFINITION OF CHARACTERISTIC ROOTS AND VECTORS 11 Statement of the characteristic root problem Find values of a scalar λ for which there exist vectors x 0 satisfying

More information

Matrix Inverse and Determinants

Matrix Inverse and Determinants DM554 Linear and Integer Programming Lecture 5 and Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1 2 3 4 and Cramer s rule 2 Outline 1 2 3 4 and

More information