Chapter 3
Diagonalisation

Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation of symmetric matrices.

Reading

As in the previous chapter, there is no specific essential reading for this chapter. It is essential that you do some reading, but the topics discussed in this chapter are adequately covered in many texts on linear algebra. The list below gives examples of relevant reading. (For full publication details, see Chapter .)

Ostaszewski, A. Mathematics in Economics. Chapter 7, Sections 7., 7.4 and 7.6.
Ostaszewski, A. Advanced Mathematical Methods. Chapter 5, Sections 5. and 5.
Leon, S.J. Linear Algebra with Applications. Chapter 6, Sections 6. and 6.3.
Simon, C.P. and Blume, L. Mathematics for Economists. Chapter 3, Sections 3. and 3.7.

Introduction

One of the most useful techniques in applications of matrices and linear algebra is diagonalisation. Before discussing this, we have to look at the topic of eigenvalues and eigenvectors. We shall explore a number of applications of diagonalisation in the next chapter of the guide.

Eigenvalues and eigenvectors

Definitions

Suppose that A is a square matrix. The number λ is said to be an eigenvalue of A if, for some non-zero vector x, Ax = λx. Any non-zero vector x for which this equation holds is called an eigenvector for eigenvalue λ, or an eigenvector of A corresponding to eigenvalue λ.

Finding eigenvalues and eigenvectors

To determine whether λ is an eigenvalue of A, we need to determine whether there are any non-zero solutions to the matrix equation Ax = λx. Note that the matrix equation Ax = λx is not of the standard form, since the right-hand side is not a fixed vector b, but depends explicitly on x. However, we can rewrite it in standard form. Note that λx = λIx, where I is, as usual, the identity matrix. So the equation is equivalent to Ax = λIx, or Ax − λIx = 0, which is equivalent to (A − λI)x = 0.

Now, a square linear system Bx = 0 has solutions other than x = 0 precisely when |B| = 0. Therefore, taking B = A − λI, λ is an eigenvalue if and only if the determinant of the matrix A − λI is zero. This determinant, p(λ) = |A − λI|, is known as the characteristic polynomial of A, since it is a polynomial in the variable λ. To find the eigenvalues, we solve the equation |A − λI| = 0. Let us illustrate with a very simple example.

Example: Let

A = ( 1  1 )
    ( 2  2 ).

Then

A − λI = ( 1−λ   1  )
         (  2   2−λ )

and the characteristic polynomial is

|A − λI| = (1 − λ)(2 − λ) − 2 = λ² − 3λ + 2 − 2 = λ² − 3λ.

So the eigenvalues are the solutions of λ² − 3λ = 0. To solve this, one could use either the formula for the solutions to a quadratic, or simply observe that the equation is λ(λ − 3) = 0, with solutions λ = 0 and λ = 3. Hence the eigenvalues of A are 0 and 3.
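If you have Python and NumPy available, the eigenvalue calculation can be checked by computer. The sketch below is purely illustrative (it is not part of the guide's syllabus) and assumes the matrix entries of the example as reconstructed above.

```python
# Minimal numerical check of the example above (matrix entries as reconstructed).
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# For a 2x2 matrix the characteristic polynomial is
# lambda^2 - trace(A)*lambda + det(A).
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print(coeffs)                   # approximately [1, -3, 0]
print(np.roots(coeffs))         # roots 3 and 0
print(np.linalg.eigvals(A))     # eigenvalues 3 and 0 (in some order)
```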

3 Cramer s rule will never be of use here since λ being an eigenvalue means that A λi is not invertible.) However, we can solve this fairly directly just by looking at the equations. We have to solve x + x =, x + x =. Clearly both equations are equivalent. From either one, we obtain x = x. We can choose x to be any number we like. Let s take x = ; then we need x = x =. It follows that an eigenvector for is x =. The choice x = was arbitrary; we could have chosen any non-zero number, so, for example, the following are eigenvectors for : 5.,. 5. There are infinitely many eigenvectors for : for each α, α α is an eigenvector for. But be careful not to think that you can choose α = ; for then x becomes the zero vector, and this is never an eigenvector, simply by definition. To find an eigenvector for 3, we solve (A 3I)x =, which is ( This is equivalent to the equations ) x = x ( ). x + x =, x x =, which are together equivalent to the single equation x = x. If we choose x =, we obtain the eigenvector x =. (Again, any non-zero scalar multiple of this vector is also an eigenvector for eigenvalue 3.) We illustrated with a example just for simplicity, but you should be able to work with 3 3 matrices. We give three such examples. Example: Suppose that A = Find the eigenvalues of A and obtain one eigenvector for each eigenvalue. To find the eigenvalues we solve A λi =. Now, 4 λ 4 A λi = 4 λ λ = (4 λ) 4 λ λ λ 4 4 = (4 λ) ((4 λ)(8 λ) 6) + 4 ( 4(4 λ)) = (4 λ) ((4 λ)(8 λ) 6) 6(4 λ). 39

We illustrated with a 2 × 2 example just for simplicity, but you should be able to work with 3 × 3 matrices. We give three such examples.

Example: Suppose that

A = ( 4  0  4 )
    ( 0  4  4 )
    ( 4  4  8 ).

Find the eigenvalues of A and obtain one eigenvector for each eigenvalue.

To find the eigenvalues we solve |A − λI| = 0. Now,

|A − λI| = | 4−λ   0    4  |
           |  0   4−λ   4  |
           |  4    4   8−λ |
         = (4 − λ)((4 − λ)(8 − λ) − 16) + 4(−4(4 − λ))
         = (4 − λ)((4 − λ)(8 − λ) − 16) − 16(4 − λ).

Now, we notice that each of the two terms in this expression has 4 − λ as a factor, so instead of expanding everything, we take 4 − λ out as a common factor, obtaining

|A − λI| = (4 − λ)((4 − λ)(8 − λ) − 16 − 16)
         = (4 − λ)(32 − 12λ + λ² − 32)
         = (4 − λ)(λ² − 12λ)
         = (4 − λ)λ(λ − 12).

It follows that the eigenvalues are 4, 0 and 12. (The characteristic polynomial will not always factorise so easily. Here it was simple because of the common factor (4 − λ). The next example is more difficult.)

To find an eigenvector for 4, we have to solve the equation (A − 4I)x = 0, that is,

( 0  0  4 ) ( x₁ )   ( 0 )
( 0  0  4 ) ( x₂ ) = ( 0 )
( 4  4  4 ) ( x₃ )   ( 0 ).

Of course, we could use row operations, but the system is simple enough to solve straight away. The equations are

4x₃ = 0,    4x₃ = 0,    4x₁ + 4x₂ + 4x₃ = 0,

so x₃ = 0 and x₂ = −x₁. Choosing x₁ = 1, we get the eigenvector

(  1 )
( −1 )
(  0 ).

(Again, we can choose x₁ to be any non-zero number. So the eigenvectors for eigenvalue 4 are all non-zero multiples of this vector.)

Activity 3.1 Determine eigenvectors for 0 and 12. You should find that for λ = 0 your eigenvector is a non-zero multiple of

(  1 )
(  1 )
( −1 )

and that for λ = 12 your eigenvector is a non-zero multiple of

( 1 )
( 1 )
( 2 ).
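The 3 × 3 calculation can be verified in the same way. The sketch below assumes the entries of A as reconstructed above and uses NumPy's routine for symmetric matrices; it should report the eigenvalues 0, 4 and 12 and confirm the eigenvector found for 4.

```python
# Numerical check of the 3x3 example (entries as reconstructed above).
import numpy as np

A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])

print(np.linalg.eigvalsh(A))          # [ 0.  4. 12.]  (A is symmetric)

v = np.array([1.0, -1.0, 0.0])        # the eigenvector found for lambda = 4
print(np.allclose(A @ v, 4 * v))      # True
```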

Example: Let

A = ( −3   1  1 )
    ( −1  −1  1 )
    ( −1   0  0 ).

Given that −1 is an eigenvalue of A, find all the eigenvalues of A.

We calculate the characteristic polynomial of A. Expanding the determinant along the first row,

|A − λI| = | −3−λ    1    1 |
           |  −1   −1−λ   1 |
           |  −1     0   −λ |
         = (−3 − λ)(λ² + λ) + (−λ − 1) − (1 + λ)
         = −λ³ − 4λ² − 5λ − 2.

Now, the fact that −1 is an eigenvalue means that −1 is a solution of the equation |A − λI| = 0, which means that (λ − (−1)), that is, (λ + 1), is a factor of the characteristic polynomial |A − λI|. So this characteristic polynomial can be written in the form (λ + 1)(aλ² + bλ + c). Clearly we must have a = −1 and c = −2 to obtain the correct λ³ term and the correct constant. Given this, b = −3. In other words, the characteristic polynomial is

(λ + 1)(−λ² − 3λ − 2) = −(λ + 1)(λ² + 3λ + 2) = −(λ + 1)(λ + 1)(λ + 2).

That is, |A − λI| = −(λ + 1)²(λ + 2). The eigenvalues are the solutions to |A − λI| = 0, so they are λ = −1 and λ = −2. Note that in this case, there are only two eigenvalues (or, the eigenvalue −1 is repeated, or has multiplicity 2, as it is sometimes said).
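The factoring trick used above, knowing that λ = −1 is a root, can also be carried out as a polynomial division. The sketch below works only with the characteristic polynomial −λ³ − 4λ² − 5λ − 2 obtained above; it does not need the matrix itself.

```python
# Divide the characteristic polynomial by (lambda + 1) and factor the rest.
import numpy as np

p = [-1.0, -4.0, -5.0, -2.0]                     # -l^3 - 4 l^2 - 5 l - 2
quotient, remainder = np.polydiv(p, [1.0, 1.0])  # divide by (lambda + 1)
print(quotient)            # [-1. -3. -2.], i.e. -lambda^2 - 3*lambda - 2
print(remainder)           # zero remainder: lambda = -1 really is a root
print(np.roots(quotient))  # [-2. -1.], so the eigenvalues are -1 (twice) and -2
```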

Example: Let

A = ( 3  −1  1 )
    ( 0   2  0 )
    ( 1  −1  3 ).

Then (check this!) the characteristic polynomial is −λ³ + 8λ² − 20λ + 16. This factorises (check!) as −(λ − 2)(λ − 2)(λ − 4), so the eigenvalues are 2 and 4. There are only two eigenvalues in this case. (We sometimes say that the eigenvalue 2 is repeated, or has multiplicity 2, because (λ − 2)² is a factor of the characteristic polynomial.)

To find an eigenvector for λ = 4, we have to solve the equation (A − 4I)x = 0, that is,

( −1  −1   1 ) ( x₁ )   ( 0 )
(  0  −2   0 ) ( x₂ ) = ( 0 )
(  1  −1  −1 ) ( x₃ )   ( 0 ).

The equations are

−x₁ − x₂ + x₃ = 0,    −2x₂ = 0,    x₁ − x₂ − x₃ = 0,

so x₂ = 0 and x₁ = x₃. Choosing x₃ = 1, we get the eigenvector

( 1 )
( 0 )
( 1 ).

For λ = 2, we have to solve the equation (A − 2I)x = 0, that is,

( 1  −1  1 ) ( x₁ )   ( 0 )
( 0   0  0 ) ( x₂ ) = ( 0 )
( 1  −1  1 ) ( x₃ )   ( 0 ).

This system is equivalent to the single equation x₁ − x₂ + x₃ = 0. (Convince yourself!) Choosing x₃ = 0 and x₂ = 1 we have x₁ = 1, so we obtain the eigenvector

( 1 )
( 1 )
( 0 ).

Complex eigenvalues

Although we shall only deal in this subject with real matrices (that is, matrices whose entries are real numbers), it is possible for such real matrices to have complex eigenvalues. This is not something you have to spend much time on, but you have to be aware of it. We briefly describe complex numbers. The complex numbers are based on the complex number i, which is defined to be the square root of −1. (Of course, no such real number exists.) Any complex number z can be written in the form z = a + bi where a, b are real numbers. We call a the real part and b the imaginary part of z. Of course, any real number is a complex number, since a = a + 0i. (For more discussion of complex numbers, see Appendix A3 of Simon and Blume.) The following example shows a matrix with complex eigenvalues, and it also demonstrates how to deal with complex numbers.

Example: Consider the matrix

A = ( 1  −1 )
    ( 9   1 ).

We shall see that it has complex eigenvalues. First, the characteristic polynomial |A − λI| is (1 − λ)² + 9 = λ² − 2λ + 10. (Check this!) Using the formula for the roots of a quadratic equation, the eigenvalues are

(2 ± √−36)/2.

Now √−36 = √((36)(−1)) = √36 √−1 = 6√−1 = 6i. So the eigenvalues are the complex numbers 1 + 3i and 1 − 3i. Let's proceed with finding eigenvectors. To find an eigenvector for 1 + 3i, we solve (A − (1 + 3i)I)x = 0, which is

( −3i  −1  ) ( x₁ )   ( 0 )
(  9   −3i ) ( x₂ ) = ( 0 ).

This is equivalent to the equations

−3ix₁ − x₂ = 0,    9x₁ − 3ix₂ = 0.

But the second equation is just 3i times the first, so both are equivalent. Taking the second, we see that x₁ = (i/3)x₂. So an eigenvector is (taking x₂ = 3) (i, 3)ᵀ. For λ = 1 − 3i, we end up solving the system

3ix₁ − x₂ = 0,    9x₁ + 3ix₂ = 0,

a solution of which is (−i, 3)ᵀ.

You should be aware, then, that even though we are not dealing with matrices that have complex numbers as their entries, the possibility still exists that eigenvalues (and eigenvectors) will involve complex numbers. However, if a matrix is symmetric (that is, it equals its transpose), then it certainly has real eigenvalues. This useful fact, which we shall prove later, is important when we consider quadratic forms in the next chapter.
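NumPy reports complex eigenvalues of real matrices directly, which gives a quick way to see the phenomenon described above. The first matrix is the example as reconstructed; the second is an arbitrary symmetric matrix added here only to illustrate that symmetric matrices give real eigenvalues.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [9.0,  1.0]])
print(np.linalg.eigvals(A))        # [1.+3.j 1.-3.j]

S = np.array([[2.0, 5.0],
              [5.0, 2.0]])         # symmetric, so its eigenvalues are real
print(np.linalg.eigvals(S))        # 7 and -3
```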

Diagonalisation of a square matrix

Square matrices A and B are similar if there is an invertible matrix P such that P⁻¹AP = B. The matrix A is diagonalisable if it is similar to a diagonal matrix; in other words, if there is a diagonal matrix D and an invertible matrix P such that P⁻¹AP = D.

Suppose that the matrix A is diagonalisable, and that P⁻¹AP = D, where D is the diagonal matrix D = diag(λ₁, λ₂, ..., λₙ), with entries λ₁, λ₂, ..., λₙ down the main diagonal and zeros elsewhere. (Note the useful notation for describing the diagonal matrix D.) Then P⁻¹AP = D is equivalent to AP = PD. If the columns of P are the vectors v₁, v₂, ..., vₙ, then

AP = A(v₁ ... vₙ) = (Av₁ ... Avₙ)

and

PD = (v₁ ... vₙ) diag(λ₁, ..., λₙ) = (λ₁v₁ ... λₙvₙ).

So this means that

Av₁ = λ₁v₁,  Av₂ = λ₂v₂,  ...,  Avₙ = λₙvₙ.

The fact that P⁻¹ exists means that none of the vectors vᵢ is the zero vector. So this means that (for i = 1, 2, ..., n) λᵢ is an eigenvalue of A and vᵢ is a corresponding eigenvector. Since P has an inverse, these eigenvectors are linearly independent. Therefore, A has n linearly independent eigenvectors. Conversely, if A has n linearly independent eigenvectors, then the matrix P whose columns are these eigenvectors will be invertible, and we will have P⁻¹AP = D where D is a diagonal matrix with entries equal to the eigenvalues of A. We have therefore established the following result.

Theorem 3.1 An n × n matrix A is diagonalisable if and only if it has n linearly independent eigenvectors. Suppose that this is the case, and let v₁, ..., vₙ be n linearly independent eigenvectors, where vᵢ is an eigenvector for eigenvalue λᵢ. Then the matrix P = (v₁ ... vₙ) is such that P⁻¹ exists, and P⁻¹AP = D where D = diag(λ₁, ..., λₙ).

There is a more sophisticated way to think about this result, in terms of change of basis and matrix representations of linear transformations. Suppose that T is the linear transformation corresponding to A, so that T(x) = Ax for all x. Suppose that A has a set of n linearly independent eigenvectors B = {x₁, x₂, ..., xₙ}, corresponding (respectively) to the eigenvalues λ₁, ..., λₙ. Since this is a linearly independent set of size n in Rⁿ, B is a basis for Rⁿ. By Theorem .8, the matrix of T with respect to B is

A_T[B, B] = ([T(x₁)]_B ... [T(xₙ)]_B).

But T(xᵢ) = Axᵢ = λᵢxᵢ, so the coordinate vector of T(xᵢ) with respect to B is

[T(xᵢ)]_B = (0, 0, ..., 0, λᵢ, 0, ..., 0)ᵀ,

which has λᵢ in entry i and all other entries zero. Therefore

A_T[B, B] = diag(λ₁, ..., λₙ) = D.

But by Theorem ., A_T[B, B] = P⁻¹ A_T P, where P = (x₁ ... xₙ) and A_T is the matrix representing T, which in this case is simply A itself. We therefore see that P⁻¹AP = A_T[B, B] = D, and so the matrix P diagonalises A.
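Theorem 3.1 suggests a rough computational test: compute the eigenvectors and check whether they span Rⁿ. The sketch below does this by looking at the rank of the matrix of eigenvectors. It is only a heuristic (numerically borderline cases need more care), and the second matrix is a made-up non-diagonalisable example, not one from the text.

```python
import numpy as np

def looks_diagonalisable(A, tol=1e-10):
    """Heuristic test: does the eigenvector matrix of A have full rank?"""
    _, vecs = np.linalg.eig(A)                 # eigenvectors appear as columns
    return np.linalg.matrix_rank(vecs, tol=tol) == A.shape[0]

print(looks_diagonalisable(np.array([[1.0, 1.0], [2.0, 2.0]])))  # True
print(looks_diagonalisable(np.array([[3.0, 1.0], [0.0, 3.0]])))  # False (defective)
```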

Example: Consider again the matrix

A = ( 4  0  4 )
    ( 0  4  4 )
    ( 4  4  8 ).

We have seen that it has three distinct eigenvalues, 4, 0 and 12, and that eigenvectors corresponding to eigenvalues 4, 0, 12 are (in that order)

(  1 )   (  1 )   ( 1 )
( −1 ),  (  1 ),  ( 1 ).
(  0 )   ( −1 )   ( 2 )

We now form the matrix P whose columns are these eigenvectors:

P = (  1   1  1 )
    ( −1   1  1 )
    (  0  −1  2 ).

Then, according to the theory, P should have an inverse, and we should have P⁻¹AP = D = diag(4, 0, 12). To check that this is true, we could calculate P⁻¹ and evaluate the product. The inverse may be calculated using either elementary row operations or determinants. (Matrix inversion is not part of this subject: however, it is part of the pre-requisite subject Mathematics for economists. You should therefore know how to invert a matrix.)

Activity 3.2 Calculate P⁻¹ and verify that P⁻¹AP = D.
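Activity 3.2 can also be checked by machine. The sketch assumes the entries of A and P as reconstructed above; the product P⁻¹AP should come out as diag(4, 0, 12).

```python
import numpy as np

A = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [4.0, 4.0, 8.0]])
P = np.array([[ 1.0,  1.0, 1.0],
              [-1.0,  1.0, 1.0],
              [ 0.0, -1.0, 2.0]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))        # diagonal matrix with entries 4, 0, 12
```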

Not all n × n matrices have n linearly independent eigenvectors, as the following example shows.

Example: The matrix

A = (  4  1 )
    ( −1  2 )

has characteristic polynomial λ² − 6λ + 9 = (λ − 3)², so there is only one eigenvalue, λ = 3. The eigenvectors are the non-zero solutions to (A − 3I)x = 0: that is,

(  1   1 ) ( x₁ )   ( 0 )
( −1  −1 ) ( x₂ ) = ( 0 ).

This is equivalent to the single equation x₁ + x₂ = 0, with general solution x₁ = −x₂. Setting x₂ = r, we see that the solution set of the system consists of all vectors of the form

( −r )
(  r )

as r runs through all non-zero real numbers. So the eigenvectors are precisely the non-zero scalar multiples of the fixed vector

( −1 )
(  1 ).

Any two eigenvectors are therefore multiples of each other and hence form a linearly dependent set. In other words, there are not two linearly independent eigenvectors, and the matrix is not diagonalisable.

The following result is useful. It shows that if a matrix has n different eigenvalues then it is diagonalisable.

Theorem 3.2 Eigenvectors corresponding to different eigenvalues are linearly independent. So if an n × n matrix has n different eigenvalues, then it has a set of n linearly independent eigenvectors and is therefore diagonalisable.

For a proof, see Ostaszewski, Mathematics in Economics, Section 7.4.

It is not, however, necessary for the eigenvalues to be distinct. What is needed for diagonalisation is a set of n linearly independent eigenvectors, and this can happen even when there is a repeated eigenvalue (that is, when there are fewer than n different eigenvalues). The following example illustrates this.

Example: We considered the matrix

A = ( 3  −1  1 )
    ( 0   2  0 )
    ( 1  −1  3 )

above, and we saw that it has only two eigenvalues, 4 and 2. If we want to diagonalise it, we need to find three linearly independent eigenvectors. We found that an eigenvector corresponding to λ = 4 is (1, 0, 1)ᵀ, and that, for λ = 2, the eigenvectors are given by the non-zero solutions to the system consisting of just the single equation x₁ − x₂ + x₃ = 0. Above, we simply wanted to find an eigenvector, but now we want to find two which, together with the eigenvector for λ = 4, form a linearly independent set. Now, the system for the eigenvectors corresponding to λ = 2 has just one equation and is therefore of rank 1; it follows that the solution set is two-dimensional. Let's see exactly what the general solution looks like. We have x₁ = x₂ − x₃, and x₂, x₃ can be chosen independently of each other. Setting x₃ = r and x₂ = s, we see that the general solution is

( x₁ )   ( s − r )     ( 1 )     ( −1 )
( x₂ ) = (   s   ) = s ( 1 ) + r (  0 ),    (r, s ∈ R).
( x₃ )   (   r   )     ( 0 )     (  1 )

This shows that the solution space (the eigenspace, as it is called in this instance) is spanned by the two linearly independent vectors

( 1 )   ( −1 )
( 1 ),  (  0 ).
( 0 )   (  1 )

Now, each of these is an eigenvector corresponding to eigenvalue 2 and, together with our eigenvector for λ = 4, the three form a linearly independent set. So there are three linearly independent eigenvectors, even though two of them correspond to the same eigenvalue. The matrix is therefore diagonalisable. We may take

P = ( 1  1  −1 )
    ( 0  1   0 )
    ( 1  0   1 ).

Then (check!) P⁻¹AP = D = diag(4, 2, 2).
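The two-dimensional eigenspace in the last example is the null space of A − 2I, and its dimension can be read off from a singular value decomposition. A short sketch, assuming the reconstructed entries of A:

```python
import numpy as np

A = np.array([[3.0, -1.0, 1.0],
              [0.0,  2.0, 0.0],
              [1.0, -1.0, 3.0]])

M = A - 2 * np.eye(3)                  # lambda = 2
_, s, Vt = np.linalg.svd(M)
k = int(np.sum(s < 1e-10))             # number of (numerically) zero singular values
print("eigenspace dimension:", k)      # 2
for v in Vt[len(s) - k:]:              # an orthonormal basis of the null space
    print(np.allclose(A @ v, 2 * v))   # True, True
```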

Orthogonal diagonalisation of symmetric matrices

The matrix we considered above,

A = ( 4  0  4 )
    ( 0  4  4 )
    ( 4  4  8 ),

is symmetric: that is, its transpose Aᵀ is equal to itself. It turns out that such matrices are always diagonalisable. They are, furthermore, diagonalisable in a special way. A matrix P is orthogonal if PᵀP = PPᵀ = I: that is, if P has inverse Pᵀ. A matrix A is said to be orthogonally diagonalisable if there is an orthogonal matrix P such that PᵀAP = D where D is a diagonal matrix. Note that Pᵀ = P⁻¹, so PᵀAP = P⁻¹AP. The argument given above shows that the columns of P must be n linearly independent eigenvectors of A. But the condition that PᵀP = I means something else, as we now discuss.

Suppose that the columns of P are x₁, x₂, ..., xₙ, so that P = (x₁ x₂ ... xₙ). Then the rows of the transpose Pᵀ are x₁ᵀ, x₂ᵀ, ..., xₙᵀ. Calculating the matrix product PᵀP, we find that the (i, j)-entry of PᵀP is xᵢᵀxⱼ. But, since PᵀP = I, we must have

xᵢᵀxᵢ = 1  (i = 1, 2, ..., n),    xᵢᵀxⱼ = 0  (i ≠ j).

We say that vectors x, y are orthogonal if the matrix product xᵀy is 0. (Orthogonality will be discussed in more detail later.) So, any two of the eigenvectors x₁, ..., xₙ must be orthogonal. Furthermore, for i = 1, 2, ..., n, xᵢᵀxᵢ = 1. The length of a vector x is

‖x‖ = √(x₁² + x₂² + ... + xₙ²) = √(xᵀx).

So, not only must any two of these eigenvectors be orthogonal, but each must have length 1. We shall discuss orthogonality in more detail in the next chapter. For the moment, we have the following result.

Theorem 3.3 If the matrix A is symmetric (Aᵀ = A) then eigenvectors corresponding to different eigenvalues are orthogonal.
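The defining property PᵀP = I is easy to test numerically. The matrix below is just a familiar illustration (a 2 × 2 rotation), not taken from the text.

```python
import numpy as np

t = 0.3
P = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(P.T @ P, np.eye(2)))       # True: P is orthogonal
print(np.allclose(np.linalg.inv(P), P.T))    # True: the inverse is the transpose
```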

Proof Suppose that λ and µ are any two different eigenvalues of A and that x, y are corresponding eigenvectors. Then Ax = λx and Ay = µy. The trick in this proof is to find two different expressions for the product xᵀAy (which then must, of course, be equal to each other). Note that the matrix product xᵀAy is a 1 × 1 matrix or, equivalently, a number. First, since Ay = µy, we have xᵀAy = xᵀ(µy) = µxᵀy. But also, since Ax = λx, we have (Ax)ᵀ = (λx)ᵀ = λxᵀ. Now, for any matrices M, N, (MN)ᵀ = NᵀMᵀ, so (Ax)ᵀ = xᵀAᵀ. But Aᵀ = A (because A is symmetric), so xᵀA = λxᵀ and hence xᵀAy = λxᵀy. We therefore have two different expressions for xᵀAy: it equals µxᵀy and λxᵀy. Hence µxᵀy = λxᵀy, or (µ − λ)xᵀy = 0. But since λ ≠ µ (they are different eigenvalues), we have µ − λ ≠ 0. We deduce, therefore, that xᵀy = 0. But this says precisely that x and y are orthogonal, which is exactly what we wanted to prove.

This is quite a sneaky proof: the trick is to remember to consider xᵀAy.

The result just presented shows that if an n × n symmetric matrix has exactly n different eigenvalues then any n corresponding eigenvectors are orthogonal to one another. Since we may take the eigenvectors to have length 1, this shows that the matrix is orthogonally diagonalisable. The following result makes this precise.

Theorem 3.4 Suppose that the symmetric matrix A has n different eigenvalues. Take n corresponding eigenvectors, each of length 1. (Recall that the length of a vector x is just ‖x‖ = √(x₁² + ... + xₙ²).) Form the matrix P which has these eigenvectors as its columns. Then P⁻¹ = Pᵀ (that is, P is an orthogonal matrix) and PᵀAP = D, the diagonal matrix whose entries are the eigenvalues of A.

(Note that we have only shown here that symmetric matrices with n different eigenvalues are orthogonally diagonalisable, but it turns out that all symmetric matrices are orthogonally diagonalisable.)

Example: We work with the same matrix we used earlier,

A = ( 4  0  4 )
    ( 0  4  4 )
    ( 4  4  8 ).

As we have already observed, this is symmetric. We have seen that it has three distinct eigenvalues, 4, 0 and 12. Earlier, we found that eigenvectors for eigenvalues 4, 0, 12 are (in that order)

(  1 )   (  1 )   ( 1 )
( −1 ),  (  1 ),  ( 1 ).
(  0 )   ( −1 )   ( 2 )

Activity 3.3 Convince yourself that any two of these three eigenvectors are orthogonal.
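Activity 3.3 amounts to checking three dot products, which the sketch below does for the eigenvectors listed above (vectors as reconstructed).

```python
import numpy as np

v4  = np.array([1.0, -1.0, 0.0])      # eigenvector for 4
v0  = np.array([1.0,  1.0, -1.0])     # eigenvector for 0
v12 = np.array([1.0,  1.0, 2.0])      # eigenvector for 12
print(v4 @ v0, v4 @ v12, v0 @ v12)    # 0.0 0.0 0.0, as Theorem 3.3 predicts
```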

Now, these eigenvectors are not of length 1. For example, the first one has length √(1² + (−1)² + 0²) = √2. If we divide each entry of it by √2, we will indeed obtain an eigenvector of length 1:

(  1/√2 )
( −1/√2 )
(   0   ).

We can similarly normalise the other two vectors, obtaining

(  1/√3 )   ( 1/√6 )
(  1/√3 ),  ( 1/√6 ).
( −1/√3 )   ( 2/√6 )

Activity 3.4 Make sure you understand this normalisation.

We now form the matrix P whose columns are these normalised eigenvectors:

P = (  1/√2   1/√3   1/√6 )
    ( −1/√2   1/√3   1/√6 )
    (   0    −1/√3   2/√6 ).

Then P is orthogonal and PᵀAP = D = diag(4, 0, 12).

Activity 3.5 Check that P is orthogonal by calculating PᵀP.

Example: Let

A = ( 7  0  9 )
    ( 0  2  0 )
    ( 9  0  7 ).

Note that A is symmetric. We find an orthogonal matrix P such that PᵀAP is a diagonal matrix. The characteristic polynomial of A is

|A − λI| = | 7−λ   0    9  |
           |  0   2−λ   0  |
           |  9    0   7−λ |
         = (2 − λ)((7 − λ)(7 − λ) − 81)
         = (2 − λ)(λ² − 14λ − 32)
         = (2 − λ)(λ − 16)(λ + 2),

where we have expanded the determinant using the middle row. So the eigenvalues are 2, 16 and −2. An eigenvector for λ = 2 is given by 5x + 9z = 0, 9x + 5z = 0. This means x = z = 0. So we may take (0, 1, 0)ᵀ. This already has length 1, so there is no need to normalise it. (Recall that we need three eigenvectors which are of length 1.) For λ = −2 we find that an eigenvector is (1, 0, −1)ᵀ (or some multiple of this). To normalise it (that is, to make it of length 1), we divide by its length, which is √2, obtaining (1/√2)(1, 0, −1)ᵀ. For λ = 16, we find a normalised eigenvector is (1/√2)(1, 0, 1)ᵀ. It follows that if we let

P = ( 0   1/√2   1/√2 )
    ( 1    0      0   )
    ( 0  −1/√2   1/√2 ),

then P is orthogonal and PᵀAP = D = diag(2, −2, 16). Check this!
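The final "Check this!" can be done numerically too. With the normalised eigenvectors as columns (entries as reconstructed above), P should satisfy PᵀP = I and PᵀAP = diag(2, −2, 16).

```python
import numpy as np

A = np.array([[7.0, 0.0, 9.0],
              [0.0, 2.0, 0.0],
              [9.0, 0.0, 7.0]])
r = 1 / np.sqrt(2)
P = np.array([[0.0,  r,   r  ],
              [1.0,  0.0, 0.0],
              [0.0, -r,   r  ]])
print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal
print(np.round(P.T @ A @ P, 10))         # diag(2, -2, 16)
```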

Learning outcomes

This chapter has discussed eigenvalues and eigenvectors and the very important technique of diagonalisation. We shall see in the next chapter how useful a technique diagonalisation is. At the end of this chapter and the relevant reading, you should be able to:

- explain what is meant by eigenvectors and eigenvalues, and by diagonalisation
- find eigenvalues and corresponding eigenvectors for a square matrix
- diagonalise a diagonalisable matrix
- recognise what diagonalisation says in terms of change of basis and matrix representation of linear transformations
- perform orthogonal diagonalisation on a symmetric matrix that has distinct eigenvalues.

Sample examination questions

The following are typical exam questions, or parts of questions.

Question 3.1 Find the eigenvalues of the matrix A and find an eigenvector for each eigenvalue. Hence find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Question 3.2 Prove that the matrix is not diagonalisable.

Question 3.3 Let A be any (real) n × n matrix and suppose λ is an eigenvalue of A. Show that {x : Ax = λx}, the set of eigenvectors for eigenvalue λ together with the zero vector, is a subspace of Rⁿ.

Question 3.4 Show that the given vector x is an eigenvector of A. What is the corresponding eigenvalue? Find the other eigenvalues of A, and an eigenvector for each of them. Find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Question 3.5 Find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Question 3.6 Find the eigenvalues of A, and an eigenvector for each of them. Find an orthogonal matrix P and a diagonal matrix D such that PᵀAP = D.

Question 3.7 Suppose that A is a real diagonalisable matrix and that all the eigenvalues of A are non-negative. Prove that there is a matrix B such that B² = A.

Sketch answers or comments on selected questions

Question 3.1 The characteristic polynomial is −λ³ + 14λ² − 48λ, which is easily factorised as −λ(λ − 6)(λ − 8). So the eigenvalues are 0, 6 and 8. Corresponding eigenvectors (and their non-zero multiples) are calculated in the usual way; taking them as the columns of P, we may take D = diag(0, 6, 8).

Question 3.2 You can check that there is only one eigenvalue and that the corresponding eigenvectors are all the scalar multiples of a single fixed vector. So there cannot be two linearly independent eigenvectors, and hence the matrix is not diagonalisable.

Question 3.3 Denote the set described by W. First, 0 ∈ W. Suppose now that x, y are in W and that α ∈ R. We need to show that x + y and αx are also in W. We know that Ax = λx and Ay = λy, so

A(x + y) = Ax + Ay = λx + λy = λ(x + y)

and

A(αx) = α(Ax) = α(λx) = λ(αx),

and so x + y and αx are indeed in W.

Question 3.4 It is given that x is an eigenvector. To determine the corresponding eigenvalue we work out Ax. This should be λx where λ is the required eigenvalue. Performing the calculation, we see that Ax = x and so λ = 1.

The characteristic polynomial of A is p(λ) = −λ³ + 2λ² + λ − 2. Since λ = 1 is a root, we know that (λ − 1) is a factor. Factorising, we obtain

p(λ) = (λ − 1)(−λ² + λ + 2) = −(λ − 1)(λ − 2)(λ + 1),

so the other eigenvalues are λ = 2 and λ = −1. Corresponding eigenvectors are found in the usual way; with these as the columns of P (in the same order as the eigenvalues 1, 2, −1), we may take D = diag(1, 2, −1).

Question 3.5 This is slightly more complicated since there are not 3 distinct eigenvalues: one eigenvalue occurs twice. An eigenvector for the non-repeated eigenvalue is found in the usual way. We need to find a set of 3 linearly independent eigenvectors, so we need another two coming from the eigenspace corresponding to the repeated eigenvalue. You should find that this eigenspace is two-dimensional and has a basis of two vectors. These two vectors, together with the eigenvector for the other eigenvalue, do indeed form a linearly independent set. Therefore we may take P to have these three vectors as its columns, with D the diagonal matrix of the corresponding eigenvalues.

Question 3.6 The characteristic polynomial turns out to be p(λ) = −λ³ + 4λ² − 3λ = −λ(λ − 3)(λ − 1), so the eigenvalues are 0, 1 and 3. Corresponding eigenvectors are found in the usual way. To perform orthogonal diagonalisation (rather than simply diagonalisation) we need to normalise these (that is, make them of length 1 by dividing each by its length). The lengths of the vectors are (respectively) √3, √2 and √6. If we take P to have the normalised eigenvectors as its columns, then PᵀAP = diag(0, 1, 3) = D.

Question 3.7 Since A can be diagonalised, we have P⁻¹AP = D for some P, where D = diag(λ₁, ..., λₙ), these entries being the eigenvalues of A. It is given that all λᵢ ≥ 0. We have A = PDP⁻¹. Let

B = P diag(√λ₁, √λ₂, ..., √λₙ) P⁻¹.

Then

B² = P diag(√λ₁, ..., √λₙ) P⁻¹ P diag(√λ₁, ..., √λₙ) P⁻¹ = P diag(λ₁, λ₂, ..., λₙ) P⁻¹ = PDP⁻¹ = A,

and we are done.
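The construction in the answer to Question 3.7 can be tried out numerically: diagonalise, take square roots of the eigenvalues, and multiply back. The matrix below is an arbitrary example with positive eigenvalues, chosen here only for illustration.

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 2.0]])             # eigenvalues 6 and 1, both positive

lam, P = np.linalg.eig(A)              # columns of P are eigenvectors
B = P @ np.diag(np.sqrt(lam)) @ np.linalg.inv(P)
print(np.allclose(B @ B, A))           # True: B times B recovers A
```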
