7.2 Positive Definite Matrices and the SVD


This chapter about applications of $A^TA$ depends on two important ideas in linear algebra. These ideas have big parts to play, and we focus on them now.

1. Positive definite symmetric matrices (both $A^TA$ and $A^TCA$ are positive definite)
2. Singular Value Decomposition ($A = U\Sigma V^T$ gives perfect bases for the 4 subspaces)

Those are orthogonal matrices $U$ and $V$ in the SVD. Their columns are orthonormal eigenvectors of $AA^T$ and $A^TA$. The entries in the diagonal matrix $\Sigma$ are the square roots of the eigenvalues. The matrices $AA^T$ and $A^TA$ have the same nonzero eigenvalues. Section 6.5 showed that the eigenvectors of these symmetric matrices are orthogonal. I will show now that the eigenvalues of $A^TA$ are positive, if $A$ has independent columns.

Start with $A^TAx = \lambda x$. Then $x^TA^TAx = \lambda x^Tx$. Therefore $\|Ax\|^2 = \lambda \|x\|^2$ and $\lambda > 0$.

I separated $x^TA^TAx$ into $(Ax)^T(Ax) = \|Ax\|^2$. We don't have $\lambda = 0$ because $A^TA$ is invertible (since $A$ has independent columns). The eigenvalues $\lambda$ must be positive.

Those are the key steps to understanding positive definite matrices. They give us three tests on $S$, three ways to recognize when a symmetric matrix $S$ is positive definite:

Positive definite symmetric
1. All the eigenvalues of $S$ are positive.
2. The energy $x^TSx$ is positive for all nonzero vectors $x$.
3. $S$ has the form $S = A^TA$ with independent columns in $A$.

There is also a test on the pivots (all $> 0$) and a test on $n$ determinants (all $> 0$).

Example 1  Are these matrices positive definite? When their eigenvalues are positive, construct matrices $A$ with $S = A^TA$ and find the positive energy $x^TSx$.

$$\text{(a) } S = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix} \qquad \text{(b) } S = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix} \qquad \text{(c) } S = \begin{pmatrix} 4 & 5 \\ 5 & 4 \end{pmatrix}$$

Solution  The answers are yes, yes, and no. The eigenvalues of those matrices $S$ are

(a) 4 and 1: positive   (b) 9 and 1: positive   (c) 9 and $-1$: not positive.

A quicker test than eigenvalues uses two determinants: the 1 by 1 determinant $S_{11}$ and the 2 by 2 determinant of $S$. Example (b) has $S_{11} = 5$ and $\det S = 25 - 16 = 9$ (pass). Example (c) has $S_{11} = 4$ but $\det S = 16 - 25 = -9$ (fail the test).
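The three tests are easy to try numerically. Here is a minimal NumPy sketch (mine, not the book's; the function name and tolerance are arbitrary choices) that applies the eigenvalue test, a random spot-check of the energy test, and the determinant test to the three matrices of Example 1:

```python
import numpy as np

def pd_tests(S, tol=1e-12):
    """Three checks that a symmetric matrix S is positive definite."""
    # Test 1: all eigenvalues positive (eigvalsh assumes S is symmetric).
    eig_ok = bool(np.all(np.linalg.eigvalsh(S) > tol))
    # Test 2: energy x^T S x > 0, spot-checked on random nonzero vectors.
    rng = np.random.default_rng(0)
    energy_ok = all(x @ S @ x > tol for x in rng.standard_normal((100, len(S))))
    # Determinant test: all upper-left determinants positive.
    det_ok = all(np.linalg.det(S[:k, :k]) > tol for k in range(1, len(S) + 1))
    return eig_ok, energy_ok, det_ok

for S in (np.array([[4., 0.], [0., 1.]]),    # (a) positive definite
          np.array([[5., 4.], [4., 5.]]),    # (b) positive definite
          np.array([[4., 5.], [5., 4.]])):   # (c) not positive definite
    print(pd_tests(S))   # (True, True, True) for (a) and (b); all False for (c)
```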

Positive energy is equivalent to positive eigenvalues, when $S$ is symmetric. Let me test the energy $x^TSx$ in all three examples. Two examples pass and the third fails:

$$\begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 4x_1^2 + x_2^2 > 0 \qquad \text{Positive energy when } x \neq 0$$

$$\begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 5x_1^2 + 8x_1x_2 + 5x_2^2 \qquad \text{Positive energy when } x \neq 0$$

$$\begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 4 & 5 \\ 5 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 4x_1^2 + 10x_1x_2 + 4x_2^2 \qquad \text{Energy } -2 \text{ when } x = (1, -1)$$

Positive energy is a fundamental property. This is the best definition of positive definiteness.

When the eigenvalues are positive, there will be many matrices $A$ that give $A^TA = S$. One choice of $A$ is symmetric and positive definite! Then $A^TA$ is $A^2$, and this choice $A = \sqrt{S}$ is a true square root of $S$. The successful examples (a) and (b) have $S = A^2$:

$$\begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}^2 \qquad \text{and} \qquad \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}^2$$

We know that all symmetric matrices have the form $S = V\Lambda V^T$ with orthonormal eigenvectors in $V$. The diagonal matrix $\Lambda$ has a square root $\sqrt{\Lambda}$, when all eigenvalues are positive. In this case $A = \sqrt{S} = V\sqrt{\Lambda}\,V^T$ is the symmetric positive definite square root:

$$A^TA = \sqrt{S}\sqrt{S} = (V\sqrt{\Lambda}\,V^T)(V\sqrt{\Lambda}\,V^T) = V\sqrt{\Lambda}\sqrt{\Lambda}\,V^T = S \quad \text{because } V^TV = I.$$

Starting from this unique square root $\sqrt{S}$, other choices of $A$ come easily. Multiply $\sqrt{S}$ by any matrix $Q$ that has orthonormal columns (so that $Q^TQ = I$). Then $Q\sqrt{S}$ is another choice for $A$ (not a symmetric choice). In fact all choices come this way:

$$A^TA = (Q\sqrt{S})^T(Q\sqrt{S}) = \sqrt{S}\,Q^TQ\,\sqrt{S} = S. \tag{1}$$

Example 1 (continued)  Choose a particular $Q$ to multiply $\sqrt{S}$. Then $A = Q\sqrt{S}$ is a particular (nonsymmetric) choice of $A$ with $A^TA = S$.
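A short sketch (not from the text) of the symmetric square root for example (b), using NumPy's `eigh` for $S = V\Lambda V^T$; the rotation angle for $Q$ is an arbitrary choice to illustrate equation (1):

```python
import numpy as np

S = np.array([[5., 4.], [4., 5.]])           # example (b)

# Symmetric square root: S = V Lambda V^T gives sqrt(S) = V sqrt(Lambda) V^T.
lam, V = np.linalg.eigh(S)
sqrtS = V @ np.diag(np.sqrt(lam)) @ V.T
print(sqrtS)                                 # [[2, 1], [1, 2]]
print(np.allclose(sqrtS @ sqrtS, S))         # True: a true square root

# Any Q with orthonormal columns gives another A = Q sqrt(S) with A^T A = S.
theta = 0.7                                  # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = Q @ sqrtS
print(np.allclose(A.T @ A, S))               # True, as in equation (1)
```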

Positive Semidefinite Matrices

Positive semidefinite matrices include positive definite matrices, and more. Eigenvalues of $S$ can be zero. Columns of $A$ can be dependent. The energy $x^TSx$ can be zero, but not negative. This gives new equivalent conditions on a (possibly singular) matrix $S = S^T$.

1′ All eigenvalues of $S$ satisfy $\lambda \geq 0$ (semidefinite allows zero eigenvalues).
2′ The energy is nonnegative for every $x$: $x^TSx \geq 0$ (zero energy is allowed).
3′ $S$ has the form $A^TA$ (every $A$ is allowed; its columns can be dependent).

Example 2  The first two matrices are singular and positive semidefinite, but not the third:

$$\text{(d) } S = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \qquad \text{(e) } S = \begin{pmatrix} 4 & 4 \\ 4 & 4 \end{pmatrix} \qquad \text{(f) } S = \begin{pmatrix} -4 & 4 \\ 4 & -4 \end{pmatrix}$$

The eigenvalues are $1, 0$ and $8, 0$ and $-8, 0$. The energies $x^TSx$ are $x_2^2$ and $4(x_1 + x_2)^2$ and $-4(x_1 - x_2)^2$. So the third matrix is actually negative semidefinite.

Singular Value Decomposition

Now we start with $A$, square or rectangular. Applications also start this way: the matrix comes from the model. The SVD splits any matrix into orthogonal $U$ times diagonal $\Sigma$ times orthogonal $V^T$. Those orthogonal factors will give orthogonal bases for the four fundamental subspaces associated with $A$.

Let me describe the goal for any $m$ by $n$ matrix, and then how to achieve that goal.

Find orthonormal bases $v_1, \ldots, v_n$ for $\mathbf{R}^n$ and $u_1, \ldots, u_m$ for $\mathbf{R}^m$ so that

$$Av_1 = \sigma_1 u_1 \quad \ldots \quad Av_r = \sigma_r u_r \qquad Av_{r+1} = 0 \quad \ldots \quad Av_n = 0 \tag{2}$$

The rank of $A$ is $r$. Those requirements in (2) are expressed by a multiplication $AV = U\Sigma$. The $r$ nonzero singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$ are on the diagonal of $\Sigma$:

$$AV = U\Sigma \qquad A\begin{pmatrix} v_1 & \ldots & v_r & \ldots & v_n \end{pmatrix} = \begin{pmatrix} u_1 & \ldots & u_r & \ldots & u_m \end{pmatrix}\begin{pmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \\ & & \end{pmatrix} \tag{3}$$

The last $n - r$ vectors in $V$ are a basis for the nullspace of $A$. The last $m - r$ vectors in $U$ are a basis for the nullspace of $A^T$. The diagonal matrix $\Sigma$ is $m$ by $n$, with $r$ nonzeros. Remember that $V^{-1} = V^T$, because the columns $v_1, \ldots, v_n$ are orthonormal in $\mathbf{R}^n$:

$$\text{Singular Value Decomposition} \qquad AV = U\Sigma \text{ becomes } A = U\Sigma V^T. \tag{4}$$
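The statements about the four subspaces can be seen numerically. A small sketch (my construction of a random rank-3 matrix; `numpy.linalg.svd` returns $V^T$ directly):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5 by 4, rank 3

U, s, Vt = np.linalg.svd(A)          # full SVD: U is 5 by 5, Vt is 4 by 4
r = int(np.sum(s > 1e-10))           # numerical rank (3 here)

Sigma = np.zeros(A.shape)            # Sigma is m by n with r nonzeros
Sigma[:r, :r] = np.diag(s[:r])
print(np.allclose(A @ Vt.T, U @ Sigma))   # AV = U Sigma, equation (3)
print(np.allclose(A @ Vt.T[:, r:], 0))    # last n - r v's: nullspace of A
print(np.allclose(U[:, r:].T @ A, 0))     # last m - r u's: nullspace of A^T
```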

The SVD has orthogonal matrices $U$ and $V$, containing eigenvectors of $AA^T$ and $A^TA$.

Comment. A square matrix is diagonalized by its eigenvectors: $Ax_i = \lambda_i x_i$ is like $Av_i = \sigma_i u_i$. But even if $A$ has $n$ eigenvectors, they may not be orthogonal. We need two bases, an input basis of $v$'s in $\mathbf{R}^n$ and an output basis of $u$'s in $\mathbf{R}^m$. With two bases, any $m$ by $n$ matrix can be diagonalized. The beauty of those bases is that they can be chosen orthonormal. Then $U^TU = I$ and $V^TV = I$.

The $v$'s are eigenvectors of the symmetric matrix $S = A^TA$. We can guarantee their orthogonality, so that $v_j^Tv_i = 0$ for $j \neq i$. That matrix $S$ is positive semidefinite, so its eigenvalues are $\sigma_i^2 \geq 0$. The key to the SVD is that $Av_j$ is orthogonal to $Av_i$:

$$\text{Orthogonal } u\text{'s} \qquad (Av_j)^T(Av_i) = v_j^T(A^TAv_i) = v_j^T(\sigma_i^2 v_i) = \begin{cases} \sigma_i^2 & \text{if } j = i \\ 0 & \text{if } j \neq i \end{cases} \tag{5}$$

This says that the vectors $u_i = Av_i/\sigma_i$ are orthonormal for $i = 1, \ldots, r$. They are a basis for the column space of $A$. And the $u$'s are eigenvectors of the symmetric matrix $AA^T$, which is usually different from $A^TA$ (but the eigenvalues $\sigma_1^2, \ldots, \sigma_r^2$ are the same).

Example 3  Find the input and output eigenvectors $v$ and $u$ for the rectangular matrix $A$:

$$A = \begin{pmatrix} 2 & 2 & 0 \\ 1 & -1 & 0 \end{pmatrix} = U\Sigma V^T.$$

Solution  Compute $A^TA$ and its unit eigenvectors $v_1, v_2, v_3$. The eigenvalues are $8, 2, 0$, so the positive singular values are $\sigma_1 = \sqrt{8}$ and $\sigma_2 = \sqrt{2}$:

$$A^TA = \begin{pmatrix} 5 & 3 & 0 \\ 3 & 5 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{has} \quad v_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad v_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$

The outputs $u_1 = Av_1/\sigma_1$ and $u_2 = Av_2/\sigma_2$ are also orthonormal, with $\sigma_1 = \sqrt{8}$ and $\sigma_2 = \sqrt{2}$. Those vectors $u_1$ and $u_2$ are in the column space of $A$:

$$u_1 = \frac{Av_1}{\sqrt{8}} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{and} \quad u_2 = \frac{Av_2}{\sqrt{2}} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$

Then $U = I$ and the Singular Value Decomposition for this 2 by 3 matrix is $A = U\Sigma V^T$:

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \sqrt{8} & 0 & 0 \\ 0 & \sqrt{2} & 0 \end{pmatrix}\begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{2} & 0 \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
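Assuming the matrix $A$ as reconstructed above, a quick check with NumPy (`svd` orders the $\sigma$'s largest first, and the signs of singular vectors may differ from the hand computation):

```python
import numpy as np

A = np.array([[2., 2., 0.],
              [1., -1., 0.]])

print(np.linalg.eigvalsh(A.T @ A))     # eigenvalues 0, 2, 8 (ascending)

U, s, Vt = np.linalg.svd(A)
print(s)                               # [sqrt(8), sqrt(2)]
Sigma = np.zeros(A.shape)
Sigma[:2, :2] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))  # True
print(U)                               # the identity, up to signs
```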

The Fundamental Theorem of Linear Algebra

I think of the SVD as the final step in the Fundamental Theorem. First come the dimensions of the four subspaces in Figure 7.3. Then comes the orthogonality of those pairs of subspaces. Now come the orthonormal bases of $v$'s and $u$'s that diagonalize $A$:

$$\text{SVD} \qquad Av_j = \sigma_j u_j \text{ and } A^Tu_j = \sigma_j v_j \text{ for } j \leq r; \qquad Av_j = 0 \text{ and } A^Tu_j = 0 \text{ for } j > r.$$

Multiplying $Av_j = \sigma_j u_j$ by $A^T$ and dividing by $\sigma_j$ gives that equation $A^Tu_j = \sigma_j v_j$.

[Figure 7.3 shows the row space of $A$ (dimension $r$, basis $v_1, \ldots, v_r$) mapped by $A$ onto the column space (dimension $r$, basis $u_1, \ldots, u_r$) with $Av_i = \sigma_i u_i$, while the nullspace of $A$ in $\mathbf{R}^n$ (dimension $n - r$, spanned by $v_{r+1}, \ldots, v_n$) goes to zero; the nullspace of $A^T$ (dimension $m - r$, spanned by $u_{r+1}, \ldots, u_m$) completes $\mathbf{R}^m$.]

Figure 7.3: Orthonormal bases of $v$'s and $u$'s that diagonalize $A$: $m$ by $n$ with rank $r$.

The norm of $A$ is its largest singular value: $\|A\| = \sigma_1$. This measures the largest possible ratio of $\|Av\|$ to $\|v\|$. That ratio of lengths is a maximum when $v = v_1$ and $Av_1 = \sigma_1 u_1$. This singular value $\sigma_1$ is a much better measure for the size of a matrix than the largest eigenvalue. An extreme case can have zero eigenvalues and just one eigenvector $(1, 1)$ for $A$. But $A^TA$ can still be large: if $v = (1, -1)$ then $Av$ is 200 times larger.

$$A = \begin{pmatrix} 100 & -100 \\ 100 & -100 \end{pmatrix} \quad \text{has } \lambda_{\max} = 0 \quad \text{but } \sigma_{\max} = \text{norm of } A = 200. \tag{6}$$
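The extreme case in (6) is easy to reproduce (the matrix entries here are my reconstruction, chosen to be consistent with the eigenvector $(1,1)$ and the factor 200 in the text):

```python
import numpy as np

# Both eigenvalues are 0 (only eigenvector (1, 1)), yet the norm is 200
# because A stretches v = (1, -1) by a factor of 200.
A = np.array([[100., -100.],
              [100., -100.]])

print(np.linalg.eigvals(A))                        # [0, 0]
print(np.linalg.norm(A, 2))                        # 200.0 = sigma_1
v = np.array([1., -1.])
print(np.linalg.norm(A @ v) / np.linalg.norm(v))   # 200.0
```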

The Condition Number

A valuable property of $A = U\Sigma V^T$ is that it puts the pieces of $A$ in order of importance. Multiplying a column $u_i$ times a row $\sigma_i v_i^T$ produces one piece of the matrix. There will be $r$ nonzero pieces from $r$ nonzero $\sigma$'s, when $A$ has rank $r$. The pieces add up to $A$, when we multiply columns of $U$ times rows of $\Sigma V^T$:

$$\text{The pieces have rank 1} \qquad A = \begin{pmatrix} u_1 & \ldots & u_r \end{pmatrix}\begin{pmatrix} \sigma_1 v_1^T \\ \vdots \\ \sigma_r v_r^T \end{pmatrix} = u_1(\sigma_1 v_1^T) + \cdots + u_r(\sigma_r v_r^T). \tag{7}$$

The first piece gives the norm of $A$, which is $\sigma_1$. The last piece gives the norm of $A^{-1}$, which is $1/\sigma_n$ when $A$ is invertible. The condition number is $\sigma_1$ times $1/\sigma_n$:

$$\text{Condition number of } A \qquad c(A) = \|A\|\,\|A^{-1}\| = \frac{\sigma_1}{\sigma_n}. \tag{8}$$

This number $c(A)$ is the key to numerical stability in solving $Av = b$. When $A$ is an orthogonal matrix, the symmetric $A^TA$ is the identity matrix. So all singular values of an orthogonal matrix are $\sigma = 1$. At the other extreme, a singular matrix has $\sigma_n = 0$. In that case $c = \infty$. Orthogonal matrices have the best condition number $c = 1$.

Data Matrices: Application of the SVD

Big data is the linear algebra problem of this century (and we won't solve it here). Sensors and scanners and imaging devices produce enormous volumes of information. Making decisive sense of that data is the problem for a world of analysts (mathematicians and statisticians of a new type). Most often the data comes in the form of a matrix.

The usual approach is by PCA, Principal Component Analysis. That is essentially the SVD. The first piece $\sigma_1 u_1 v_1^T$ holds the most information (in statistics this piece has the greatest variance). It tells us the most. The Chapter 7 Notes include references.

REVIEW OF THE KEY IDEAS

1. Positive definite symmetric matrices have positive eigenvalues and pivots and energy.
2. $S = A^TA$ is positive definite if and only if $A$ has independent columns.
3. $x^TA^TAx = (Ax)^T(Ax)$ is zero when $Ax = 0$. $A^TA$ can be positive semidefinite.
4. The SVD is a factorization $A = U\Sigma V^T$ = (orthogonal)(diagonal)(orthogonal).
5. The columns of $V$ and $U$ are eigenvectors of $A^TA$ and $AA^T$ (singular vectors of $A$).
6. Those orthonormal bases achieve $Av_i = \sigma_i u_i$, and $A$ is diagonalized.
7. The largest piece of $A = \sigma_1 u_1 v_1^T + \cdots + \sigma_r u_r v_r^T$ gives the norm $\|A\| = \sigma_1$.
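A sketch (with a made-up matrix) of equation (7) and the idea behind PCA: the rank-one pieces add up to $A$, the first piece carries the norm, and keeping only the first piece leaves an error of size $\sigma_2$ (the Eckart-Young theorem). The condition number of equation (8) comes from `numpy.linalg.cond`:

```python
import numpy as np

A = np.array([[3., 1., 1.],
              [1., 3., 1.],
              [1., 1., 3.],
              [0., 2., 1.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The pieces sigma_i u_i v_i^T add up to A, in order of importance.
pieces = [s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s))]
print(np.allclose(sum(pieces), A))               # True, equation (7)
print(np.allclose(np.linalg.norm(A, 2), s[0]))   # norm of A is sigma_1

# Keeping only the first piece: the best rank-one approximation (PCA's idea).
print(np.linalg.norm(A - pieces[0], 2), s[1])    # error norm equals sigma_2

# Condition number sigma_1 / sigma_n for a square invertible matrix.
B = np.array([[2., 1.], [1., 2.]])
print(np.linalg.cond(B))                         # 3.0 = sigma_1 / sigma_2
```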

Problem Set 7.2

1  For a 2 by 2 matrix, suppose the 1 by 1 and 2 by 2 determinants $a$ and $ac - b^2$ are positive. Then $c > b^2/a$ is also positive.

(i) $\lambda_1$ and $\lambda_2$ have the same sign because their product $\lambda_1\lambda_2$ equals ______.
(ii) That sign is positive because $\lambda_1 + \lambda_2$ equals ______.

Conclusion: The tests $a > 0$, $ac - b^2 > 0$ guarantee positive eigenvalues $\lambda_1, \lambda_2$.

2  Which of $S_1$, $S_2$, $S_3$, $S_4$ has two positive eigenvalues? Use $a$ and $ac - b^2$, don't compute the $\lambda$'s. Find an $x$ with $x^TS_1x < 0$, confirming that $S_1$ fails the test. (A numeric check follows Problem 8.)

$$S_1 = \begin{pmatrix} 5 & 6 \\ 6 & 7 \end{pmatrix} \qquad S_2 = \begin{pmatrix} -1 & -2 \\ -2 & -5 \end{pmatrix} \qquad S_3 = \begin{pmatrix} 1 & 10 \\ 10 & 100 \end{pmatrix} \qquad S_4 = \begin{pmatrix} 1 & 10 \\ 10 & 101 \end{pmatrix}$$

3  For which numbers $b$ and $c$ are these matrices positive definite?

$$S = \begin{pmatrix} 1 & b \\ b & 9 \end{pmatrix} \qquad S = \begin{pmatrix} 2 & 4 \\ 4 & c \end{pmatrix} \qquad S = \begin{pmatrix} c & b \\ b & c \end{pmatrix}$$

4  What is the energy $q = ax^2 + 2bxy + cy^2 = x^TSx$ for each of these matrices? Complete the square to write $q$ as a sum of squares $d_1(\ \ )^2 + d_2(\ \ )^2$.

$$S = \begin{pmatrix} 1 & 2 \\ 2 & 9 \end{pmatrix} \qquad \text{and} \qquad S = \begin{pmatrix} 1 & 3 \\ 3 & 9 \end{pmatrix}$$

5  $x^TSx = 2x_1x_2$ certainly has a saddle point and not a minimum at $(0, 0)$. What symmetric matrix $S$ produces this energy? What are its eigenvalues?

6  Test to see if $A^TA$ is positive definite in each case:

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix} \qquad \text{and} \qquad A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 2 & 1 \end{pmatrix} \qquad \text{and} \qquad A = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 1 \end{pmatrix}$$

7  Which 3 by 3 symmetric matrices $S$ and $T$ produce these quadratic energies?

$x^TSx = 2(x_1^2 + x_2^2 + x_3^2 - x_1x_2 - x_2x_3)$. Why is $S$ positive definite?
$x^TTx = 2(x_1^2 + x_2^2 + x_3^2 - x_1x_2 - x_1x_3 - x_2x_3)$. Why is $T$ semidefinite?

8  Compute the three upper left determinants of $S$ to establish positive definiteness. (The first is 2.) Verify that their ratios give the second and third pivots.

$$\text{Pivots} = \text{ratios of determinants} \qquad S = \begin{pmatrix} 2 & 2 & 0 \\ 2 & 5 & 3 \\ 0 & 3 & 8 \end{pmatrix}$$
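A quick numeric check of Problem 2 (matrices as reconstructed above), comparing the $a > 0$, $ac - b^2 > 0$ tests against the actual eigenvalues; only $S_4$ passes, and $S_3$ sits on the semidefinite borderline since its determinant is 0:

```python
import numpy as np

for name, S in (("S1", np.array([[5., 6.], [6., 7.]])),
                ("S2", np.array([[-1., -2.], [-2., -5.]])),
                ("S3", np.array([[1., 10.], [10., 100.]])),
                ("S4", np.array([[1., 10.], [10., 101.]]))):
    a, det = S[0, 0], np.linalg.det(S)           # the two determinant tests
    print(name, a > 0 and det > 0, np.linalg.eigvalsh(S))
```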

9  For what numbers $c$ and $d$ are $S$ and $T$ positive definite? Test the 3 determinants:

$$S = \begin{pmatrix} c & 1 & 1 \\ 1 & c & 1 \\ 1 & 1 & c \end{pmatrix} \qquad \text{and} \qquad T = \begin{pmatrix} 1 & 2 & 3 \\ 2 & d & 4 \\ 3 & 4 & 5 \end{pmatrix}.$$

10  If $S$ is positive definite then $S^{-1}$ is positive definite. Best proof: The eigenvalues of $S^{-1}$ are positive because ______. Second proof (only for 2 by 2): The entries of

$$S^{-1} = \frac{1}{ac - b^2}\begin{pmatrix} c & -b \\ -b & a \end{pmatrix}$$

pass the determinant tests: ______.

11  If $S$ and $T$ are positive definite, their sum $S + T$ is positive definite. Pivots and eigenvalues are not convenient for $S + T$. Better to prove $x^T(S + T)x > 0$.

12  A positive definite matrix cannot have a zero (or even worse, a negative number) on its diagonal. Show that this matrix fails to have $x^TSx > 0$:

$$\begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}\begin{pmatrix} 4 & 1 & 1 \\ 1 & 0 & 2 \\ 1 & 2 & 5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \quad \text{is not positive when } (x_1, x_2, x_3) = (0, 1, 0).$$

13  A diagonal entry $a_{jj}$ of a symmetric matrix cannot be smaller than all the $\lambda$'s. If it were, then $S - a_{jj}I$ would have positive eigenvalues and would be positive definite. But $S - a_{jj}I$ has a zero on the main diagonal.

14  Show that if all $\lambda > 0$ then $x^TSx > 0$. We must do this for every nonzero $x$, not just the eigenvectors. So write $x$ as a combination $c_1x_1 + \cdots + c_nx_n$ of the eigenvectors, and explain why all "cross terms" are $x_i^Tx_j = 0$. Then $x^TSx$ is

$$(c_1x_1 + \cdots + c_nx_n)^T(c_1\lambda_1x_1 + \cdots + c_n\lambda_nx_n) = c_1^2\lambda_1x_1^Tx_1 + \cdots + c_n^2\lambda_nx_n^Tx_n > 0.$$

15  Give a quick reason why each of these statements is true:

(a) Every positive definite matrix is invertible.
(b) The only positive definite projection matrix is $P = I$.
(c) A diagonal matrix with positive diagonal entries is positive definite.
(d) A symmetric matrix with a positive determinant might not be positive definite!

16  With positive pivots in $D$, the factorization $S = LDL^T$ becomes $L\sqrt{D}\sqrt{D}L^T$. (Square roots of the pivots give $D = \sqrt{D}\sqrt{D}$.) Then $A = \sqrt{D}L^T$ yields the Cholesky factorization $S = A^TA$, which is "symmetrized $LU$":

$$\text{From } A = \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix} \text{ find } S. \qquad \text{From } S = \begin{pmatrix} 4 & 8 \\ 8 & 25 \end{pmatrix} \text{ find } A = \text{chol}(S).$$
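For Problem 16, note that MATLAB's `chol(S)` returns the upper triangular factor, while NumPy's `cholesky` returns the lower triangular $C$ with $S = CC^T$; a sketch using the matrices as reconstructed above:

```python
import numpy as np

A = np.array([[3., 1.],
              [0., 2.]])              # A = sqrt(D) L^T, upper triangular
print(A.T @ A)                        # S = [[9, 3], [3, 5]]

S = np.array([[4., 8.],
              [8., 25.]])
C = np.linalg.cholesky(S)             # lower triangular, S = C C^T
print(C.T)                            # MATLAB-style chol(S): [[2, 4], [0, 3]]
```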

17  Without multiplying

$$S = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & 5 \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},$$

find

(a) the determinant of $S$  (b) the eigenvalues of $S$
(c) the eigenvectors of $S$  (d) a reason why $S$ is symmetric positive definite.

18  For $F_1(x, y) = \frac{1}{4}x^4 + x^2y + y^2$ and $F_2(x, y) = x^3 + xy - x$, find the second derivative matrices $H_1$ and $H_2$:

$$\text{Test for minimum:} \qquad H = \begin{pmatrix} \partial^2F/\partial x^2 & \partial^2F/\partial x\partial y \\ \partial^2F/\partial y\partial x & \partial^2F/\partial y^2 \end{pmatrix} \text{ is positive definite.}$$

$H_1$ is positive definite, so $F_1$ is concave up (= convex). Find the minimum point of $F_1$ and the saddle point of $F_2$ (look only where first derivatives are zero).

19  The graph of $z = x^2 + y^2$ is a bowl opening upward. The graph of $z = x^2 - y^2$ is a saddle. The graph of $z = -x^2 - y^2$ is a bowl opening downward. What is a test on $a, b, c$ for $z = ax^2 + 2bxy + cy^2$ to have a saddle point at $(0, 0)$?

20  Which values of $c$ give a bowl and which $c$ give a saddle point for the graph of $z = 4x^2 + 12xy + cy^2$? Describe this graph at the borderline value of $c$.

21  When $S$ and $T$ are symmetric positive definite, $ST$ might not even be symmetric. But its eigenvalues are still positive. Start from $STx = \lambda x$ and take dot products with $Tx$. Then prove $\lambda > 0$.

22  Suppose $C$ is positive definite (so $y^TCy > 0$ whenever $y \neq 0$) and $A$ has independent columns (so $Ax \neq 0$ whenever $x \neq 0$). Apply the energy test to $x^TA^TCAx$ to show that $A^TCA$ is positive definite: the crucial matrix in engineering.

23  Find the eigenvalues and unit eigenvectors $v_1, v_2$ of $A^TA$. Then find $u_1 = Av_1/\sigma_1$:

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix} \quad \text{and} \quad A^TA = \begin{pmatrix} 10 & 20 \\ 20 & 40 \end{pmatrix} \quad \text{and} \quad AA^T = \begin{pmatrix} 5 & 15 \\ 15 & 45 \end{pmatrix}.$$

Verify that $u_1$ is a unit eigenvector of $AA^T$. Complete the matrices $U$, $\Sigma$, $V$. (A numeric check follows Problem 25.)

$$\text{SVD} \qquad \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix} = \begin{pmatrix} u_1 & u_2 \end{pmatrix}\begin{pmatrix} \sigma_1 & \\ & 0 \end{pmatrix}\begin{pmatrix} v_1 & v_2 \end{pmatrix}^T.$$

24  Write down orthonormal bases for the four fundamental subspaces of this $A$.

25  (a) Why is the trace of $A^TA$ equal to the sum of all $a_{ij}^2$?
(b) For every rank-one matrix, why is $\sigma_1^2$ = sum of all $a_{ij}^2$?
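A numeric check for Problem 23 (my sketch; `eigh` returns eigenvalues in ascending order, so the $\sigma_1$ eigenvector is the last column):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 6.]])
lam, V = np.linalg.eigh(A.T @ A)
print(lam)                                       # [0, 50]: sigma_1 = sqrt(50)
v1 = V[:, -1]                                    # eigenvector for lambda = 50
u1 = A @ v1 / np.sqrt(lam[-1])
print(u1, np.linalg.norm(u1))                    # a unit vector
print(np.allclose(A @ A.T @ u1, lam[-1] * u1))   # u1 is an eigenvector of AA^T
```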

26  Find the eigenvalues and unit eigenvectors of $A^TA$ and $AA^T$. Keep each $Av = \sigma u$:

$$\text{Fibonacci matrix} \qquad A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$$

Construct the singular value decomposition and verify that $A$ equals $U\Sigma V^T$.

27  Compute $A^TA$ and $AA^T$ and their eigenvalues and unit eigenvectors for $V$ and $U$:

$$\text{Rectangular matrix} \qquad A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}.$$

Check $AV = U\Sigma$ (this will decide $\pm$ signs in $U$). $\Sigma$ has the same shape as $A$.

28  Construct the matrix with rank one that has $Av = 12u$ for $v = \frac{1}{2}(1, 1, 1, 1)$ and $u = \frac{1}{3}(2, 2, 1)$. Its only singular value is $\sigma_1 = 12$.

29  Suppose $A$ is invertible (with $\sigma_1 > \sigma_2 > 0$). Change $A$ by as small a matrix as possible to produce a singular matrix $A_0$. Hint: $U$ and $V$ do not change.

$$\text{From} \quad A = \begin{pmatrix} u_1 & u_2 \end{pmatrix}\begin{pmatrix} \sigma_1 & \\ & \sigma_2 \end{pmatrix}\begin{pmatrix} v_1 & v_2 \end{pmatrix}^T \quad \text{find the nearest } A_0.$$

30  The SVD for $A + I$ doesn't use $\Sigma + I$. Why is $\sigma(A + I)$ not just $\sigma(A) + 1$?

31  Multiply $A^TAv = \sigma^2v$ by $A$. Put in parentheses to show that $Av$ is an eigenvector of $AA^T$. We divide by its length $\|Av\| = \sigma$ to get the unit eigenvector $u$.

32  My favorite example of the SVD is when $Av(x) = dv/dx$, with the endpoint conditions $v(0) = 0$ and $v(1) = 0$. We are looking for orthogonal functions $v(x)$ so that their derivatives $Av = dv/dx$ are also orthogonal. The perfect choice is $v_1 = \sin \pi x$ and $v_2 = \sin 2\pi x$ and $v_k = \sin k\pi x$. Then each $u_k$ is a cosine. The derivative of $v_1$ is $Av_1 = \pi\cos\pi x = \pi u_1$. The singular values are $\sigma_1 = \pi$ and $\sigma_k = k\pi$. Orthogonality of the sines (and orthogonality of the cosines) is the foundation for Fourier series.

You may object to $AV = U\Sigma$. The derivative $A = d/dx$ is not a matrix! The orthogonal factor $V$ has functions $\sin k\pi x$ in its columns, not column vectors. The matrix $U$ has cosine functions $\cos k\pi x$. Since when is this allowed? One answer is to refer you to the chebfun package on the web. This extends linear algebra to matrices whose columns are functions, not vectors. Another answer is to replace $d/dx$ by a first difference matrix $A$. Its shape will be $N + 1$ by $N$. $A$ has 1's down the diagonal and $-1$'s on the diagonal below. Then $AV = U\Sigma$ has discrete sines in $V$ and discrete cosines in $U$. For $N = 2$ those will be sines and cosines of $30°$ and $60°$ in $v_1$ and $u_1$. Can you construct the difference matrix $A$ (3 by 2) and $A^TA$ (2 by 2)? The discrete sines are $v_1 = (\sqrt{3}/2, \sqrt{3}/2)$ and $v_2 = (\sqrt{3}/2, -\sqrt{3}/2)$. Test that $Av_1$ is orthogonal to $Av_2$. What are the singular values $\sigma_1$ and $\sigma_2$ in $\Sigma$?
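For the last question, a sketch of the $N = 2$ difference matrix and the orthogonality test (the matrix follows the description in the problem):

```python
import numpy as np

# 3 by 2 first difference matrix: 1's on the diagonal, -1's below.
A = np.array([[ 1.,  0.],
              [-1.,  1.],
              [ 0., -1.]])
print(A.T @ A)                               # [[2, -1], [-1, 2]]

v1 = np.array([np.sqrt(3) / 2,  np.sqrt(3) / 2])   # discrete sines
v2 = np.array([np.sqrt(3) / 2, -np.sqrt(3) / 2])
print(np.dot(A @ v1, A @ v2))                # 0: Av1 is orthogonal to Av2
print(np.sqrt(np.linalg.eigvalsh(A.T @ A)))  # singular values 1 and sqrt(3)
```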
