Fat, Square and Thin Matrices - Number of Solutions to Systems of Linear Equations


(With Answers to the True/False Questions posted by Dr. Webb)

March 30, 2016

Introduction

The goal of this short article is to present a summary of the key concepts in linear algebra that help us understand the number of solutions of systems of linear equations defined by fat, square and thin matrices. Towards the end of the discussion, I will provide brief solutions to the True/False questions posted by Dr. Webb on his website.

The Theory

Let A be an m × n matrix (m is the number of rows and n is the number of columns). Then we say that A is a fat matrix if n > m, a square matrix if n = m, and a thin matrix if n < m.

First, let us state the fundamental theorem concerning the number of solutions of a system of linear equations.

Theorem 0.1 (The Fundamental Theorem).
1. For any system of equations Ax = b, exactly one of the following three possibilities holds:
   (a) The system has no solution.
   (b) The system has a unique solution.
   (c) The system has infinitely many solutions.
2. For a homogeneous system of equations Ax = 0, exactly one of the following two possibilities holds:
   (a) The system has a unique solution.
   (b) The system has infinitely many solutions.

When a system of equations Ax = b has at least one solution, we say that the system is consistent; otherwise it is inconsistent. We are concerned with determining when a system of equations is consistent and with studying how many solutions it has.

Having learned the theory of linear algebra, one can observe that the existence (of at least one solution) and the uniqueness of solutions to a system of equations depend on the rank of the matrix (which is the number of pivot columns of the matrix, and which also equals the number of nonzero rows in any echelon form of the matrix), which in turn depends to some extent on the size of the matrix A (fat, square or thin).
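Before going further, it may help to see Theorem 0.1 in computational form. The following is a small Python/NumPy sketch of mine (not part of the original handout); it classifies a system Ax = b by comparing the rank of A, the rank of the augmented matrix [A | b], and the number of unknowns n, anticipating the rank-based tests developed below.

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b as 'no solution', 'unique', or 'infinitely many'.

    Rank test: the system is consistent iff rank(A) == rank([A | b]);
    a consistent system has a unique solution iff rank(A) == n.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_A < rank_Ab:
        return "no solution"        # b lies outside the column space of A
    return "unique" if rank_A == n else "infinitely many"

# The three possibilities of Theorem 0.1, on small hypothetical examples:
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(classify_system(A, [0, 0, 1]))        # no solution
print(classify_system(A, [2, 3, 0]))        # unique
print(classify_system([[1.0, 1.0]], [2]))   # infinitely many
```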

For the sake of clarity, let us recall the definition of the rank of a matrix. (I will assume that you know the meanings of an echelon form, pivots, pivot columns, non-pivot columns, leading variables, free variables, and the column and row spaces of a matrix.)

Definition 0.2. Let A be an m × n matrix, and let E be any echelon form of A obtained through a sequence of elementary row operations. The column rank of A is defined to be the dimension of the column space of A, and one can show that the column rank of A is the number of pivot columns of A. Similarly, the row rank of A is defined to be the dimension of the row space of A, and it can be shown that the row rank of A is the number of rows of E that are not all zero (which we call nonzero rows). Finally, observe that the number of pivot columns of A is the same as the number of pivot columns of E, which in turn is the same as the number of nonzero rows of E. Hence it follows that the row rank equals the column rank. We call this common number the rank of the matrix A, and we denote it by rank(A).

We observe that the following result holds; it relates the rank of any matrix to its size, m × n.

Proposition 0.3. Let A be an m × n matrix. Then rank(A) ≤ min(m, n).

Proof. Trivial; it follows from the definition of the rank of A. (Exercise for you!)

Definition 0.4. Let A be an m × n matrix. We say that A has full row rank if the rank of A equals the number of rows of A, that is, if rank(A) = m. Similarly, we say that A has full column rank if the rank of A equals the number of columns of A, that is, if rank(A) = n.
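If you would like to experiment with these definitions on a computer, here is a brief NumPy sketch of mine; the two test matrices are hypothetical examples, one fat and one thin.

```python
import numpy as np

def full_row_rank(A):
    # rank(A) == m, the number of rows (Definition 0.4)
    return np.linalg.matrix_rank(A) == A.shape[0]

def full_column_rank(A):
    # rank(A) == n, the number of columns (Definition 0.4)
    return np.linalg.matrix_rank(A) == A.shape[1]

fat  = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 3.0]])    # 2 x 3, n > m
thin = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # 3 x 2, n < m
print(full_row_rank(fat), full_column_rank(fat))       # True  False
print(full_row_rank(thin), full_column_rank(thin))     # False True
```

Note, in line with Proposition 0.3, that a fat matrix can never have full column rank and a thin matrix can never have full row rank.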

Intuitively, a system is consistent for all possible vectors b if and only if there are no all-zero rows in an echelon form of A, if and only if all the rows of any echelon form are nonzero, if and only if A has full row rank (that is, the rank equals the number of rows). On the other hand, a consistent system has a unique solution if and only if there are no free variables (that is, no non-pivot columns), if and only if all the columns are pivot columns, if and only if A has full column rank (that is, the rank equals the number of columns). I will give you a moment to think about the above statements by considering the following examples:

M = [ 1 0 0 1 ]    N = [ 1 0 0 ]    P = [ 1 0 1 ]
    [ 0 1 0 1 ]        [ 0 1 0 ]        [ 0 1 1 ]
    [ 0 0 1 1 ]        [ 0 0 1 ]        [ 0 0 0 ]
                       [ 0 0 0 ]

The fat matrix M is an example of a matrix that has full row rank (M is already in an echelon form and all the rows are nonzero). Then observe that the system of equations Mx = b is always consistent, no matter what b is; in other words, at least one solution exists. This is because we can always solve for all the leading variables, no matter what b is. (In other words, the column space of M is all of R^3.) Now imagine what happens if the last row were an all-zero row. In that case, the matrix would not have full row rank, the columns would not span all of R^3, and there would be at least one b, say for example b = (0, 0, 1), for which the system of equations is inconsistent. Thus we see that when M does not have full row rank, there is at least one b for which the system is not consistent.

Now consider the thin matrix N. This is an example of a matrix that has full column rank (N is already in an echelon form and all its columns are pivot columns). Now suppose that there is a b for which the system of equations Nx = b is consistent. Then observe that the solution to this system is unique. This is because full column rank implies that there are no free variables: all three variables can be solved uniquely by back substitution. Now imagine what happens if the third row of N were an all-zero row. In that case, the matrix no longer has full column rank (why?) and any consistent system Nx = b has infinitely many solutions, because there is a free variable. In other words, when N does not have full column rank, the solution to a consistent system is no longer unique.

Finally, observe that the square matrix P has rank 2 (why?), and hence it has neither full row rank nor full column rank. Since P does not have full row rank, we see that the system Px = b is not consistent for every b. Similarly, since P does not have full column rank, the solution to any consistent system Px = b is not unique.

Let us make these conclusions (relating the existence of solutions to full row rank and the uniqueness of solutions to full column rank) formal in the next couple of theorems. These theorems are given as problems #31 and #32 in section 4.5, and they were assigned as homework problems to think about; hence, I believe you have some idea about what I am about to state.

Theorem 0.5 (Existence of Solutions - Full Row Rank). Let A be an m × n matrix. Then the system of equations Ax = b is consistent for every b in R^m if and only if A has full row rank, that is, rank(A) = m.

Proof. First, I have to warn you not to naively reuse the arguments in the above examples. There we considered matrices that were already in an echelon form, so constructing some b for which the system is inconsistent was relatively straightforward. In the general case one needs to be more careful, since we would have to go from the echelon form back to the original matrix for such a construction. I prefer to give a cleaner proof, as follows. On one hand, if rank(A) = m, then the column space of A is an m-dimensional subspace of R^m, which means that the column space equals R^m. In other words, the columns of A span all of R^m. Hence every b in R^m can be expressed as a linear combination of the columns of A, which means that the system of equations Ax = b is consistent for every b in R^m. On the other hand, if rank(A) ≠ m, then rank(A) < m by Proposition 0.3. But then the column space of A is a proper subspace of R^m, so the columns of A do not span all of R^m. Hence there is some b in R^m that is not in the column space of A, and it is not possible to write this b as a linear combination of the columns of A. In other words, the system of equations Ax = b is inconsistent for this b.

Theorem 0.6 (Uniqueness of Solutions - Full Column Rank). Let A be an m × n matrix and suppose that the system of equations Ax = b is consistent. Then its solution is unique if and only if A has full column rank, that is, rank(A) = n.

Proof. Observe that since the system is consistent, by Theorem 0.1 the system either has a unique solution or it has infinitely many solutions. On one hand, if rank(A) = n, then all the columns of A are pivot columns. Hence, there are no free variables, so the system must have a unique solution. Conversely, if rank(A) ≠ n, then rank(A) < n by Proposition 0.3. Hence not all columns of A are pivot columns, so there is at least one free variable, and thus the system must have infinitely many solutions.
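Here is a quick numerical check of Theorems 0.5 and 0.6 on the matrices M and N above. This is my own sketch, and it uses least squares only as a device to test consistency.

```python
import numpy as np

rng = np.random.default_rng(0)

M = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])          # full row rank: rank(M) = 3 = m
N = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])              # full column rank: rank(N) = 3 = n

# Theorem 0.5: full row rank => Mx = b is consistent for every b.
for _ in range(5):
    b = rng.standard_normal(3)
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    assert np.allclose(M @ x, b)  # zero residual: an exact solution exists

# Theorem 0.6: full column rank => a consistent system Nx = b is uniquely
# solvable. Uniqueness is equivalent to Nx = 0 having only the trivial
# solution, i.e. dim Null(N) = n - rank(N) = 3 - 3 = 0.
assert N.shape[1] - np.linalg.matrix_rank(N) == 0
print("checks passed")
```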

In the above theorem, when we relax the condition that the system must be consistent in the first place, we get the following corollary.

Corollary 0.7. Let A be an m × n matrix. Then the system of equations Ax = b has at most one solution for every b in R^m if and only if A has full column rank, that is, rank(A) = n.

Proof. Suppose rank(A) = n, and let b be in R^m. Then the system of equations Ax = b is either consistent or inconsistent. In the former case, the system has a unique solution by the above theorem; in the latter case, the system has no solution. Hence the system has at most one solution. Now suppose rank(A) ≠ n. Then rank(A) < n by Proposition 0.3, so A has at least one non-pivot column. Now set b = 0. Then the homogeneous system of equations Ax = b has infinitely many solutions.

The above theorems, applied to the special cases when the matrix is fat, square or thin, together with the earlier known theorems about invertible square matrices and fat matrices, help us deduce the existence and uniqueness of solutions in relation to the size of the matrix. Let us first recall the following known theorems (Theorem 3 in section 3.3 and Theorem 7 in section 3.5, along with the equivalence Theorem 2 in section 3.6).

Theorem 0.8 (First Fat Matrix Theorem). Let A be an m × n fat matrix, that is, n > m. Then the homogeneous system of equations Ax = 0 has infinitely many solutions.

Theorem 0.9 (First Square Matrix Theorem - Nonsingularity). Let A be an n × n (square) matrix. Then the following are equivalent:
(a) A is invertible.
(b) A is row equivalent to the n × n identity matrix.
(c) The homogeneous system of equations Ax = 0 has a unique (namely, the trivial) solution.
(d) The system of equations Ax = b has a unique solution for every b in R^n.
(e) The system of equations Ax = b is consistent for every b in R^n.
(f) det A ≠ 0.

Remark 0.10. Observe how the above theorem on square matrices relates to the Existence - Full Row Rank and Uniqueness - Full Column Rank theorems stated earlier. If A is an n × n square matrix, then A has full row rank if and only if A has full column rank, if and only if rank(A) = n. And this happens if and only if any of the equivalent conditions in the above theorem holds.

Let us now present the summary of all the theorems as one grand theorem for each type of matrix.
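To see a few of the equivalences of Theorem 0.9 side by side, consider the following sketch of mine; the two square matrices are hypothetical examples, one nonsingular and one singular.

```python
import numpy as np

def square_matrix_report(A):
    """Check conditions (f) and rank(A) = n of Theorem 0.9 for a square A."""
    n = A.shape[0]
    nonsingular = not np.isclose(np.linalg.det(A), 0.0)  # condition (f)
    full_rank = np.linalg.matrix_rank(A) == n            # rank(A) = n
    return nonsingular, full_rank

good = np.array([[2., 1.], [1., 1.]])   # det = 1, nonsingular
bad  = np.array([[1., 2.], [2., 4.]])   # rows proportional, singular
print(square_matrix_report(good))  # (True, True): unique solution for every b
print(square_matrix_report(bad))   # (False, False): some b inconsistent, b = 0 non-unique

# For a nonsingular matrix, condition (d) lets us solve directly:
x = np.linalg.solve(good, np.array([3., 2.]))
print(x)  # the unique solution, here [1., 1.]
```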

Theorem 0.11 (Fat Matrix). Let A be an m × n fat matrix, that is, n > m. Then the following assertions hold.
(a) rank(A) ≤ m.
(b) The homogeneous system of equations Ax = 0 has infinitely many solutions.
(c) The system of equations Ax = b is either inconsistent or has infinitely many solutions (in other words, it never has a unique solution).
(d) The system of equations Ax = b is consistent for every b in R^m if and only if rank(A) = m.

Proof. Exercise for you!

Theorem 0.12 (Square Matrix). Let A be an n × n square matrix. Then the following assertions hold.
(a) rank(A) ≤ n.
(b) The following are equivalent:
   (i) rank(A) = n.
   (ii) A is invertible.
   (iii) A is row equivalent to the n × n identity matrix.
   (iv) The homogeneous system of equations Ax = 0 has a unique (namely, the trivial) solution.
   (v) The system of equations Ax = b has a unique solution for every b in R^n.
   (vi) The system of equations Ax = b is consistent for every b in R^n.
   (vii) det A ≠ 0.
(c) From part (b) it follows that the following are equivalent:
   (i) rank(A) < n.
   (ii) A is not invertible.
   (iii) A is not row equivalent to the n × n identity matrix.
   (iv) The homogeneous system of equations Ax = 0 has infinitely many solutions.
   (v) The system of equations Ax = b has infinitely many solutions for some b in R^n.
   (vi) The system of equations Ax = b is inconsistent for some b in R^n.
   (vii) det A = 0.

Proof. Part (a) is trivial. We have already proven part (b) in Remark 0.10 above. Part (c) is just stated for clarity.
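Part (b) of the Fat Matrix Theorem is also easy to witness numerically. The sketch below, my own illustration with a hypothetical 2 × 3 matrix, extracts a nontrivial solution of Ax = 0 from the singular value decomposition.

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.]])  # 2 x 3 fat matrix, so n > m

# dim Null(A) = n - rank(A) >= n - m > 0, so Ax = 0 has nontrivial solutions.
n = A.shape[1]
r = np.linalg.matrix_rank(A)
print("dim Null(A) =", n - r)               # 1 here

# The last rows of V^T in the SVD span the null space of A.
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]                                  # a nonzero vector with A @ x ~ 0
print(np.allclose(A @ x, 0))                # True: every multiple of x is a solution
```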

Theorem 0.13 (Thin Matrix). Let A be an m × n thin matrix, that is, n < m. Then the following assertions hold.
(a) rank(A) ≤ n.
(b) The homogeneous system of equations Ax = 0 has a unique (namely, the trivial) solution if and only if rank(A) = n (in other words, it has infinitely many solutions if and only if rank(A) < n).
(c) The system of equations Ax = b is inconsistent for some b in R^m (in other words, the system is never consistent for all b in R^m).
(d) The system of equations Ax = b has at most one solution for every b in R^m if and only if rank(A) = n.

Proof. Exercise for you!
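And here is a matching illustration of parts (b) and (c) of the Thin Matrix Theorem, again a sketch of mine with a hypothetical 3 × 2 matrix of full column rank.

```python
import numpy as np

A = np.array([[1., 0.], [0., 1.], [0., 0.]])  # 3 x 2 thin matrix, rank 2 = n

# Part (b): full column rank, so Ax = 0 has only the trivial solution.
print(A.shape[1] - np.linalg.matrix_rank(A))  # dim Null(A) = 0

# Part (c): rank(A) <= n < m, so the columns cannot span R^3;
# some right-hand sides are unreachable, e.g. b = (0, 0, 1).
b = np.array([0., 0., 1.])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ x, b))                  # False: Ax = b is inconsistent
```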

Answers to the True/False Questions posted by Dr. Webb

(a) TRUE. If for some b the system Ax = b has a unique solution, then A has no free variables, which means that all the columns of A are pivot columns, which means that A has full column rank, that is, rank(A) = n. (This is exactly the Uniqueness - Full Column Rank Theorem stated earlier. And observe that such an A can never be fat - why?) Then by the rank-nullity theorem, dim Null(A) = (number of columns of A) - rank(A) = n - n = 0. (Or one might just observe that since there are no non-pivot columns, there are no free variables, which means that the homogeneous system Ax = 0 has a unique solution and hence the null space is the singleton set containing the zero vector.)

(b) FALSE. If A has zero null space, then A has full column rank (since there are no free variables). This forces A to be either square or thin. If A were a square matrix, the statement would be true, since in this case the system is always consistent and full column rank implies that the solution is unique. However, the statement is false in general, because if A were a thin matrix, there would be some b for which the system is inconsistent.

(c) TRUE. One might apply the Corollary following the Uniqueness - Full Column Rank Theorem. The argument from scratch looks like this. If for every b the system has at most one solution, then A must have full column rank, for if not, there is at least one non-pivot column, and then for b = 0 the system would have infinitely many solutions due to the existence of at least one free variable. Since A has full column rank, we must have m ≥ n (why? see Proposition 0.3). Or one might look at the contrapositive: if n > m, then by the First Fat Matrix Theorem the homogeneous system Ax = 0 has infinitely many solutions. It follows that if n > m, then the system cannot have at most one solution for all b (we just found one b, namely b = 0, for which there is more than one solution).

(d) FALSE. The given statement says that there cannot be a thin matrix A (n < m) for which the system Ax = b has at most one solution for every b. But this is not true. For example, consider the thin matrix

A = [ 1 0 ]
    [ 0 1 ]
    [ 0 0 ]

Then since A has full column rank, for every b the system Ax = b is either inconsistent or has a unique solution (that is, it has at most one solution).
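The rank-nullity computation in (a) is easy to mirror on a computer; here is a small sketch of mine with a hypothetical full-column-rank matrix.

```python
import numpy as np

def nullity(A):
    """dim Null(A) = (number of columns) - rank(A), by rank-nullity."""
    return A.shape[1] - np.linalg.matrix_rank(A)

# A matrix with full column rank: unique solutions whenever consistent,
# and therefore a zero-dimensional null space, as in answer (a).
A = np.array([[1., 0.], [0., 1.], [1., 1.]])
print(np.linalg.matrix_rank(A), nullity(A))  # 2 0
```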

(e) FALSE. The statement says that if A is a fat or a square matrix, then for every b the system Ax = b has at least one solution. This is not true unless A has full row rank (please see the Existence - Full Row Rank Theorem). For example, consider the square matrix (you can also think of similar fat matrix examples)

A = [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 0 ]

Then since A does not have full row rank (rank(A) = 2 < 3 = the number of rows of A), the system is not consistent for every b. For example, for b = (0, 0, 1) the system has no solution.

(f) FALSE. The statement says that if A is either a fat or a square matrix, then the homogeneous system Ax = 0 has infinitely many solutions. The statement would be true for fat and singular square matrices, but it fails for a nonsingular square matrix. For example, consider

A = [ 1 0 ]
    [ 0 1 ]

Then since A is nonsingular (invertible, full column rank, etc. - see the bunch of equivalent conditions in the Square Matrix Theorem), the homogeneous system Ax = 0 has a unique solution.

(g) FALSE. If the system Ax = 0 has a unique solution, then A has full column rank (all columns are pivot columns, no non-pivot columns, no free variables). It then follows that A is either thin or square (why?). The statement would be true for square matrices, since in that case A would also have full row rank. However, the statement is false in general. For example, consider the thin matrix

A = [ 1 0 ]
    [ 0 1 ]
    [ 0 0 ]

Then the system Ax = 0 has a unique solution since A has full column rank. However, the columns of A do not span all of R^3; for example, the vector b = (0, 0, 1) is not in the column space of A.

(h) TRUE. This is the definition of linear independence! (Also, observe that if Ax = 0 has a unique solution, then A is definitely not fat - why?)

(i) FALSE. If Ax = 0 has a unique solution, then A has full column rank (because the columns are all linearly independent, or one could see that we have a unique solution if and only if all columns are pivot columns), but not necessarily full row rank. For example, consider the thin full-column-rank matrix

A = [ 1 0 ]
    [ 0 1 ]
    [ 0 0 ]

Then the system Ax = 0 has a unique solution because A has full column rank, but rank(A) = 2 ≠ 3 = the number of rows of A.
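For a computational companion to (g) and (i), the following sketch of mine checks both claims for the 3 × 2 example above, this time using the augmented-matrix rank test for membership in the column space.

```python
import numpy as np

A = np.array([[1., 0.], [0., 1.], [0., 0.]])  # the 3 x 2 matrix from (g) and (i)
b = np.array([[0.], [0.], [1.]])

# Full column rank: Ax = 0 has only the trivial solution (answer (i)).
print(np.linalg.matrix_rank(A) == A.shape[1])     # True

# But appending b raises the rank of the augmented matrix,
# so b = (0, 0, 1) is not in the column space of A (answer (g)).
print(np.linalg.matrix_rank(np.hstack([A, b])))   # 3 > rank(A) = 2
```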

(j) TRUE. If Ax = 0 has a unique solution, then A has full column rank, because the columns are all linearly independent (or one could see that we have a unique solution if and only if all columns are pivot columns). Hence rank(A) = n.

(k) TRUE. The statement says that Ax = b has a unique solution for all b if and only if A is a square nonsingular matrix. On one hand, if A is a square nonsingular matrix, then the fact that Ax = b has a unique solution for all b follows from the big theorem on square matrices. On the other hand, if Ax = b has a unique solution for all b, then we first claim that A must be square. A cannot be fat, for if it were, we could choose b = 0 and the system Ax = b would then have infinitely many solutions, contradicting our assumption. Also, A cannot be thin, for if it were, there would be at least one b for which the system Ax = b is inconsistent, since the columns of a thin matrix are not sufficient to span all of R^m; again our assumption is contradicted. Thus A must be a square matrix, and then it follows that A is nonsingular, again by the big theorem on square matrices.

(l) TRUE. For example, consider the 3 × 3 matrix

A = [ 1 0 -1 ]
    [ 0 1 -1 ]
    [ 0 0  0 ]

Then the null space of A is spanned by (1, 1, 1). (Solve the system Ax = 0.)

(m) FALSE. From the given information, since both of the spanning vectors are nonzero, it follows that both spanning sets (one for the null space and the other for the row space) are linearly independent. Hence those spanning sets are bases for the null space and the row space, respectively. It follows that the rank of the matrix is 1 and the dimension of the null space of the matrix is also 1. But then this contradicts the rank-nullity theorem, for rank(A) + dim Null(A) = 1 + 1 = 2 ≠ 3 = the number of columns of A.

(n) TRUE. Every subspace of R^n has dimension less than or equal to n. So, given any subspace of R^n, there is a basis for the subspace consisting of at most n elements, and this basis is sufficient to span the subspace. (A basis is a spanning set.)
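Finally, the claim in (l) can be verified numerically; this sketch of mine uses the SVD to extract the null space of the example matrix above.

```python
import numpy as np

A = np.array([[1., 0., -1.],
              [0., 1., -1.],
              [0., 0.,  0.]])

# Right singular vectors with (near-)zero singular values span Null(A).
_, s, Vt = np.linalg.svd(A)
null_vectors = Vt[np.isclose(s, 0.0)]
v = null_vectors[0]
print(np.allclose(A @ v, 0))  # True: v solves Ax = 0
print(v / v[0])               # proportional to (1, 1, 1)
```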
