# MATH 240 Fall 2007: Chapter Summaries

MATH 240, Fall 2007. Chapter summaries for Kolman / Hill, *Elementary Linear Algebra*, 9th Ed., written by Prof. J. Beachy. Sections covered: …, 6.5, ….

## Chapter 1: Linear Equations and Matrices

DEFINITIONS

- There are a number of definitions concerning systems of linear equations on pages 1–2 and the following pages. The numbered definitions below concern matrices and matrix operations.
- 1.5 (p 18). If A = [a_ij] is an m × n matrix, then its transpose is A^T = [a_ji].
- (p 43). A square matrix A is symmetric if A^T = A and skew-symmetric if A^T = −A.
- (p 46). An n × n matrix A is invertible (also called nonsingular) if it has an inverse matrix A^{-1} such that AA^{-1} = I_n and A^{-1}A = I_n. If A is not invertible, it is called a singular matrix.
- (p 33, Ex #43). The trace of a square matrix is the sum of the entries on its main diagonal.

THEOREMS

- (pp 34–38). These theorems list the basic properties of matrix operations. The only surprises are that we can have AB ≠ BA, and that (AB)^T = B^T A^T.
- (pp 46–48). Inverses are unique. If A and B are invertible n × n matrices, then AB, A^{-1}, and A^T are invertible, with (AB)^{-1} = B^{-1}A^{-1}, (A^{-1})^{-1} = A, and (A^T)^{-1} = (A^{-1})^T.

USEFUL EXERCISES

- (p 54, Ex #52). The matrix A = [ a b ; c d ] is invertible if and only if ad − bc ≠ 0, and in this case A^{-1} = 1/(ad − bc) · [ d −b ; −c a ].
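The 2 × 2 inverse formula in the exercise above can be checked numerically. A minimal pure-Python sketch (the function name and the use of exact `Fraction` arithmetic are my choices, not the book's):

```python
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]] via the ad - bc formula; None if singular."""
    det = a * d - b * c
    if det == 0:
        return None  # singular: no inverse exists
    det = Fraction(det)
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# det = 1*4 - 2*3 = -2, so the matrix is invertible
print(inverse_2x2(1, 2, 3, 4))
```

Multiplying the original matrix by the result reproduces I_2 whenever ad − bc ≠ 0.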

## Chapter 2: Solving Linear Systems

DEFINITIONS

- 2.1 (p 86). The definitions of row echelon form and reduced row echelon form are easier to understand than to write down. You need to know how to use them.
- 2.2 (p 87). These are the elementary row operations on a matrix: (I) interchange two rows; (II) multiply a row by a nonzero number; (III) add a multiple of one row to another.
- 2.3 (p 89). Two matrices are row equivalent if one can be obtained from the other by a finite sequence of elementary row operations.
- 2.4 (p 117). A matrix obtained from the identity matrix by a single elementary row operation is called an elementary matrix.

THEOREMS

- (pp 89–93). Every matrix is row equivalent to a matrix in row echelon form, and row equivalent to a unique matrix in reduced row echelon form.
- 2.3 (p 96). Linear systems are equivalent if and only if their augmented matrices are row equivalent.
- 2.4 (p 108). A homogeneous linear system has a nontrivial solution whenever it has more unknowns than equations.
- (pp …). Elementary row operations can be done by multiplying by elementary matrices.
- 2.7 (p 118). Any elementary matrix has an inverse that is an elementary matrix of the same type.
- 2.8 (p 119). A matrix is invertible if and only if it can be written as a product of elementary matrices.
- Cor. (p 119). A matrix is invertible if and only if it is row equivalent to the identity.
- 2.9 (p 120). The homogeneous system Ax = 0 has a nontrivial solution if and only if A is singular.
- (p 122). A square matrix is singular if and only if it is row equivalent to a matrix with a row of zeros.
- (p 124). If A and B are n × n matrices with AB = I_n, then BA = I_n and thus B = A^{-1}.

The summary on page 120 is important. The following conditions are equivalent for an n × n matrix A: (1) A is invertible; (2) Ax = 0 has only the trivial solution; (3) A is row equivalent to the identity matrix; (4) for every vector b, the system Ax = b has a unique solution; (5) A is a product of elementary matrices.

1. (pp …). Gauss-Jordan reduction. This algorithm puts a matrix in reduced row echelon form. You need to know how to use it quickly and accurately. Gaussian elimination uses back substitution after putting the matrix in row echelon form.
2. (pp …). To find the inverse A^{-1} of an invertible matrix A, start with the matrix [A | I_n] and use Gauss-Jordan reduction to reduce it to [I_n | A^{-1}].
3. (p 121). To write an invertible matrix A as a product of elementary matrices, row reduce it to the identity and keep track of the corresponding elementary matrices. If E_k ⋯ E_2 E_1 A = I_n, then A = E_1^{-1} E_2^{-1} ⋯ E_k^{-1}.
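Algorithm 1, Gauss-Jordan reduction, can be sketched in pure Python. The function name and the use of exact `Fraction` arithmetic are my own choices, not the book's:

```python
from fractions import Fraction

def rref(M):
    """Gauss-Jordan reduction: return the reduced row echelon form of M."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    lead = 0
    for r in range(rows):
        if lead >= cols:
            break
        # find a pivot in column `lead`, swapping rows if needed
        i = r
        while A[i][lead] == 0:
            i += 1
            if i == rows:
                i, lead = r, lead + 1
                if lead == cols:
                    return A
        A[i], A[r] = A[r], A[i]
        # scale the pivot row so its leading entry is 1
        A[r] = [x / A[r][lead] for x in A[r]]
        # clear the pivot column in every other row
        for i in range(rows):
            if i != r:
                A[i] = [x - A[i][lead] * y for x, y in zip(A[i], A[r])]
        lead += 1
    return A

print(rref([[1, 2, 3], [4, 5, 6]]))  # [[1, 0, -1], [0, 1, 2]]
```

Running it on the augmented matrix [A | I_n] yields [I_n | A^{-1}] for an invertible A, exactly as in algorithm 2.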

## Chapter 4: Real Vector Spaces

DEFINITIONS

- 4.4 (p 189). A vector space is a set V on which two operations are defined, called vector addition and scalar multiplication, and denoted by + and ·, respectively. The operation + (vector addition) must satisfy:
  - Closure: for all vectors u and v in V, the sum u + v belongs to V.
  - (1) Commutative law: for all u, v in V, u + v = v + u.
  - (2) Associative law: for all u, v, w in V, u + (v + w) = (u + v) + w.
  - (3) Additive identity: V contains an additive identity element, denoted by 0, such that 0 + v = v and v + 0 = v for all v in V.
  - (4) Additive inverses: for each v in V, the equations v + x = 0 and x + v = 0 have a solution x in V, called an additive inverse of v and denoted by −v.

  The operation · (scalar multiplication) must satisfy:
  - Closure: for all real numbers c and all v in V, the product c·v belongs to V.
  - (5) Distributive law: c·(u + v) = c·u + c·v for all real c and all u, v in V.
  - (6) Distributive law: (c + d)·v = c·v + d·v for all real c, d and all v in V.
  - (7) Associative law: c·(d·v) = (cd)·v for all real c, d and all v in V.
  - (8) Unitary law: 1·v = v for all v in V.
- 4.5 (p 197). Let V be a vector space, and let W be a subset of V. If W is a vector space with respect to the operations in V, then W is called a subspace of V.
- 4.6 (p 201). A vector v is a linear combination of the vectors v_1, v_2, ..., v_k if there are real numbers a_1, a_2, ..., a_k with v = a_1 v_1 + a_2 v_2 + ... + a_k v_k.
- Def (p 203). For an m × n matrix A, the null space of A is {x | Ax = 0}.
- 4.7 (p 209). The set of all linear combinations of v_1, v_2, ..., v_k is called the span of the vectors, denoted span{v_1, v_2, ..., v_k}.
- 4.8 (p 211). A set of vectors S = {v_1, v_2, ..., v_k} spans the vector space V if span{v_1, v_2, ..., v_k} = V. In this case S is called a spanning set for V.
- 4.9 (p 218). The vectors v_1, v_2, ..., v_k are linearly independent if the equation x_1 v_1 + x_2 v_2 + ... + x_k v_k = 0 has only the trivial solution (all zeros), and linearly dependent if the equation has a nontrivial solution (not all zeros).
- (p 229). A set of vectors in V is a basis for V if it is a linearly independent spanning set.
- (p 238). A vector space V has dimension n, written dim(V) = n, if it has a basis with n elements.
- Def (p 261). Let S = {v_1, v_2, ..., v_n} be a basis for V. If v = a_1 v_1 + a_2 v_2 + ... + a_n v_n, then the column vector [v]_S with entries a_1, a_2, ..., a_n is called the coordinate vector of v with respect to the ordered basis S.
- Def (p 261). If S, T are bases for V, then the transition matrix P_{S←T} from T to S is defined by the equation [v]_S = P_{S←T} [v]_T.
- (p 270). The subspace spanned by the rows of a matrix is called its row space; the subspace spanned by the columns is its column space.
- (p 272). The row rank of a matrix is the dimension of its row space; the column rank is the dimension of its column space.

THEOREMS

- (pp 186, 195). These theorems deal with properties of vectors.
- 4.3 (p 198). Let V be a vector space, with operations + and ·, and let W be a subset of V. Then W is a subspace of V if and only if: W is nonempty (the zero vector belongs to W); W is closed under + (if u and v are in W, then u + v is in W); and W is closed under · (if v is in W and c is any real number, then c·v is in W).
- 4.4 (p 210). For any vectors v_1, v_2, ..., v_k, the subset span{v_1, v_2, ..., v_k} is a subspace.
- 4.7 (p 225). A set of vectors is linearly dependent if and only if one is a linear combination of the rest. Note: this theorem is probably the best way to think about linear dependence, although the usual definition gives the best way to check whether a set of vectors is linearly independent.
- 4.8 (p 232). A set of vectors is a basis for the vector space V if and only if every vector in V can be expressed uniquely as a linear combination of the vectors in the set.

- 4.9 (p 233). Any spanning set contains a basis.
- (p 236). If a vector space has a basis with n elements, then it cannot contain more than n linearly independent vectors.
- Cor. (pp …). Any two bases have the same number of elements; dim(V) is the maximum number of linearly independent vectors in V; dim(V) is the minimum number of vectors in any spanning set for V.
- (p 240). Any linearly independent set can be expanded to a basis.
- (p 241). If dim(V) = n and you have a set of n vectors, then to check that it forms a basis you only need to check one of the two conditions (spanning OR linear independence).
- (p 270). Row equivalent matrices have the same row space.
- (p 275). For any matrix (of any size), the row rank and column rank are equal.
- (p 276) [Rank-nullity theorem]. If A is any m × n matrix, then rank(A) + nullity(A) = n.
- (p 209). An n × n matrix has rank n if and only if it is row equivalent to the identity.
- (p 280). Ax = b has a solution if and only if the augmented matrix has the same rank as A.

Summary of results on invertibility (p 281). The following are equivalent for any n × n matrix A: (1) A is invertible; (2) Ax = 0 has only the trivial solution; (3) A is row equivalent to the identity; (4) Ax = b always has a solution; (5) A is a product of elementary matrices; (6) det(A) ≠ 0 (we can add this equivalent condition after we study determinants in Chapter 3); (7) rank(A) = n; (8) nullity(A) = 0; (9) the rows of A are linearly independent; (10) the columns of A are linearly independent.

1. (p 218). To test whether the vectors v_1, v_2, ..., v_k are linearly independent or linearly dependent: solve the equation x_1 v_1 + x_2 v_2 + ... + x_k v_k = 0 (the vectors must be the columns of a matrix). If the only solution is all zeros, the vectors are linearly independent; if there is a nonzero solution, they are linearly dependent.
2. (p 213, Example 8). To check that the vectors v_1, v_2, ..., v_k span the subspace W: show that for every vector b in W there is a solution to x_1 v_1 + x_2 v_2 + ... + x_k v_k = b.
3. (p 235, in the highlighted box). To find a basis for the subspace span{v_1, v_2, ..., v_k} by deleting vectors: (i) construct the matrix whose columns are the coordinate vectors for the v's; (ii) row reduce; (iii) keep the vectors whose column in the reduced matrix contains a leading 1. Note: the advantage here is that the answer consists of some of the vectors in the original set.
4. (p 240). To find a basis for a vector space that includes a given set of vectors, expand the set to include all of the standard basis vectors and use the previous algorithm.
5. (p 244). To find a basis for the solution space of the system Ax = 0: (i) row reduce A; (ii) identify the independent variables in the solution; (iii) in turn, let one of these variables be 1 and all others be 0; (iv) the corresponding solution vectors form a basis.
6. (p 250). To solve Ax = b, find one particular solution v_p and add to it all solutions of the homogeneous system Ax = 0.
7. (p 261). To find the transition matrix P_{S←T} (the purpose of the procedure is to allow a change of coordinates [v]_S = P_{S←T} [v]_T): (i) construct the matrix A whose columns are the coordinate vectors for the basis S; (ii) construct the matrix B whose columns are the coordinate vectors for the basis T; (iii) row reduce the matrix [A | B] to get [I | P_{S←T}]. Shorthand notation: row reduce [S | T] → [I | P_{S←T}].
8. (p 270). To find a simplified basis for the subspace span{v_1, v_2, ..., v_k}: (i) construct the matrix whose rows are the coordinate vectors for the v's; (ii) row reduce; (iii) the nonzero rows form a basis. Note: the advantage here is that the vectors have lots of zeros, so they are in a useful form.
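The independence test in algorithm 1 amounts to computing a rank: k vectors are linearly independent exactly when the matrix they form has rank k, and since row rank equals column rank, listing the vectors as rows is harmless. A minimal sketch (the function names are mine, not the book's):

```python
from fractions import Fraction

def rank(M):
    """Rank of M via forward elimination over exact Fractions."""
    A = [[Fraction(x) for x in row] for row in M]
    rk = 0
    for col in range(len(A[0])):
        # look for a pivot at or below row `rk` in this column
        pivot = next((r for r in range(rk, len(A)) if A[r][col] != 0), None)
        if pivot is None:
            continue
        A[rk], A[pivot] = A[pivot], A[rk]
        for r in range(rk + 1, len(A)):
            factor = A[r][col] / A[rk][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[rk])]
        rk += 1
    return rk

def independent(vectors):
    """v1..vk are linearly independent iff the matrix they form has rank k."""
    return rank(vectors) == len(vectors)

print(independent([[1, 0, 1], [0, 1, 1], [1, 1, 2]]))  # v3 = v1 + v2: False
```

Here the third vector is the sum of the first two, so the rank is 2 < 3 and the set is dependent.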

## Chapter 3: Determinants

DEFINITIONS

Important note: the definitions and theorems in this chapter apply to square matrices.

- 3.1, 3.2 (pp …). The determinant of an n × n matrix is the sum of all possible products formed by taking one entry from each row and each column; the sign of each product comes from the permutation determined by the order of the choice of rows and columns. Note: it is more important to understand how to find the determinant by row reducing or by expanding by cofactors than to understand this definition.
- 3.4 (p 157). Let A = [a_ij] be a matrix. The cofactor of a_ij is A_ij = (−1)^(i+j) det(M_ij), where M_ij is the matrix found by deleting the i-th row and j-th column of A.
- 3.5 (p 166). The adjoint of a matrix A is adj(A) = [A_ji], where A_ij is the cofactor of a_ij.

THEOREMS

Results connected to row reduction:
- 3.2 (p 147). Interchanging two rows (or columns) of a matrix changes the sign of its determinant.
- 3.5 (p 148). If every entry in a row (or column) has a factor k, then k can be factored out of the determinant.
- 3.6 (p 148). Adding a multiple of one row (or column) to another does not change the determinant.
- 3.7 (p 149). If A is in row echelon form, then det(A) is the product of the entries on the main diagonal.

Some consequences:
- 3.3 (p 147). If two rows (or columns) of a matrix are equal, then its determinant is zero.
- 3.4 (p 147). If a matrix has a row (or column) of zeros, then its determinant is zero.

Other facts about the determinant:
- 3.1 (p 146). For any matrix A, det(A^T) = det(A).
- 3.8 (p 152). The matrix A is invertible if and only if det(A) ≠ 0.
- (p 153). If A and B are both n × n matrices, then det(AB) = det(A) det(B).
- Cor. (p 154). If A is invertible, then det(A^{-1}) = 1/det(A).
- Cor. (p 154). If P is an invertible matrix and B = P^{-1}AP, then det(B) = det(A).

Expansion by cofactors:
- 3.10 (p 158). If A is an n × n matrix, then det(A) = Σ_{j=1}^{n} a_ij A_ij (expansion by cofactors along row i). Note: this can be used to define the determinant by induction, rather than as in Def 3.2.
- (p 167). For any matrix A we have A·adj(A) = det(A)·I and adj(A)·A = det(A)·I.
- Cor. (p 168). If A is invertible, then A^{-1} = (1/det(A)) adj(A).
- (p 170) [Cramer's rule]. If det(A) ≠ 0, then the system Ax = b has solution x_i = det(A_i)/det(A), where A_i is the matrix obtained from A by replacing the i-th column of A by b.

1. (pp …). Determinants can be computed via reduction to triangular form. (Keep track of what each elementary operation does to the determinant.)
2. (pp …). Determinants can be computed via expansion by cofactors.
3. (p 160). To find the area of a parallelogram in R^2: (i) find vectors u, v that determine the sides of the parallelogram; (ii) find the absolute value of the determinant of the 2 × 2 matrix with columns u, v. OR: (i) put the coordinates of 3 vertices into the 3 × 3 matrix given on p 161; (ii) find the absolute value of the determinant of that matrix.

USEFUL EXERCISES

- (p 155, Ex #10). If k is a scalar and A is an n × n matrix, then det(kA) = k^n det(A).
- (p 155, Ex #19). Matrices in block form: if A and B are square matrices, then det [ A 0 ; C B ] = det(A) det(B).
- (p 169, Ex #14). If A is an n × n matrix, then det(adj(A)) = [det(A)]^(n−1).
- (p 174, Ex #7). If A is invertible, then (adj(A))^{-1} = adj(A^{-1}).
- (p 306, Ex #26). To find the volume of a parallelepiped in R^3: (i) find vectors u, v, w that determine the edges of the parallelepiped; (ii) find the absolute value of the determinant of the 3 × 3 matrix with columns u, v, w. Note: this is the absolute value of (u × v) · w, which computes the volume (see p 303).
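Expansion by cofactors (method 2, here along the first row) translates directly into a short recursive function. This is a sketch for small matrices only; row reduction (method 1) is far faster for large ones:

```python
def det(A):
    """Determinant by cofactor expansion along the first row (Thm 3.10)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_0j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # ad - bc = -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # product of the diagonal = 24
```

The diagonal example illustrates Theorem 3.7, and a matrix with two equal rows comes out zero, as in Theorem 3.3.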

## Chapter 5: Inner Product Spaces

DEFINITIONS

- 5.1 (p 307). Let V be a real vector space. An inner product on V is a function that assigns to each ordered pair of vectors u, v in V a real number (u, v) satisfying: (1) (u, u) ≥ 0, and (u, u) = 0 if and only if u = 0; (2) (v, u) = (u, v) for any u, v in V; (3) (u + v, w) = (u, w) + (v, w) for any u, v, w in V; (4) (cu, v) = c(u, v) for any u, v in V and any scalar c.
- 5.2 (p 312). A vector space with an inner product is called an inner product space. It is a Euclidean space if it is also finite dimensional.
- Def (p 311). If V is a Euclidean space with ordered basis S, then the matrix C with (v, w) = [v]_S^T C [w]_S is called the matrix of the inner product with respect to S (see Theorem 5.2).
- 5.3 (p 315). The distance between vectors u and v is d(u, v) = ‖u − v‖.
- 5.4 (p 315). The vectors u and v are orthogonal if (u, v) = 0.
- 5.5 (p 315). A set S of vectors is orthogonal if any two distinct vectors in S are orthogonal. If each vector in S also has length 1, then S is orthonormal.
- 5.6 (p 332). The orthogonal complement of a subset W is W⊥ = {v in V | (v, w) = 0 for all w in W}. Think of the orthogonal complement this way: choose an orthonormal basis w_1, ..., w_m, v_{m+1}, ..., v_n for V, in which w_1, ..., w_m is a basis for W. Then W⊥ is the subspace spanned by v_{m+1}, ..., v_n.
- Def (p 341). Let W be a subspace with orthonormal basis w_1, ..., w_m. The orthogonal projection of a vector v on W is proj_W(v) = Σ_{i=1}^{m} (v, w_i) w_i.

THEOREMS

- 5.2 (p 309). Let V be a Euclidean space with ordered basis S = {u_1, u_2, ..., u_n}. If C is the matrix [(u_i, u_j)], then for every v, w in V the inner product is given by (v, w) = [v]_S^T C [w]_S.
- 5.3 (p 312) [Cauchy-Schwarz inequality]. If u, v belong to an inner product space, then |(u, v)| ≤ ‖u‖ ‖v‖.
- Cor. (p 314) [Triangle inequality]. If u, v belong to an inner product space, then ‖u + v‖ ≤ ‖u‖ + ‖v‖.
- 5.4 (p 316). A finite orthogonal set of nonzero vectors is linearly independent.
- 5.5 (p 320). Relative to an orthonormal basis, coordinate vectors can be found by using the inner product.
- 5.6 (p 321) [Gram-Schmidt]. Any finite dimensional subspace of an inner product space has an orthonormal basis.
- 5.7 (p 325). If S is an orthonormal basis for an inner product space V, then (u, v) = [u]_S · [v]_S.
- 5.9 (p 332). If W is a subspace of V, then W⊥ is a subspace with W ∩ W⊥ = {0}.
- (p 334). If W is a finite dimensional subspace of an inner product space V, then every vector in V can be written uniquely as a sum of a vector in W and a vector in W⊥. Notation: V = W ⊕ W⊥.
- 5.12 (p 235). The null space of a matrix is the orthogonal complement of its row space.

1. (p 321) The Gram-Schmidt process. To find an orthonormal basis for a finite dimensional subspace W: (i) start with any basis S = {u_1, ..., u_m} for W; (ii) begin constructing an orthogonal basis T′ = {v_1, ..., v_m} for W by letting v_1 = u_1; (iii) to find v_2, start with u_2 and subtract its projection onto v_1: v_2 = u_2 − ((u_2, v_1)/(v_1, v_1)) v_1; (iv) to find v_i, start with u_i and subtract its projections onto the previous v's: v_i = u_i − ((u_i, v_1)/(v_1, v_1)) v_1 − ((u_i, v_2)/(v_2, v_2)) v_2 − ... − ((u_i, v_{i−1})/(v_{i−1}, v_{i−1})) v_{i−1}; (v) construct the orthonormal basis T = {w_1, ..., w_m} by dividing each v_i by its length: w_i = v_i/‖v_i‖.
2. (p 341) To find the orthogonal projection of v onto a subspace W: (i) find an orthonormal basis w_1, ..., w_m for W; (ii) find the inner products (v, w_i) for 1 ≤ i ≤ m; (iii) proj_W(v) = Σ_{i=1}^{m} (v, w_i) w_i. Note: if it is any easier, you might find proj_{W⊥}(v) instead, since v = proj_W(v) + proj_{W⊥}(v).
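Steps (i)-(iv) of the Gram-Schmidt process, with the standard dot product on R^n, can be sketched as follows; the normalization step (v) is deliberately omitted so the arithmetic stays exact (the function names are mine):

```python
from fractions import Fraction

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    """Turn a basis into an orthogonal basis by subtracting projections."""
    orthogonal = []
    for u in basis:
        v = [Fraction(x) for x in u]
        for w in orthogonal:
            coeff = dot(v, w) / dot(w, w)  # (u_i, v_j) / (v_j, v_j)
            v = [x - coeff * y for x, y in zip(v, w)]
        orthogonal.append(v)
    return orthogonal

T = gram_schmidt([[1, 1, 0], [1, 0, 1]])
print(T)  # the two output vectors are orthogonal
```

Dividing each output vector by its length (step v) would then give an orthonormal basis, at the cost of irrational entries.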

## Chapter 6: Linear Transformations and Matrices

DEFINITIONS

- 6.1 (p 363). A function L : V → W between vector spaces is a linear transformation if L(cu + dv) = cL(u) + dL(v) for all u, v in V and all real numbers c, d.
- 6.2 (p 375). A linear transformation L : V → W is one-to-one if L(v_1) = L(v_2) implies v_1 = v_2.
- 6.3 (p 376). The kernel of a linear transformation L : V → W is ker(L) = {v in V | L(v) = 0}.
- 6.4 (p 378). The range of a linear transformation L : V → W is range(L) = {L(v) | v in V}. The linear transformation L is called onto if range(L) = W.
- Def (p 390). Let V, W be vector spaces, and let L : V → W be a linear transformation. If V has basis S and W has basis T, then the matrix A represents L (with respect to S and T) if [L(x)]_T = A[x]_S for all x in V. My notation: let A = M_{T,S}(L), so that [L(x)]_T = M_{T,S}(L) [x]_S.
- Def (p 369). If L : R^n → R^m, and S and T are the standard bases for R^n and R^m, respectively, then the matrix A that represents L is called the standard matrix representing L.
- 6.6 (p 410). The square matrices A, B are called similar if there is an invertible matrix P that is a solution of the equation B = X^{-1}AX.

THEOREMS

- 6.2 (p 368). A linear transformation is completely determined by what it does to basis vectors.
- 6.4 (p 376). The kernel of a linear transformation L : V → W is a subspace of V, and L is one-to-one if and only if its kernel is zero.
- 6.5 (p 378). The range of a linear transformation L : V → W is a subspace of W.
- 6.6 (p 381) [Rank-nullity theorem]. If L : V → W is a linear transformation, and V has finite dimension, then dim(ker(L)) + dim(range(L)) = dim(V). Note: this is a restatement of the rank-nullity theorem for matrices in Chapter 4.
- (p 384). A linear transformation is invertible if and only if it is one-to-one and onto.
- Cor. (p 384). If L : V → W and dim V = dim W, then L is one-to-one if and only if it is onto.
- 6.9 (p 389). If L : V → W is a linear transformation, and V and W have finite bases S and T, then the columns of the matrix M_{T,S}(L) that represents L with respect to S and T are found by applying L to the basis vectors in S. Shorthand notation: M_{T,S}(L) = [L(S)]_T.
- 6.12 (p 407). Let L : V → W be a linear transformation, and suppose that S, S′ are bases for V and T, T′ are bases for W. Then the matrix M_{T′,S′}(L) representing L with respect to S′ and T′ is related to the matrix M_{T,S}(L) representing L with respect to S and T by the equation M_{T′,S′}(L) = P_{T′←T} M_{T,S}(L) P_{S←S′}, where P_{S←S′} is the transition matrix that changes coordinates relative to S′ into coordinates relative to S, and P_{T′←T} is the transition matrix that changes coordinates relative to T into coordinates relative to T′.
- Cor. (p 409). Let L : V → V be a linear transformation, and suppose that S, S′ are bases for V. Then B = P^{-1}AP for B = M_{S′,S′}(L), A = M_{S,S}(L), and P = P_{S←S′}.
- (p 409). If L : V → V is a linear transformation represented by A, then dim(range(L)) = rank(A).
- (p 411). Square matrices are similar if and only if they represent the same linear transformation.
- (p 412). Similar matrices have the same rank.

1. (p 392). To find the matrix M_{T,S}(L) that represents L with respect to the bases S and T: (i) find L(v) for each basis vector v in S; (ii) for each v in S, find the coordinate vector [L(v)]_T of L(v) with respect to the basis T; (iii) put in the coordinate vectors [L(v)]_T as the columns of M_{T,S}(L). Shorthand notation: row reduce [T | L(S)] → [I | M_{T,S}(L)].
2. (p 380). To find a basis for the kernel of a linear transformation L : V → W: (i) choose bases for V and W and find the matrix A that represents L; (ii) find a basis for the solution space of the system Ax = 0.
3. (p 380). To find a basis for the range of a linear transformation L : V → W: (i) choose bases for V and W and find a matrix A that represents L; (ii) find a basis for the column space of A.
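In the special case of standard bases, Theorem 6.9 says the columns of the standard matrix are L(e_1), ..., L(e_n). A sketch (the helper name and the sample transformation are hypothetical, chosen only for illustration):

```python
def standard_matrix(L, n):
    """Columns of the standard matrix are L applied to the basis vectors e_j."""
    cols = []
    for j in range(n):
        e = [0] * n
        e[j] = 1  # standard basis vector e_j
        cols.append(L(e))
    # transpose the list of columns into a row-major matrix
    return [list(row) for row in zip(*cols)]

# hypothetical example: L(x, y) = (x + y, 2y) as a map R^2 -> R^2
L = lambda v: [v[0] + v[1], 2 * v[1]]
print(standard_matrix(L, 2))  # [[1, 1], [0, 2]]
```

Multiplying this matrix by any coordinate vector reproduces L, which is exactly why a linear transformation is determined by its values on a basis (Theorem 6.2).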

## Chapter 7: Eigenvalues and Eigenvectors

Given a linear transformation L : V → V, the goal of this chapter is to choose a basis for V that makes the matrix for L as simple as possible. We will try to find a basis that gives us a diagonal matrix for L.

DEFINITIONS

- 7.1 (pp 437, 442). Let L : V → V be a linear transformation. The real number λ is an eigenvalue of L if there is a nonzero vector x in V for which L(x) = λx. In this case x is an eigenvector of L (with associated eigenvalue λ). Note: this means that L maps the line determined by x back into itself.
- Def (p 450, Ex #14). If λ is an eigenvalue of the linear transformation L : V → V, then the eigenspace associated with λ is {x in V | L(x) = λx}. The eigenspace consists of all eigenvectors for λ, together with the zero vector.
- 7.2 (p 444). The characteristic polynomial of a square matrix A is det(λI − A).
- 7.3 (p 454). A linear transformation L : V → V is diagonalizable if it is possible to find a basis S for V such that the matrix M_{S,S}(L) of L with respect to S is a diagonal matrix.
- Def (p 456). A square matrix is diagonalizable if it is similar to a diagonal matrix.
- 7.4 (p 466). A square matrix A is orthogonal if A^{-1} = A^T.

THEOREMS

- 7.1 (p 445). The eigenvalues of a matrix are the real roots of its characteristic polynomial.
- 7.2 (p 454). Similar matrices have the same eigenvalues.
- 7.3 (p 456). A linear transformation L : V → V is diagonalizable if and only if it is possible to find a basis for V that consists of eigenvectors of L.
- 7.4 (p 456). The n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
- 7.5 (p 458). A square matrix is diagonalizable if its characteristic polynomial has distinct real roots.
- 7.6 (p 463). The characteristic polynomial of a symmetric matrix has real roots.
- 7.7 (p 464). For a symmetric matrix, eigenvectors associated with distinct eigenvalues are orthogonal.
- 7.8 (p 467). A square matrix is orthogonal if and only if its columns (rows) are orthonormal.
- 7.9 (p 467). Let A be a symmetric matrix. Then there exists an orthogonal matrix P such that P^{-1}AP is a diagonal matrix. The diagonal entries of P^{-1}AP are the eigenvalues of A.

1. (p 449) To find the eigenvalues and eigenvectors of the matrix A: (i) find the real solutions of the characteristic equation det(λI − A) = 0; (ii) for each eigenvalue λ from (i), find its associated eigenvectors by solving the linear system (λI − A)x = 0. Note: this solution space is the eigenspace determined by λ.
2. (p 472) To diagonalize the matrix A: (i) find the characteristic polynomial det(λI − A) of A; (ii) find the eigenvalues of A by finding the real roots of its characteristic polynomial; (iii) for each eigenvalue λ, find a basis for its eigenspace by solving the equation (λI − A)x = 0 (note: the diagonalization process fails if the characteristic polynomial has a complex root, OR if it has a real root of multiplicity k whose eigenspace has dimension < k); (iv) P^{-1}AP = D is a diagonal matrix if the columns of P are the basis vectors found in (iii) and D is the matrix whose diagonal entries are the eigenvalues of A (in exactly the same order).
3. (p 468) To diagonalize the symmetric matrix A: follow the steps in 2, but use the Gram-Schmidt process to find an orthonormal basis for each eigenspace. Note: in this case the diagonalization process always works, and P^{-1} = P^T since P is orthogonal.

USEFUL EXERCISES

- (p 451, Ex #21). If λ is an eigenvalue of A with eigenvector x, then λ^k is an eigenvalue of A^k with eigenvector x.
- (p 451, Ex #25). If λ is an eigenvalue of A with eigenvector x, and A is invertible, then 1/λ is an eigenvalue of A^{-1} with eigenvector x.
- (p 451, Ex #24). The matrix A is singular if and only if it has 0 as an eigenvalue.
- (p 452, Ex #32). If A is a 2 × 2 matrix, then its characteristic polynomial is λ² − tr(A)λ + det(A).
- (p 462, Ex #28). If B = P^{-1}AP and λ is an eigenvalue of A with eigenvector x, then λ is also an eigenvalue of B, with eigenvector P^{-1}x.
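For a 2 × 2 matrix the characteristic polynomial is λ² − tr(A)λ + det(A) (p 452, Ex #32), so step (i) of algorithm 1 reduces to the quadratic formula. A sketch returning only the real eigenvalues (the function name is mine):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Real eigenvalues of [[a, b], [c, d]] from det(lambda*I - A) = 0."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det  # discriminant of lambda^2 - tr*lambda + det
    if disc < 0:
        return []  # complex roots: no real eigenvalues
    root = math.sqrt(disc)
    return sorted([(tr - root) / 2, (tr + root) / 2])

print(eigenvalues_2x2(2, 1, 1, 2))  # symmetric matrix: eigenvalues 1 and 3
```

The symmetric example illustrates Theorem 7.6 (real roots), while a rotation matrix such as [[0, −1], [1, 0]] has a negative discriminant and no real eigenvalues.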

### Orthogonal Diagonalization of Symmetric Matrices

MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### 1 Eigenvalues and Eigenvectors

Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x

### 1 Determinants. Definition 1

Determinants The determinant of a square matrix is a value in R assigned to the matrix, it characterizes matrices which are invertible (det 0) and is related to the volume of a parallelpiped described

### LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

### Determinants. Dr. Doreen De Leon Math 152, Fall 2015

Determinants Dr. Doreen De Leon Math 52, Fall 205 Determinant of a Matrix Elementary Matrices We will first discuss matrices that can be used to produce an elementary row operation on a given matrix A.

### ( % . This matrix consists of \$ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&

Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Harvard College. Math 21b: Linear Algebra and Differential Equations Formula and Theorem Review

Harvard College Math 21b: Linear Algebra and Differential Equations Formula and Theorem Review Tommy MacWilliam, 13 tmacwilliam@college.harvard.edu May 5, 2010 1 Contents Table of Contents 4 1 Linear Equations

### Preliminaries of linear algebra

Preliminaries of linear algebra (for the Automatic Control course) Matteo Rubagotti March 3, 2011 This note sums up the preliminary definitions and concepts of linear algebra needed for the resolution

### 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)

Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### 2.1: Determinants by Cofactor Expansion. Math 214 Chapter 2 Notes and Homework. Evaluate a Determinant by Expanding by Cofactors

2.1: Determinants by Cofactor Expansion Math 214 Chapter 2 Notes and Homework Determinants The minor M ij of the entry a ij is the determinant of the submatrix obtained from deleting the i th row and the

### INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL

SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL Eighth Edition, Prentice Hall, 2005. Dr. Grigore CĂLUGĂREANU Department of Mathematics

### MATH36001 Background Material 2015

MATH3600 Background Material 205 Matrix Algebra Matrices and Vectors An ordered array of mn elements a ij (i =,, m; j =,, n) written in the form a a 2 a n A = a 2 a 22 a 2n a m a m2 a mn is said to be

### Linear Algebra Test 2 Review by JC McNamara

Linear Algebra Test 2 Review by JC McNamara 2.3 Properties of determinants: det(a T ) = det(a) det(ka) = k n det(a) det(a + B) det(a) + det(b) (In some cases this is true but not always) A is invertible

### Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

### MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix.

MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix. Inverse matrix Definition. Let A be an n n matrix. The inverse of A is an n n matrix, denoted

### Matrix Inverse and Determinants

DM554 Linear and Integer Programming Lecture 5 and Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1 2 3 4 and Cramer s rule 2 Outline 1 2 3 4 and

### (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.

Theorem 1.7.1 (Properties of Triangular Matrices): (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product
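Part (a) can be sanity-checked with a few lines of Python (the matrix L is an illustrative example, not from the text):

```python
def transpose(A):
    """Transpose a matrix given as a list of rows."""
    return [list(row) for row in zip(*A)]

def is_lower_triangular(A):
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(len(A)) if j > i)

def is_upper_triangular(A):
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(len(A)) if j < i)

L = [[1, 0, 0],
     [2, 3, 0],
     [4, 5, 6]]          # lower triangular

assert is_lower_triangular(L)
assert is_upper_triangular(transpose(L))   # part (a) of the theorem
```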

### Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms. Definition: Let V be a vector space over F ∈ {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

### Sergei Silvestrov, Christopher Engström, Karl Lundengård, Johan Richter, Jonas Österberg. November 13, 2014

Sergei Silvestrov, Christopher Engström, Karl Lundengård, Johan Richter, Jonas Österberg. November 13, 2014. Analysis. Today's lecture: course overview; repetition of matrices and elementary operations; repetition of solvability of

### 1 Introduction to Matrices

1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

### Topic 1: Matrices and Systems of Linear Equations.

Topic 1: Matrices and Systems of Linear Equations. Let us start with a review of some linear algebra concepts we have already learned, such as matrices, determinants, etc. Also, we shall review the method

### UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure

UNIT 2 MATRICES - I Matrices - I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress

### A = [a ij ] is a 2 × 3 matrix. Square matrix: a square matrix is one that has an equal number of rows and columns, that is, n = m. Some examples of square matrices are

This document deals with the fundamentals of matrix algebra and is adapted from B.C. Kuo, Linear Networks and Systems, McGraw Hill, 1967. It is presented here for educational purposes. 1 Introduction In

### 2.1: MATRIX OPERATIONS

2.1: MATRIX OPERATIONS. What are diagonal entries and the main diagonal of a matrix? What is a diagonal matrix? When are matrices equal? Scalar Multiplication. Matrix Addition Theorem: Let A, B, and

### Solution. Area(OABC) = Area(OAB) + Area(OBC) = 1/2 det([ 5 2 ; … ]) + … Question 2. Let A = … (a) Calculate the nullspace of the matrix A.

Solutions to Math 30 Take-home prelim Question. Find the area of the quadrilateral OABC on the figure below, coordinates given in brackets. [See pp. 60 63 of the book.] y C(, 4) B(, ) A(5, ) O x Area(OABC)

### Problems for Advanced Linear Algebra Fall 2012

Problems for Advanced Linear Algebra Fall 2012 Class will be structured around students presenting complete solutions to the problems in this handout. Please only agree to come to the board when you are

### 1.5 Elementary Matrices and a Method for Finding the Inverse

1.5 Elementary Matrices and a Method for Finding the Inverse. Definition: An n × n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation. Reminder:
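The definition can be illustrated in Python: build an elementary matrix from I 3 with one type (III) row operation, and check that left-multiplying by it applies the same operation (the matrices here are illustrative):

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# elementary matrix of type III: add 2 * (row 0) to row 2 of I_3
E = identity(3)
E[2][0] = 2

A = [[1, 2], [3, 4], [5, 6]]
EA = matmul(E, A)

# left-multiplying by E performs the same row operation on A
assert EA == [[1, 2], [3, 4], [5 + 2 * 1, 6 + 2 * 2]]
```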

### 2.5 Elementary Row Operations and the Determinant

2.5 Elementary Row Operations and the Determinant. Recall: Let A be a 2 × 2 matrix A = [ a b ; c d ]. The determinant of A, denoted by det(A) or |A|, is the number ad − bc. So for example if A = [ 2 4 ; … ], det(A) = 2(5)
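The effect of each elementary row operation on a 2 × 2 determinant can be checked directly; a small sketch with illustrative entries:

```python
def det2(a, b, c, d):
    """det of [[a, b], [c, d]] = ad - bc."""
    return a * d - c * b

# (I) interchanging two rows changes the sign of the determinant
assert det2(1, 2, 3, 4) == -det2(3, 4, 1, 2)

# (II) multiplying one row by k multiplies the determinant by k
k = 5
assert det2(k * 1, k * 2, 3, 4) == k * det2(1, 2, 3, 4)

# (III) adding a multiple of one row to another leaves the determinant unchanged
assert det2(1, 2, 3 + 7 * 1, 4 + 7 * 2) == det2(1, 2, 3, 4)
```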

### Mathematics Notes for Class 12 chapter 3. Matrices

Mathematics Notes for Class 12, chapter 3: Matrices. A matrix is a rectangular arrangement of numbers (real or complex). A matrix is enclosed by [ ] or ( ). Compact form

### Determinants. Chapter Properties of the Determinant

Chapter 4 Determinants Chapter 3 entailed a discussion of linear transformations and how to identify them with matrices. When we study a particular linear transformation we would like its matrix representation

### Cofactor Expansion: Cramer's Rule

Cofactor Expansion: Cramer's Rule. MATH 322, Linear Algebra I. J. Robert Buchanan, Department of Mathematics, Spring 2015. Introduction: Today we will focus on developing an efficient method for calculating

### Applied Linear Algebra I Review page 1

Applied Linear Algebra Review 1

I. Determinants
  A. Definition of a determinant
    1. Using sum
      a. Permutations
        i. Sign of a permutation
        ii. Cycle
    2. Uniqueness of the determinant function in terms of properties

### Name: Section Registered In:

Name: Section Registered In: Math 125 Exam 3, Version 1, April 24, 2006. 60 total points possible. 1. (5 pts) Use Cramer's Rule to solve 3x + 4y = 30, x − 2y = 8. Be sure to show enough detail that shows you are
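Reading the second equation as x − 2y = 8 (the sign is garbled in the extraction), Cramer's rule for a general 2 × 2 system can be sketched as:

```python
def cramer_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by Cramer's rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("coefficient matrix is singular")
    x = (e * d - b * f) / det   # replace column 1 by the right-hand side
    y = (a * f - e * c) / det   # replace column 2 by the right-hand side
    return x, y

# the exam's system, assuming the second equation is x - 2y = 8
x, y = cramer_2x2(3, 4, 1, -2, 30, 8)
assert abs(3 * x + 4 * y - 30) < 1e-12
assert abs(x - 2 * y - 8) < 1e-12
```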

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality, week 3-4, Fall 2006. Dot product of R n: The inner product or dot product of R n is a function ⟨ , ⟩ defined by ⟨u, v⟩ = a 1 b 1 + a 2 b 2 + ... + a n b n for u = (a 1, a 2, ..., a n) T, v = (b 1,
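The dot product definition translates directly; a minimal sketch with illustrative vectors:

```python
def inner(u, v):
    """Dot product <u, v> = a1*b1 + a2*b2 + ... + an*bn on R^n."""
    assert len(u) == len(v)
    return sum(a * b for a, b in zip(u, v))

u = [1, 2, 3]
v = [4, 5, 6]
assert inner(u, v) == 1 * 4 + 2 * 5 + 3 * 6    # = 32
# symmetry and the induced squared norm
assert inner(u, v) == inner(v, u)
assert inner(u, u) == 1 + 4 + 9                # ||u||^2 = 14
```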

### Practice Math 110 Final. Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16.

Practice Math 110 Final. Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16. 1. Let A = [ 3 1 1 ; 3 3 2 ; 6 6 5 ]. a. Use Gauss elimination to reduce A to an upper triangular

### NOTES on LINEAR ALGEBRA 1

School of Economics, Management and Statistics, University of Bologna, Academic Year 2015/16. NOTES on LINEAR ALGEBRA for the students of Stats and Maths. This is a modified version of the notes by Prof Laura

### 1 Sets and Set Notation.

LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

### 1.2) x − 3y − 2z = 6, 2x − 4y − 3z = 8, 3x + 6y + 8z = 5. 1.5) x + 3y − 2z + 5t = 4, 2x + 8y − z + 9t = 9, 3x + 5y − 12z + 17t = 7. Linear Algebra-Lab 2

Linear Algebra-Lab 1. 1) Use Gaussian elimination to solve the following systems: 1.1) x 1 + x 2 − 2x 3 + 4x 4 = 5, 2x 1 + 2x 2 − 3x 3 + x 4 = 3, 3x 1 + 3x 2 − 4x 3 − 2x 4 = 1. 1.4) x + y + 2z = 4, 2x + 3y + 6z = 10

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

Facts About Eigenvalues, by Dr David Butler. Definitions: Suppose A is an n × n matrix. An eigenvalue of A is a number λ such that Av = λv for some nonzero vector v. An eigenvector of A is a nonzero vector v
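The defining equation Av = λv is easy to verify numerically; a sketch with an illustrative matrix (not one of Butler's examples):

```python
def matvec(A, v):
    """Matrix-vector product for A as a list of rows."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# illustrative symmetric matrix with eigenvalues 3 and 1
A = [[2, 1],
     [1, 2]]
v = [1, 1]      # eigenvector for lambda = 3
w = [1, -1]     # eigenvector for lambda = 1

assert matvec(A, v) == [3 * x for x in v]   # Av = 3v
assert matvec(A, w) == [1 * x for x in w]   # Aw = 1w
```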

### Linear Algebra Concepts

University of Central Florida, School of Electrical Engineering & Computer Science. EGN-3420 - Engineering Analysis, Fall 2009 - dcm. Linear Algebra Concepts. Vector Spaces: To define the concept of a vector

### Introduction to Matrix Algebra I

Appendix A Introduction to Matrix Algebra I Today we will begin the course with a discussion of matrix algebra. Why are we studying this? We will use matrix algebra to derive the linear regression model

### by the matrix A results in a vector which is a reflection of the given

Eigenvalues & Eigenvectors. Example: Suppose … Then … So, geometrically, multiplying a vector in R 2 by the matrix A results in a vector which is a reflection of the given vector about the y-axis. We observe that
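The snippet's matrix is lost in extraction; assuming the standard reflection about the y-axis, A = [ −1 0 ; 0 1 ], the behavior can be checked as:

```python
def matvec(A, v):
    """Matrix-vector product for A as a list of rows."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# reflection about the y-axis negates the x-coordinate
A = [[-1, 0],
     [ 0, 1]]

assert matvec(A, [3, 5]) == [-3, 5]
# reflecting twice returns the original vector
assert matvec(A, matvec(A, [3, 5])) == [3, 5]
```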

### Chapter 1 - Matrices & Determinants

Chapter 1 - Matrices & Determinants Arthur Cayley (August 16, 1821 - January 26, 1895) was a British Mathematician and Founder of the Modern British School of Pure Mathematics. As a child, Cayley enjoyed

### Review: Vector space

Math 2F Linear Algebra Lecture 13 1 Basis and dimensions Slide 1 Review: Subspace of a vector space. (Sec. 4.1) Linear combinations, l.d., l.i. vectors. (Sec. 4.3) Dimension and Base of a vector space.

### Summary of week 8 (Lectures 22, 23 and 24)

WEEK 8. Summary of week 8 (Lectures 22, 23 and 24). This week we completed our discussion of Chapter 5 of [VST]. Recall that if V and W are inner product spaces then a linear map T : V → W is called an isometry

### Solutions to Linear Algebra Practice Problems

Solutions to Linear Algebra Practice Problems. Find all solutions to the following systems of linear equations: (a) … (b) … Answer: (a) We create the

### GRA6035 Mathematics. Eivind Eriksen and Trond S. Gustavsen. Department of Economics

GRA6035 Mathematics. Eivind Eriksen and Trond S. Gustavsen, Department of Economics. © Eivind Eriksen, Trond S. Gustavsen. 1. Edition. Students enrolled in the course GRA6035 Mathematics for the academic

### CHARACTERISTIC ROOTS AND VECTORS

CHARACTERISTIC ROOTS AND VECTORS. 1. DEFINITION OF CHARACTERISTIC ROOTS AND VECTORS. 1.1 Statement of the characteristic root problem: Find values of a scalar λ for which there exist vectors x ≠ 0 satisfying

### Problems. Universidad San Pablo - CEU. Mathematical Fundaments of Biomedical Engineering 1. Author: First Year Biomedical Engineering

Universidad San Pablo - CEU. Mathematical Fundaments of Biomedical Engineering 1: Problems. Author: First Year Biomedical Engineering. Supervisor: Carlos Oscar S. Sorzano. September 15, 2013. Chapter 3, Lay,

### Linear Algebra Review. Vectors

Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

### APPLICATIONS OF MATRICES. Adj A is nothing but the transpose of the co-factor matrix [A ij ] of A.

APPLICATIONS OF MATRICES ADJOINT: Let A = [a ij ] be a square matrix of order n. Let Aij be the co-factor of a ij. Then the n th order matrix [A ij ] T is called the adjoint of A. It is denoted by adj

### DETERMINANTS

DETERMINANTS. 1. Systems of two equations in two unknowns. A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1, a 21 x 1 + a 22 x 2 = b 2. This can be written more concisely in

### We know a formula for and some properties of the determinant. Now we see how the determinant can be used.

Cramer's rule, inverse matrix, and volume. We know a formula for and some properties of the determinant. Now we see how the determinant can be used. Formula for A −1 . We know: [ a b ; c d ] −1 = (1/(ad − bc)) [ d −b ; −c a ]. Can we
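The 2 × 2 inverse formula can be verified with exact rational arithmetic (the entries below are illustrative):

```python
from fractions import Fraction

def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]]: (1/(ad - bc)) * [[d, -b], [-c, a]]."""
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is singular")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[1, 2], [3, 4]]
Ainv = inv2(1, 2, 3, 4)

# check A * A^{-1} = I
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```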

### MAT 242 Test 2 SOLUTIONS, FORM T

MAT 242 Test 2 SOLUTIONS, FORM T. 1. Let v 1 = …, v 2 = …, v 3 = …, and v 4 = …. a. [… points] The set { v 1, v 2, v 3, v 4 } is linearly dependent. Find a nontrivial linear combination of these

### Matrices, Determinants and Linear Systems

September 21, 2014. Matrices: A matrix A m × n is an array of numbers in rows and columns, A = [ a 11 a 12 … a 1n ; a 21 a 22 … a 2n ; … ; a m1 a m2 … a mn ], with rows r 1, …, r m and columns c 1, …, c n. We say that the dimension of A is m × n (we

### Section 2.1. Section 2.2. Exercise 6: We have to compute the product AB in two ways, where , B =. 2 1 3 5 A =

Section 2.1 Exercise 6: We have to compute the product AB in two ways, where 4 2 A = 3 0 1 3, B =. 2 1 3 5 Solution 1. Let b 1 = (1, 2) and b 2 = (3, 1) be the columns of B. Then Ab 1 = (0, 3, 13) and

### 3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES. 3.1. Definition. So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R 2 and R 3. But these have more structure than just that of a vector space.

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013. MATH 171 BASIC LINEAR ALGEBRA, B. KITCHENS. 1. Lines in two-dimensional space. The equation (1) 2x − y = 3 describes a line in two-dimensional space. The coefficients of x and y in the equation

### MATH 551 - APPLIED MATRIX THEORY

MATH 551 - APPLIED MATRIX THEORY. FINAL TEST: SAMPLE with SOLUTIONS (25 points). NAME: PROBLEM 1 (3 points): A web of 5 pages is described by a directed graph whose matrix is given by A. Do the following (… points

### 1 Gaussian Elimination

Contents

1 Gaussian Elimination
  1.1 Elementary Row Operations
  1.2 Some matrices whose associated system of equations are easy to solve
  1.3 Gaussian Elimination
  1.4 Gauss-Jordan reduction and the Reduced
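The elimination steps listed in the contents can be sketched as one routine (an illustrative implementation, not the handout's code), using exact rationals to avoid rounding:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve Ax = b for square A by Gaussian elimination with partial pivoting."""
    n = len(A)
    # augmented matrix [A | b] with exact rational entries
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]            # row interchange
        for r in range(col + 1, n):                    # eliminate below the pivot
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    x = [Fraction(0)] * n                              # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
assert gauss_solve([[2, 1], [1, 3]], [5, 10]) == [1, 3]
```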

### ADVANCED LINEAR ALGEBRA FOR ENGINEERS WITH MATLAB. Sohail A. Dianat. Rochester Institute of Technology, New York, U.S.A. Eli S.

ADVANCED LINEAR ALGEBRA FOR ENGINEERS WITH MATLAB. Sohail A. Dianat, Rochester Institute of Technology, New York, U.S.A. Eli S. Saber, Rochester Institute of Technology, New York, U.S.A. © CRC Press, Taylor

### Chapter 4: Binary Operations and Relations

© Dr. Oksana Shatalov, Fall 2014. Chapter 4: Binary Operations and Relations. 4.1: Binary Operations. DEFINITION 1. A binary operation on a nonempty set A is a function from A × A to A. Addition, subtraction,

### Lecture 6. Inverse of Matrix

Lecture 6: Inverse of Matrix. Recall that any linear system can be written as a matrix equation Ax = b. In the one-dimensional case, i.e., A is 1 × 1, Ax = b can be easily solved as x = b/A = A −1 b, provided that

### University of Ottawa

University of Ottawa Department of Mathematics and Statistics MAT 1302A: Mathematical Methods II Instructor: Alistair Savage Final Exam April 2013 Surname First Name Student # Seat # Instructions: (a)

### MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II, Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). A Jordan block is a square matrix of the form J = [ λ 1 0 … 0 ; 0 λ 1 … 0 ; 0 0 λ … 0 ; … ]

### Lecture 11. Shuanglin Shao. October 2nd and 7th, 2013

Lecture 11, Shuanglin Shao, October 2nd and 7th, 2013. Matrix determinants: addition. Determinants: multiplication. Adjoint of a matrix. Cramer's rule to solve a linear system. Recall that from the previous

### Math 115A HW4 Solutions, University of California, Los Angeles. det [ 5 − 2i, 6 + 4i ; −3 + i, 7i ] = (5 − 2i)7i − (6 + 4i)(−3 + i) = 35i + 14 − (−22 − 6i) = 36 + 41i.

Math 115A HW4 Solutions, September 5, 2012, University of California, Los Angeles. Problem 4.1.3b: Calculate the determinant of [ 5 − 2i, 6 + 4i ; −3 + i, 7i ]. Solution: The textbook's instructions give us (5 − 2i)7i − (6 + 4i)(
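The computation can be replayed with Python's built-in complex numbers (j denotes the imaginary unit):

```python
# the 2x2 matrix from the problem, with the minus signs restored:
# [[5 - 2i, 6 + 4i],
#  [-3 + i,    7i ]]
a, b = 5 - 2j, 6 + 4j
c, d = -3 + 1j, 7j

# det = ad - bc
det = a * d - b * c
assert det == 36 + 41j
```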

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

### Diagonal, Symmetric and Triangular Matrices

Contents

1 Diagonal, Symmetric and Triangular Matrices
2 Diagonal Matrices
  2.1 Products, Powers and Inverses of Diagonal Matrices
    2.1.1 Theorem (Powers of Matrices)
  2.2 Multiplying Matrices on the Left and Right by

### Chapter 6. Orthogonality

6.3 Orthogonal Matrices. Chapter 6. Orthogonality. Definition 6.4: An n × n matrix A is orthogonal if A T A = I. Note: We will see that the columns of an orthogonal matrix must be

### A = [ 1 0 5 3 3 ; 0 0 0 1 3 ; 0 0 0 0 0 ; 0 0 0 0 0 ]

Solutions: Assignment 4. 1. Find the redundant column vectors of the given matrix A by inspection. Then find a basis of the image of A and a basis of the kernel of A. The second and third columns are

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra, Lecture 18: Rank and nullity of a matrix. Nullspace: Let A = (a ij ) be an m × n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### Using row reduction to calculate the inverse and the determinant of a square matrix

Using row reduction to calculate the inverse and the determinant of a square matrix Notes for MATH 0290 Honors by Prof. Anna Vainchtein 1 Inverse of a square matrix An n n square matrix A is called invertible

### 4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns

L. Vandenberghe, EE133A (Spring 2016). 4. Matrix inverses: left and right inverse; linear independence; nonsingular matrices; matrices with linearly independent columns; matrices with linearly independent rows

### University of Lille I PC first year list of exercises n 7. Review

University of Lille I, PC first year, list of exercises no. 7: Review. Exercise 1. Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

### Algebra and Linear Algebra

Vectors. Coordinate frames. 2D implicit curves. 2D parametric curves. 3D surfaces. Algebra and Linear Algebra. Algebra: numbers and operations on numbers, 2 + 3 = 5, 3 × 7 = 21. Linear algebra: tuples, triples,...

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### 1. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither.

Math Exam - Practice Problem Solutions. 1. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither. (a) … (c) Since each row has a leading 1 that

### 13 MATH FACTS 101. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 13 MATH FACTS. 13.1 Vectors. 13.1.1 Definition. We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

### Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible:

Cramer's Rule and the Adjugate. Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible: Theorem [Cramer's Rule]: If A is an invertible

### MA 242 LINEAR ALGEBRA C1, Solutions to Second Midterm Exam

MA 242 LINEAR ALGEBRA C1, Solutions to Second Midterm Exam. Prof. Nikola Popovic, November 9, 2006, 9:30am – … am. Problem 1 (5 points). Let the matrix A be given by [ 5 6 5 ; 4 5 … ]. (a) Find the inverse A −1 of A, if it exists.

### T ( Σ a i x i ) = Σ a i T (x i ).

Chapter 2. Defn 1. (p. 65) Let V and W be vector spaces (over F). We call a function T : V → W a linear transformation from V to W if, for all x, y ∈ V and c ∈ F, we have (a) T(x + y) = T(x) + T(y) and (b)
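Conditions (a) and (b) can be checked numerically for a map given by a matrix (the matrix here is an illustrative choice):

```python
def T(v):
    """A linear map R^2 -> R^2 given by an illustrative matrix."""
    A = [[1, 2],
         [0, 3]]
    return [sum(a * x for a, x in zip(row, v)) for row in A]

x, y, c = [1, 4], [2, -1], 5

# (a) additivity: T(x + y) = T(x) + T(y)
xy = [a + b for a, b in zip(x, y)]
assert T(xy) == [a + b for a, b in zip(T(x), T(y))]

# (b) homogeneity: T(c x) = c T(x)
cx = [c * a for a in x]
assert T(cx) == [c * a for a in T(x)]
```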

### B such that AB = I and BA = I. (We say B is an inverse of A.) Definition A square matrix A is invertible (or nonsingular) if matrix

Matrix inverses. Recall... Definition: A square matrix A is invertible (or nonsingular) if there is a matrix B such that AB = I and BA = I. (We say B is an inverse of A.) Remark: Not all square matrices are invertible.

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants. 8.1 Introduction. Determinants give us important information about square matrices and, as we'll soon see, are essential for the computation of eigenvalues. You

### Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8. Matrices II: inverses. We have learnt how to add, subtract and multiply matrices, but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we

### Sec 4.1 Vector Spaces and Subspaces

Sec 4.1 Vector Spaces and Subspaces. Motivation: Let S be the set of all solutions to the differential equation y′′ + y = 0. Let T be the set of all 2 × 3 matrices with real entries. These two sets share many common

### Recall the basic property of the transpose (for any A): Av · Aw = v · (A t A)w, for all v, w ∈ R n.

ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

### Advanced Techniques for Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Advanced Techniques for Mobile Robotics: Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz. Vectors: arrays of numbers. Vectors represent a point in an n-dimensional

### Math 315: Linear Algebra Solutions to Midterm Exam I

Math 315: Linear Algebra, Solutions to Midterm Exam I. 1. Consider the following two systems of linear equations: (I) ax + by = k, cx + dy = l; (II) ax + by = 0, cx + dy = 0. (a) Prove: If x = x 1, y = y 1 and x = x 2, y =

### Definition A square matrix M is invertible (or nonsingular) if there exists a matrix M 1 such that

10. Inverse Matrix. Definition: A square matrix M is invertible (or nonsingular) if there exists a matrix M −1 such that M −1 M = I = M M −1. Inverse of a 2 × 2 Matrix: Let M and N be the matrices M = [ a b ; c d ], N = [ d −b ; −c

### Similar matrices and Jordan form

Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

### Solutions to Review Problems

Chapter 1. Solutions to Review Problems. Exercise 42: Which of the following equations are not linear and why? (a) x 1^2 + 3x 2 − 2x 3 = 5. (b) x 1 + x 1 x 2 + 2x 3 = 1. (c) x 1 + 2 x 2 + x 3 = 5. (a

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### Lecture Notes 2: Matrices as Systems of Linear Equations

2: Matrices as Systems of Linear Equations. 33A Linear Algebra, Puck Rombach. Last updated: April 13, 2016. Systems of Linear Equations: Systems of linear equations can represent many things. You have probably