Definition: A vector space V is a non-empty set of objects, called vectors, on which the operations of addition and scalar multiplication have been defined. The operations are subject to ten axioms. For any u, v ∈ V and any scalar c:

1. u + v ∈ V
4. There is a zero vector 0 in V such that u + 0 = u
6. The scalar multiple of u by c, denoted cu, is in V

The remaining seven axioms may be found in Chapter 4.1 of the textbook. I will not expect you to memorize the remaining seven axioms.

Definition: W is a vector subspace of the vector space V if W is a subset of V that is itself a vector space. One only needs to check axioms 1, 4, and 6.

Definition: A subspace W of the vector space V is a proper subspace if W ≠ V.

Definition: A linear combination of the vectors v_1, v_2, ..., v_k is any vector of the form

c_1 v_1 + c_2 v_2 + ... + c_k v_k

where c_1, c_2, ..., c_k are scalars.

Definition: The span of the vectors v_1, ..., v_k is

Span{v_1, ..., v_k} = {all linear combinations of v_1, ..., v_k} = {c_1 v_1 + ... + c_k v_k | c_1, ..., c_k ∈ R}

Definition: For a vector space V, if V = Span{v_1, ..., v_k}, then {v_1, ..., v_k} is a spanning set of V.

Theorem: If v_1, ..., v_k are vectors in a vector space V, then Span{v_1, ..., v_k} is a subspace of V.

Definition: If A is an m × n matrix, then the null space of A, denoted by N(A) or Nul A, is the set of vectors {x ∈ R^n | Ax = 0}. N(A) is a subspace of R^n.

Definition: If A is an m × n matrix, then the column space of A, denoted by Col A, is the set of vectors which are linear combinations of the columns of A. So, if A = (v_1 v_2 ... v_n), then

Col A = {c_1 v_1 + c_2 v_2 + ... + c_n v_n | c_1, c_2, ..., c_n ∈ R}
      = Span{v_1, v_2, ..., v_n}
      = {Ax | x ∈ R^n}
      = {b ∈ R^m | b = Ax for some x ∈ R^n}
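These definitions can be checked numerically. A minimal sketch using numpy (the matrix A below is an illustrative example, not taken from the notes):

```python
import numpy as np

# Illustrative 2x3 matrix A (its second row is twice its first).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# A vector in N(A): any x with Ax = 0, e.g. x = (1, 1, -1)^T.
x = np.array([1.0, 1.0, -1.0])
print(np.allclose(A @ x, 0))   # True: x is in the null space

# A vector in Col A: b = Ac is, by definition, the linear
# combination c_1 v_1 + c_2 v_2 + c_3 v_3 of the columns of A.
c = np.array([1.0, 0.0, 2.0])
b = A @ c
print(np.allclose(b, 1*A[:, 0] + 0*A[:, 1] + 2*A[:, 2]))   # True
```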
Recall that if A = (v_1 v_2 ... v_n), then Col A is a subspace of R^m, since

A (c_1, c_2, ..., c_n)^T = c_1 v_1 + c_2 v_2 + ... + c_n v_n.

Definition: A map T from the vector space V to the vector space W is a linear transformation if it satisfies the properties

1. T(u + v) = T(u) + T(v)
2. T(cu) = c T(u)

for any vectors u, v ∈ V and any scalar c.

An equivalent definition: For any vectors u, v ∈ V and any scalars c, d:

T(cu + dv) = c T(u) + d T(v).

Another equivalent definition: For any vectors u_1, u_2, ..., u_k ∈ V and any scalars c_1, c_2, ..., c_k:

T(c_1 u_1 + c_2 u_2 + ... + c_k u_k) = c_1 T(u_1) + c_2 T(u_2) + ... + c_k T(u_k).

Definition: The kernel or null space of a linear transformation T : V → W is

Ker T = {x ∈ V | T(x) = 0}.

Ker T is a subspace of V.

Definition: The range of a linear transformation T : V → W is

Range T = {w ∈ W | w = T(x) for some x ∈ V}.

Range T is a subspace of W.

Theorem: If T : R^n → R^m is the linear transformation determined by the m × n matrix A via T(x) = Ax, then Ker T = N(A) and Range T = Col A.

Definition: {v_1, v_2, ..., v_k} is a linearly independent set if the only scalars c_1, c_2, ..., c_k for which

c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0

are c_1 = c_2 = ... = c_k = 0. The set is linearly dependent otherwise.

Theorem: If {v_1, v_2, ..., v_k} is a linearly independent set, then for any vector v ∈ Span{v_1, v_2, ..., v_k}, there is only one way to write v as a linear combination of v_1, v_2, ..., v_k.
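For vectors in R^n, linear independence can be tested numerically: {v_1, ..., v_k} is independent exactly when the matrix with those columns has rank k (so c = 0 is the only solution of c_1 v_1 + ... + c_k v_k = 0). A sketch, with illustrative vectors:

```python
import numpy as np

def is_independent(vectors):
    """v_1, ..., v_k are linearly independent iff the matrix
    (v_1 ... v_k) has rank k."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
print(is_independent([v1, v2]))            # True
print(is_independent([v1, v2, v1 + v2]))   # False: v3 = v1 + v2
```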
Theorem: A set {v_1, v_2, ..., v_k} is linearly dependent if and only if some v_j with j > 1 is a linear combination of the preceding vectors v_1, ..., v_{j-1}.

Definition: A basis for a vector space V is a set {v_1, v_2, ..., v_k} that:

1. is linearly independent
2. is a spanning set for V, i.e. Span{v_1, v_2, ..., v_k} = V.

Theorem (Spanning Set Theorem): Let S = {v_1, ..., v_k} be a set in V and let H = Span S.

1. If one of the vectors in S, say v_j, is a linear combination of the remaining vectors in S, then the span of S with v_j removed is still H.
2. If H ≠ {0}, some subset of S is a basis for H.

Theorem: A basis for Col A is the set of pivot columns of A.

Definition: The coordinate vector of a vector v relative to a basis B = {v_1, v_2, ..., v_k} is

(v)_B = (c_1, c_2, ..., c_k)^T ∈ R^k, where v = c_1 v_1 + c_2 v_2 + ... + c_k v_k.

(v)_B is also called the B-coordinate vector of v. The map sending v to (v)_B is the coordinate mapping determined by B.

Theorem: The coordinate mapping is linear.

Definition: If B = {v_1, v_2, ..., v_n} is a basis for R^n, then the n × n matrix

P_B = (v_1 v_2 ... v_n)

is the change of basis matrix from B to the standard basis.

Theorem: v = P_B (v)_B and (v)_B = P_B^{-1} v.

Theorem: Let B be a basis for V and {u_1, ..., u_k} be vectors in V. {u_1, ..., u_k} is linearly independent (in V) if and only if the set of coordinate vectors {(u_1)_B, ..., (u_k)_B} is linearly independent (in R^n, where n is the size of B).

Theorem: If V has a basis B = {b_1, ..., b_n}, then any set in V of size greater than n is linearly dependent.

Theorem: If V has a basis B of size n, then any basis of V has size n.

Definition: The dimension of a vector space V is the size of a basis for V.

Theorem (The Basis Theorem): Let V be an n-dimensional vector space. Any linearly independent set of n vectors in V is a basis for V.
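Finding a coordinate vector in R^n amounts to solving P_B (v)_B = v, i.e. computing (v)_B = P_B^{-1} v. A sketch with an illustrative basis of R^2 (not taken from the notes):

```python
import numpy as np

# Illustrative basis B = {v1, v2} of R^2.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
P_B = np.column_stack([v1, v2])   # change of basis matrix from B

v = np.array([3.0, 1.0])
# (v)_B solves P_B (v)_B = v; solving is preferred to forming P_B^{-1}.
v_B = np.linalg.solve(P_B, v)
print(v_B)                          # [2. 1.], so v = 2*v1 + 1*v2
print(np.allclose(P_B @ v_B, v))    # True: v = P_B (v)_B
```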
Theorem: For a matrix A, dim N(A) = # free variables.

Theorem: For a matrix A, dim Col A = # pivot variables, since the pivot columns give a basis for Col A.

Definition: dim{0} = 0.

Definition: The row space of an m × n matrix A, denoted by Row A, is the set of linear combinations of the rows of A. If A has rows r_1, ..., r_m, then

Row A = {c_1 r_1 + ... + c_m r_m | c_1, ..., c_m ∈ R}
      = {(c_1 ... c_m) A | c_1, ..., c_m ∈ R}
      = {x^T A | x ∈ R^m}

Theorem: If A and B are row equivalent, i.e. may be obtained from one another via elementary row operations, then Row A = Row B.

Theorem: The pivot rows of A in row echelon form form a basis for Row A.

Definition: The rank of an m × n matrix A is the dimension of Col A. We denote the rank of A by Rank A.

Theorem (The Rank Theorem): Let A be an m × n matrix. Then:

1. Rank A = dim Col A = dim Row A = # pivots of A
2. Rank A + dim N(A) = n. (Follows from # pivot variables + # free variables = # variables = # columns = n.)

Theorem (The Invertible Matrix Theorem): Let A be an n × n matrix. Then A being invertible is equivalent to any of the following:

1. The columns of A form a basis of R^n
2. Col A = R^n
3. dim Col A = n
4. Rank A = n
5. N(A) = {0}
6. dim N(A) = 0
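The Rank Theorem can be verified numerically: Rank A via `matrix_rank`, and dim N(A) as the number of (near-)zero singular values of A, which matches the number of free variables. The matrix below is an illustrative example:

```python
import numpy as np

# Illustrative 3x4 matrix; its third row is the sum of the first two.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0, 2.0]])

n = A.shape[1]                       # number of columns (variables)
rank = np.linalg.matrix_rank(A)

# dim N(A) = n minus the number of nonzero singular values.
s = np.linalg.svd(A, compute_uv=False)
dim_null = n - int(np.sum(s > 1e-10))

print(rank)                  # 2
print(dim_null)              # 2
print(rank + dim_null == n)  # True: the Rank Theorem
```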
Definition: If B = {v_1, v_2, ..., v_n} and C are different bases for a vector space V, the n × n matrix

P_{C←B} = ((v_1)_C (v_2)_C ... (v_n)_C)

is the change of coordinates matrix from B to C.

Theorem: (x)_C = P_{C←B} (x)_B

Theorem: P_{B←C} = (P_{C←B})^{-1}

Theorem: P_{D←B} = P_{D←C} P_{C←B}

Theorem: If B = {b_1, b_2, ..., b_n} and C = {c_1, c_2, ..., c_n} are bases for R^n, then row reduction gives

(c_1 c_2 ... c_n | b_1 b_2 ... b_n) ~ (I | P_{C←B}).

Definition: An eigenvector of an n × n matrix A is a non-zero vector v such that Av = λv for some scalar λ. λ is called an eigenvalue of A, and v is an eigenvector corresponding to λ.

Theorem: N(A − λI) \ {0} is the set of eigenvectors of A with eigenvalue λ.

Definition (equivalent definition): λ is an eigenvalue of A if N(A − λI) is non-trivial.

Definition: N(A − λI) = {eigenvectors of A with eigenvalue λ} ∪ {0} is the eigenspace of A corresponding to λ.

Theorem: The eigenvalues of a diagonal matrix are its diagonal entries.

Theorem: The eigenvalues of a triangular matrix are its diagonal entries.

Theorem: λ is an eigenvalue of A ⟺ N(A − λI) is non-trivial ⟺ A − λI is not invertible ⟺ det(A − λI) = 0.

Definition: For an n × n matrix A, det(A − λI) is a polynomial of degree n in the variable λ. It is called the characteristic polynomial of A. We may write p_A(λ) for det(A − λI). det(A − λI) = 0 is the characteristic equation.

Theorem: The eigenvalues of A are the roots of the characteristic polynomial of A, i.e. they satisfy the characteristic equation.

Definition: If A, B are two n × n matrices, then A is similar to B if there is an invertible matrix P such that P^{-1} A P = B. If A is similar to B, then B is similar to A, so we may say: A and B are similar.

Theorem: If A and B are similar, then their characteristic polynomials are the same.

Theorem: If v_1, v_2, ..., v_k are eigenvectors of a matrix A with corresponding eigenvalues λ_1, λ_2, ..., λ_k all distinct, then {v_1, v_2, ..., v_k} is a linearly independent set.
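A quick numerical check of these facts, using an illustrative triangular matrix (so, by the theorem above, its eigenvalues should be its diagonal entries):

```python
import numpy as np

# Illustrative upper triangular matrix with diagonal entries 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.allclose(sorted(eigvals), [2.0, 3.0]))   # True: diagonal entries

# Each column v of eigvecs satisfies Av = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))            # True (each line)

# Each eigenvalue satisfies the characteristic equation det(A - lambda I) = 0.
dets = [np.linalg.det(A - lam * np.eye(2)) for lam in eigvals]
print(np.allclose(dets, 0))                       # True
```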
Definition: A matrix A is diagonalizable if there is some invertible matrix P and a diagonal matrix D such that A = P D P^{-1}.

Theorem (The Diagonalization Theorem): An n × n matrix A is diagonalizable, A = P D P^{-1}, if and only if there exists a basis of R^n (namely the columns of P) consisting of eigenvectors of A, the eigenvalues being given by the corresponding diagonal entries of D.

Theorem: If A = P D P^{-1}, then A^k = P D^k P^{-1}.

Theorem: Let A be an n × n matrix with k distinct eigenvalues λ_1, ..., λ_k. Then:

1. The dimension of the eigenspace corresponding to each λ_j is less than or equal to the multiplicity of λ_j as a root of the characteristic polynomial of A.
2. A is diagonalizable if and only if for each j, the dimension of the eigenspace corresponding to λ_j is equal to the multiplicity of λ_j as a root of the characteristic polynomial of A. In this case, the sum of the dimensions of the eigenspaces is n and we have enough linearly independent eigenvectors to diagonalize A.
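The Diagonalization Theorem and the power formula A^k = P D^k P^{-1} can be checked numerically. A sketch, assuming an illustrative symmetric matrix chosen to be diagonalizable:

```python
import numpy as np

# Illustrative symmetric (hence diagonalizable) matrix.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

eigvals, P = np.linalg.eig(A)   # columns of P: an eigenvector basis of R^2
D = np.diag(eigvals)

# A = P D P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True

# A^k = P D^k P^{-1}: powers become cheap once A is diagonalized,
# since D^k just raises each diagonal entry to the k-th power.
k = 5
Ak = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
print(np.allclose(np.linalg.matrix_power(A, k), Ak))   # True
```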