
Definition: A vector space V is a non-empty set of objects, called vectors, on which the operations of addition and scalar multiplication have been defined. The operations are subject to ten axioms. For any u, v ∈ V and any scalar c:

1. u + v ∈ V
4. There is a zero vector 0 in V such that u + 0 = u
6. The scalar multiple of u by c, denoted by cu, is in V

The remaining seven axioms may be found in the textbook in Chapter 4.1. I will not expect you to memorize them.

Definition: W is a vector subspace of the vector space V if W is a subset of V that is itself a vector space. One only needs to check axioms 1, 4, and 6.

Definition: A subspace W of the vector space V is a proper subspace if W ≠ V.

Definition: A linear combination of the vectors v_1, v_2, ..., v_k is any vector of the form

c_1 v_1 + c_2 v_2 + ... + c_k v_k

where c_1, c_2, ..., c_k are scalars.

Definition: The span of the vectors v_1, ..., v_k is

Span{v_1, ..., v_k} = {all linear combinations of v_1, ..., v_k}
                    = {c_1 v_1 + ... + c_k v_k : c_1, ..., c_k ∈ R}.

Definition: For a vector space V, if V = Span{v_1, ..., v_k}, then {v_1, ..., v_k} is a spanning set of V.

Theorem: If v_1, ..., v_k are vectors in a vector space V, then Span{v_1, ..., v_k} is a subspace of V.

Definition: If A is an m × n matrix, then the null space of A, denoted by N(A) or Nul A, is the set of vectors {x ∈ R^n : Ax = 0}. N(A) is a subspace of R^n.

Definition: If A is an m × n matrix, then the column space of A, denoted by Col A, is the set of vectors which are linear combinations of the columns of A. So, if A = (v_1 v_2 ... v_n), then

Col A = {c_1 v_1 + c_2 v_2 + ... + c_n v_n : c_1, c_2, ..., c_n ∈ R}
      = Span{v_1, v_2, ..., v_n}
      = {Ax : x ∈ R^n}
      = {b ∈ R^m : b = Ax for some x ∈ R^n}.
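As a quick numerical illustration of null spaces and column spaces (the matrix A and vector b below are made up for this sketch, not taken from the notes): the following NumPy/SciPy snippet computes a basis for N(A) and tests whether b lies in Col A by checking that Ax = b is consistent.

    import numpy as np
    from scipy.linalg import null_space

    # A made-up 3x4 matrix; its third row is the sum of the first two.
    A = np.array([[1., 2., 0., 1.],
                  [0., 1., 1., 0.],
                  [1., 3., 1., 1.]])

    # Columns of N form an orthonormal basis for N(A), so A @ N ~ 0.
    N = null_space(A)
    print("dim N(A) =", N.shape[1])                  # 2
    print("A @ N ~ 0:", np.allclose(A @ N, 0))       # True

    # b is in Col A iff Ax = b has a solution; test via least squares.
    b = A @ np.array([1., -1., 2., 0.])              # built to lie in Col A
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    print("b in Col A:", np.allclose(A @ x, b))      # True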

Recall that if A = (v_1 v_2 ... v_n), then Col A is a subspace of R^m, and

A (c_1, c_2, ..., c_n)^T = c_1 v_1 + c_2 v_2 + ... + c_n v_n.

Definition: A map T from the vector space V to the vector space W is a linear transformation if it satisfies the properties

1. T(u + v) = T(u) + T(v)
2. T(cu) = c T(u)

for any vectors u, v ∈ V and any scalar c.

An equivalent definition: For any vectors u, v ∈ V and any scalars c, d:

T(cu + dv) = c T(u) + d T(v).

Another equivalent definition: For any vectors u_1, u_2, ..., u_k ∈ V and any scalars c_1, c_2, ..., c_k:

T(c_1 u_1 + c_2 u_2 + ... + c_k u_k) = c_1 T(u_1) + c_2 T(u_2) + ... + c_k T(u_k).

Definition: The kernel or null space of a linear transformation T : V → W is

Ker T = {x ∈ V : T(x) = 0}.

Ker T is a subspace of V.

Definition: The range of a linear transformation T : V → W is

Range T = {w ∈ W : w = T(x) for some x ∈ V}.

Range T is a subspace of W.

Theorem: If T : R^n → R^m is the linear transformation determined by the m × n matrix A via T(x) = Ax, then Ker T = N(A) and Range T = Col A.

Definition: {v_1, v_2, ..., v_k} is a linearly independent set if the only scalars c_1, c_2, ..., c_k for which

c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0

are c_1 = c_2 = ... = c_k = 0. The set is linearly dependent otherwise.

Theorem: If {v_1, v_2, ..., v_k} is a linearly independent set, then for any vector v ∈ Span{v_1, v_2, ..., v_k}, there is only one way to write v as a linear combination of v_1, v_2, ..., v_k.
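Linear independence of vectors in R^n can be tested numerically: stack the vectors as the columns of a matrix and compare its rank with the number of vectors. A minimal NumPy sketch, with made-up vectors:

    import numpy as np

    # Made-up vectors in R^3, stacked as columns; v3 = v1 + v2 by design.
    v1 = np.array([1., 0., 1.])
    v2 = np.array([0., 1., 1.])
    v3 = np.array([1., 1., 2.])
    A = np.column_stack([v1, v2, v3])

    # Independent iff rank equals the number of vectors.
    rank = np.linalg.matrix_rank(A)
    print("linearly independent:", rank == A.shape[1])   # False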

Theorem: A set {v_1, v_2, ..., v_k} with v_1 ≠ 0 is linearly dependent if and only if some v_j with j > 1 is a linear combination of the preceding vectors v_1, ..., v_{j-1}.

Definition: A basis for a vector space V is a set {v_1, v_2, ..., v_k} that:

1. is linearly independent, and
2. is a spanning set for V, i.e. Span{v_1, v_2, ..., v_k} = V.

Theorem: (Spanning Set Theorem) Let S = {v_1, ..., v_k} be a set in V and let H = Span S.

1. If one of the vectors in S, say v_j, is a linear combination of the remaining vectors in S, then the span of S with v_j removed is still H.
2. If H ≠ {0}, some subset of S is a basis for H.

Theorem: The pivot columns of A form a basis for Col A.

Definition: The coordinate vector of a vector v relative to a basis B = {v_1, v_2, ..., v_k} is

(v)_B = (c_1, c_2, ..., c_k)^T ∈ R^k, where v = c_1 v_1 + c_2 v_2 + ... + c_k v_k.

(v)_B is also called the B-coordinate vector of v. The map sending v to (v)_B is the coordinate mapping determined by B.

Theorem: The coordinate mapping is linear.

Definition: If B = {v_1, v_2, ..., v_n} is a basis for R^n, then the n × n matrix

P_B = (v_1 v_2 ... v_n)

is the change of basis matrix from B to the standard basis.

Theorem: v = P_B (v)_B and (v)_B = P_B^{-1} v.

Theorem: Let B be a basis for V of size n, and let {u_1, ..., u_k} be vectors in V. Then {u_1, ..., u_k} is linearly independent (in V) if and only if the set of coordinate vectors {(u_1)_B, ..., (u_k)_B} is linearly independent (in R^n).

Theorem: If V has a basis B = {b_1, ..., b_n}, then any set in V of size greater than n is linearly dependent.

Theorem: If V has a basis B of size n, then any basis of V has size n.

Definition: The dimension of a vector space V is the size of a basis for V.

Theorem: (The Basis Theorem) Let V be an n-dimensional vector space. Any linearly independent set of n vectors in V is a basis for V.
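Coordinates relative to a basis are found by solving a linear system, not by forming P_B^{-1} explicitly. A small NumPy sketch with a made-up basis B of R^2:

    import numpy as np

    # Made-up basis B = {(1,1), (1,-1)} of R^2, as columns of P_B.
    P_B = np.column_stack([np.array([1., 1.]), np.array([1., -1.])])

    v = np.array([3., 1.])
    v_B = np.linalg.solve(P_B, v)        # (v)_B = P_B^{-1} v
    print("(v)_B =", v_B)                # [2. 1.], since v = 2(1,1) + 1(1,-1)
    print(np.allclose(P_B @ v_B, v))     # True: v = P_B (v)_B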

Theorem: For a matrix A, dim N(A) = # free variables.

Theorem: For a matrix A, dim Col A = # pivot variables, since the pivot columns give a basis for Col A.

Definition: dim{0} = 0.

Definition: The row space of an m × n matrix A, denoted by Row A, is the set of linear combinations of the rows of A. If A has rows r_1, ..., r_m, then

Row A = {c_1 r_1 + ... + c_m r_m : c_1, ..., c_m ∈ R}
      = {(c_1 ... c_m) A : c_1, ..., c_m ∈ R}
      = {x^T A : x ∈ R^m}.

Theorem: If A and B are row equivalent, i.e. may be obtained from one another via elementary row operations, then Row A = Row B.

Theorem: The pivot (non-zero) rows of a row echelon form of A form a basis for Row A.

Definition: The rank of an m × n matrix A is the dimension of Col A. We denote the rank of A by Rank A.

Theorem: (The Rank Theorem) Let A be an m × n matrix. Then:

1. Rank A = dim Col A = dim Row A = # pivots of A
2. Rank A + dim N(A) = n (This follows from # pivot variables + # free variables = # variables = # columns = n.)

Theorem: (The Invertible Matrix Theorem) Let A be an n × n matrix. Then A being invertible is equivalent to each of the following:

1. The columns of A form a basis of R^n
2. Col A = R^n
3. dim Col A = n
4. Rank A = n
5. N(A) = {0}
6. dim N(A) = 0
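The Rank Theorem is easy to check numerically. A short NumPy/SciPy sketch with a made-up 3 × 5 matrix whose third row is the sum of the first two:

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1., 2., 3., 0., 1.],
                  [0., 1., 1., 1., 0.],
                  [1., 3., 4., 1., 1.]])      # row 3 = row 1 + row 2

    rank = np.linalg.matrix_rank(A)
    nullity = null_space(A).shape[1]
    n = A.shape[1]
    print(rank, nullity, rank + nullity == n)    # 2 3 True

    # dim Row A = dim Col A: the rank of A^T equals the rank of A.
    print(np.linalg.matrix_rank(A.T) == rank)    # True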

Definition: If B = {v_1, v_2, ..., v_n} and C are different bases for a vector space V, the n × n matrix

P_{C←B} = ((v_1)_C (v_2)_C ... (v_n)_C)

is the change of coordinates matrix from B to C.

Theorem: (x)_C = P_{C←B} (x)_B

Theorem: P_{B←C} = P_{C←B}^{-1}

Theorem: P_{D←B} = P_{D←C} P_{C←B}

Theorem: If B = {b_1, b_2, ..., b_n} and C = {c_1, c_2, ..., c_n} are bases for R^n, then row reduction carries

(c_1 c_2 ... c_n | b_1 b_2 ... b_n) to (I | P_{C←B}).

Definition: An eigenvector of an n × n matrix A is a non-zero vector v such that Av = λv for some scalar λ. λ is called an eigenvalue of A, and v is an eigenvector corresponding to λ.

Theorem: N(A − λI) \ {0} is the set of eigenvectors of A of eigenvalue λ.

Definition: (Equivalent definition.) λ is an eigenvalue of A if N(A − λI) is non-trivial.

Definition: N(A − λI) = {eigenvectors of A of eigenvalue λ} ∪ {0} is the eigenspace of A corresponding to λ.

Theorem: The eigenvalues of a diagonal matrix are its diagonal entries.

Theorem: The eigenvalues of a triangular matrix are its diagonal entries.

Theorem: λ is an eigenvalue of A ⟺ N(A − λI) is non-trivial ⟺ A − λI is not invertible ⟺ det(A − λI) = 0.

Definition: For an n × n matrix A, det(A − λI) is a polynomial of degree n in the variable λ. It is called the characteristic polynomial of A. We may write p_A(λ) for det(A − λI). det(A − λI) = 0 is the characteristic equation.

Theorem: The eigenvalues of A are the roots of the characteristic polynomial of A, i.e. they satisfy the characteristic equation.

Definition: If A and B are two n × n matrices, then A is similar to B if there is an invertible matrix P such that P^{-1} A P = B. If A is similar to B, then B is similar to A, so we may simply say that A and B are similar.

Theorem: If A and B are similar, then their characteristic polynomials are the same.

Theorem: If v_1, v_2, ..., v_k are eigenvectors of a matrix A with corresponding eigenvalues λ_1, λ_2, ..., λ_k all distinct, then {v_1, v_2, ..., v_k} is a linearly independent set.
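Row reducing (C | B) to (I | P_{C←B}) is the same as solving the matrix equation C P = B, so the change of coordinates matrix can be computed with a single solve. A NumPy sketch using made-up bases B and C of R^2:

    import numpy as np

    # Made-up bases, stored as columns of matrices.
    B = np.column_stack([np.array([1., 0.]), np.array([1., 1.])])
    C = np.column_stack([np.array([2., 0.]), np.array([0., 1.])])

    # Row reducing (C | B) to (I | P_{C<-B}) amounts to solving C P = B.
    P_CB = np.linalg.solve(C, B)

    x = np.array([3., -1.])              # a vector in standard coordinates
    x_B = np.linalg.solve(B, x)          # (x)_B
    x_C = np.linalg.solve(C, x)          # (x)_C
    print(np.allclose(P_CB @ x_B, x_C))  # True: (x)_C = P_{C<-B} (x)_B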

Definition: A matrix A is diagonalizable if there is some invertible matrix P and a diagonal matrix D such that A = P D P^{-1}.

Theorem: (The Diagonalization Theorem) An n × n matrix A is diagonalizable, A = P D P^{-1}, if and only if there exists a basis of R^n (namely the columns of P) consisting of eigenvectors of A, the eigenvalues being given by the corresponding entries of D.

Theorem: If A = P D P^{-1}, then A^k = P D^k P^{-1}.

Theorem: Let A be an n × n matrix with k distinct eigenvalues λ_1, ..., λ_k. Then:

1. The dimension of the eigenspace corresponding to each λ_j is less than or equal to the multiplicity of λ_j as a root of the characteristic polynomial of A.
2. A is diagonalizable if and only if, for each j, the dimension of the eigenspace corresponding to λ_j is equal to the multiplicity of λ_j as a root of the characteristic polynomial of A. In this case, the sum of the dimensions of the eigenspaces is n, and we have enough linearly independent eigenvectors to diagonalize A.
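Diagonalization, and the fast matrix powers it buys, can also be checked numerically. A NumPy sketch with a made-up 2 × 2 matrix (chosen symmetric, so a full eigenbasis is guaranteed):

    import numpy as np

    A = np.array([[4., 1.],
                  [1., 4.]])             # eigenvalues 5 and 3

    evals, P = np.linalg.eig(A)          # columns of P are eigenvectors
    D = np.diag(evals)

    print(np.allclose(A, P @ D @ np.linalg.inv(P)))        # A = P D P^{-1}

    # A^5 = P D^5 P^{-1}: powering D means powering its diagonal entries.
    A5 = P @ np.diag(evals**5) @ np.linalg.inv(P)
    print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True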