Linear Algebra Review
By Tim K. Marks, UCSD
Borrows heavily from: Jana Kosecka (kosecka@cs.gmu.edu, http://cs.gmu.edu/~kosecka/cs682.html) and Virginia de Sa (Cogsci 8F Linear Algebra review, UCSD)

Vectors
The length of x, a.k.a. the norm or 2-norm of x, is
||x|| = sqrt(x_1^2 + x_2^2 + ... + x_n^2)
e.g., ||x|| = sqrt(3^2 + 2^2 + 5^2) = sqrt(38)
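The slides use Matlab; as a runnable sketch of the norm computation above, here is the same calculation in Python with NumPy (NumPy is my substitution, not part of the original slides):

```python
import numpy as np

x = np.array([3.0, 2.0, 5.0])

# 2-norm from the definition: sqrt of the sum of squared components
norm = np.sqrt(np.sum(x ** 2))      # sqrt(3^2 + 2^2 + 5^2) = sqrt(38)

print(norm)
print(np.linalg.norm(x))            # built-in equivalent, same value
```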
Good Review Materials
http://www.imageprocessingbook.com/dip2e/dip2e_downloads/review_material_downloads.htm
(Gonzalez & Woods review materials)
Chapt. 1: Linear Algebra Review
Chapt. 2: Probability, Random Variables, Random Vectors
Online vector addition demo: http://www.pa.uky.edu/~phy2/vecarith/index.html
Vector Addition: u + v (diagram: head-to-tail / parallelogram construction)
Vector Subtraction: u - v (diagram)
Example (on board)
Inner product (dot product) of two vectors:
a = [6; 2; -3]    b = [4; 1; 5]
a · b = a^T b = [6 2 -3][4; 1; 5] = 6·4 + 2·1 + (-3)·5 = 11
Inner (dot) Product
u · v = ||u|| ||v|| cos α, where α is the angle between u and v.
The inner product is a SCALAR.
u · v = 0 if and only if u and v are orthogonal (perpendicular).
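A short NumPy sketch of the dot product and the angle relation (NumPy rather than the slides' Matlab; the vectors follow the worked example above, as reconstructed):

```python
import numpy as np

a = np.array([6.0, 2.0, -3.0])
b = np.array([4.0, 1.0, 5.0])

dot = a @ b                 # 6*4 + 2*1 + (-3)*5 = 11 -- a scalar
cos_alpha = dot / (np.linalg.norm(a) * np.linalg.norm(b))

print(dot)
print(cos_alpha)            # cosine of the angle between a and b
```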
Transpose of a Matrix
The transpose A^T of a matrix A exchanges rows and columns: (A^T)_ij = A_ji.
If A^T = A, we say A is symmetric.
Example of a symmetric matrix: [1 2; 2 3] (it equals its own transpose).
Matrix Product
Product: C = AB, where c_ij = Σ_k a_ik b_kj
A and B must have compatible dimensions: an (m×n) matrix times an (n×p) matrix gives an (m×p) matrix.
In Matlab: >> A*B
Matrix multiplication is not commutative: in general, AB ≠ BA.
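The non-commutativity is easy to see with a small NumPy example (NumPy stands in for the slides' Matlab; the matrices are my own illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])     # permutation matrix: swaps the two coordinates

AB = A @ B                 # right-multiplying by B swaps A's columns
BA = B @ A                 # left-multiplying by B swaps A's rows

print(AB)
print(BA)                  # different result: AB != BA
```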
Matrix Sum
Sum: C = A + B, where c_ij = a_ij + b_ij
A and B must have the same dimensions.

Determinant of a Matrix
Determinant: det(A), also written |A|. A must be square.
Example: for a 2×2 matrix, det [a b; c d] = ad - bc
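A quick check of the 2×2 determinant formula in NumPy (my stand-in for the slides' Matlab; the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

det = np.linalg.det(A)     # ad - bc = 3*4 - 1*2 = 10
print(det)
```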
Determinant in Matlab: >> det(A)

Inverse of a Matrix
If A is a square matrix, the inverse of A, called A^-1, satisfies
AA^-1 = I and A^-1 A = I,
where I, the identity matrix, is a diagonal matrix with all 1s on the diagonal.
I_2 = [1 0; 0 1]    I_3 = [1 0 0; 0 1 0; 0 0 1]
Inverse of a 2D Matrix
For a 2×2 matrix:
[a b; c d]^-1 = 1/(ad - bc) [d -b; -c a]   (defined only when ad - bc ≠ 0)
Inverses in Matlab: >> inv(A)
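A sketch of the 2×2 closed-form inverse, checked against the defining property AA^-1 = I (NumPy in place of Matlab; the matrix is my own example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

# Closed-form 2x2 inverse: (1 / (ad - bc)) * [[d, -b], [-c, a]]
a, b, c, d = A.ravel()
A_inv = (1.0 / (a * d - b * c)) * np.array([[ d, -b],
                                            [-c,  a]])

print(A @ A_inv)           # ~ identity matrix
print(np.linalg.inv(A))    # NumPy's equivalent of Matlab's inv(A)
```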
Other Terms
Matrix Transformation: Scale
A square diagonal matrix scales each dimension by the corresponding diagonal element.
Example:
[2 0; 0 .5] [4; 6] = [8; 3]
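The scaling example can be checked in NumPy (my stand-in for the slides' Matlab; the numbers follow the example above as reconstructed from the damaged slide):

```python
import numpy as np

S = np.diag([2.0, 0.5])    # scales the first coordinate by 2, the second by 0.5
v = np.array([4.0, 6.0])

print(S @ v)               # each dimension scaled by its diagonal element
```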
Eigenvalues and Eigenvectors
A nonzero vector e is an eigenvector of a square matrix A, with eigenvalue λ, if Ae = λe (A maps e to a scaled copy of itself).
Interactive eigenvector demo: http://www.math.ubc.ca/~cass/courses/m39-8a/java/m39gfx/eigen.html
Some Properties of Eigenvalues and Eigenvectors
If λ_1, ..., λ_n are distinct eigenvalues of a matrix, then the corresponding eigenvectors e_1, ..., e_n are linearly independent.
A real, symmetric square matrix has real eigenvalues, with eigenvectors that can be chosen to be orthonormal.

Linear Independence
A set of vectors is linearly dependent if one of the vectors can be expressed as a linear combination of the other vectors.
Example: [1; 0], [0; 1], [2; 1] (the third is 2 times the first plus the second)
A set of vectors is linearly independent if none of the vectors can be expressed as a linear combination of the other vectors.
Example: [1; 0; 0], [0; 1; 0], [0; 0; 1]
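The symmetric-matrix property can be demonstrated in NumPy (not part of the original slides; the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # real and symmetric

# eigh is specialized for symmetric (Hermitian) matrices:
# it returns real eigenvalues (ascending) and orthonormal eigenvectors
# as the columns of V
vals, V = np.linalg.eigh(A)

print(vals)                      # real eigenvalues: 1 and 3
print(V.T @ V)                   # ~ identity: the eigenvectors are orthonormal
```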
Rank of a Matrix
The rank of a matrix is the number of linearly independent columns of the matrix.
Examples:
[1 0 2; 0 1 1; 0 0 0] has rank 2
[1 0 0; 0 1 0; 0 0 2] has rank 3
Note: the rank of a matrix is also the number of linearly independent rows of the matrix.

Singular Matrix
All of the following conditions are equivalent. We say a square (n × n) matrix is singular if any one of these conditions (and hence all of them) is satisfied:
The columns are linearly dependent
The rows are linearly dependent
The determinant = 0
The matrix is not invertible
The matrix is not full rank (i.e., rank < n)
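A NumPy sketch tying the singularity conditions together on one matrix (NumPy instead of Matlab; the matrix is my own rank-2 example):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])   # third column = 2*(first) + (second)

print(np.linalg.matrix_rank(A))   # 2: less than n = 3, so not full rank
print(np.linalg.det(A))           # 0: a singular matrix has zero determinant
```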
Linear Spaces
A linear space is the set of all vectors that can be expressed as a linear combination of a set of basis vectors. We say this space is the span of the basis vectors.
Example: R^3, 3-dimensional Euclidean space, is spanned by each of the following two bases:
[1; 0; 0], [0; 1; 0], [0; 0; 1]    and    [1; 0; 0], [1; 1; 0], [1; 1; 2]

Linear Subspaces
A linear subspace is the space spanned by a subset of the vectors in a linear space.
The space spanned by the following vectors is a two-dimensional subspace of R^3:
[1; 0; 0], [0; 1; 0]    What does it look like? (The x-y plane.)
The space spanned by the following vectors is also a two-dimensional subspace of R^3:
[1; 1; 0], [0; 1; 1]    What does it look like? (A plane through the origin.)
Orthogonal and Orthonormal Bases
n linearly independent real vectors span R^n, n-dimensional Euclidean space. They form a basis for the space.
An orthogonal basis a_1, ..., a_n satisfies
a_i · a_j = 0 if i ≠ j
An orthonormal basis a_1, ..., a_n satisfies
a_i · a_j = 0 if i ≠ j
a_i · a_j = 1 if i = j

Orthonormal Matrices
A square matrix is orthonormal (also called unitary) if its columns are orthonormal vectors.
A matrix A is orthonormal iff AA^T = I.
If A is orthonormal, A^-1 = A^T, so AA^T = A^T A = I.
A rotation matrix is an orthonormal matrix with determinant = 1.
It is also possible for an orthonormal matrix to have determinant = -1. This is a rotation plus a flip (reflection).
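A rotation matrix makes a convenient concrete check of these properties (NumPy sketch, not from the original slides; the 30-degree angle is arbitrary):

```python
import numpy as np

theta = np.pi / 6                           # a 30-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(R @ R.T, np.eye(2)))      # True: columns are orthonormal
print(np.allclose(R.T, np.linalg.inv(R)))   # True: R^-1 = R^T
print(np.linalg.det(R))                     # ~1.0 for a pure rotation (no flip)
```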
SVD: Singular Value Decomposition
Any matrix A (m × n) can be written as the product of three matrices:
A = U D V^T
where
U is an m × m orthonormal matrix
D is an m × n diagonal matrix. Its diagonal elements σ_1, σ_2, ..., called the singular values of A, satisfy σ_1 ≥ σ_2 ≥ ... ≥ 0.
V is an n × n orthonormal matrix
(Schematic for m > n: A = U D V^T, with the columns of U written u_1, ..., u_m and the rows of V^T written v_1^T, ..., v_n^T.)

SVD in Matlab:
>> [u,s,v] = svd(x)
Matlab returns the three factors u, s, v, with the singular values on the diagonal of s in decreasing order. (The numeric example from the original slide is elided here; its digits did not survive extraction.)
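A small runnable SVD example in NumPy (standing in for the slides' Matlab example; the 3×2 matrix is my own, chosen so that m > n as in the schematic above):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])              # m = 3, n = 2

# full_matrices=True mirrors Matlab's svd: U is m x m, Vt is n x n
U, s, Vt = np.linalg.svd(A, full_matrices=True)

print(s)                                # singular values, in decreasing order

# Rebuild the m x n diagonal matrix D and verify A = U D V^T
D = np.zeros((3, 2))
D[:2, :2] = np.diag(s)
print(np.allclose(U @ D @ Vt, A))       # True: the factorization reproduces A
```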
Some Properties of SVD
The rank of matrix A is equal to the number of nonzero singular values σ_i.
A square (n × n) matrix A is singular iff at least one of its singular values σ_1, ..., σ_n is zero.

Geometric Interpretation of SVD
If A is a square (n × n) matrix, A = U D V^T, where
U is a unitary matrix: rotation (possibly plus flip)
D is a scale matrix
V (and thus V^T) is a unitary matrix
Punchline: An arbitrary n-D linear transformation is equivalent to a rotation (plus perhaps a flip), followed by a scale transformation, followed by a rotation.
Advanced: y = Ax = U D V^T x
V^T x expresses x in terms of the basis V.
D rescales each coordinate (each dimension).
The new coordinates are the coordinates of y in terms of the basis U.
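The rotate-scale-rotate interpretation can be verified stage by stage (NumPy sketch; the matrix and vector are my own illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
U, s, Vt = np.linalg.svd(A)

x = np.array([1.0, -2.0])

# Apply the three stages separately:
# 1) Vt @ x : express x in the basis V (a rotation, possibly plus flip)
# 2) diag(s) : rescale each coordinate by a singular value
# 3) U @ ... : rotate into the output basis U
y_stages = U @ (np.diag(s) @ (Vt @ x))

print(np.allclose(y_stages, A @ x))     # True: same result as applying A directly
```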