Lecture 5: Singular Value Decomposition SVD (1)




EEM3L1: Numerical and Analytical Techniques

Lecture 5: Singular Value Decomposition, SVD (1)

Motivation for SVD (1)

SVD = Singular Value Decomposition. Consider the system of linear equations Ax = b, and suppose b is perturbed to b + δb. The solution becomes x = A^-1 b + A^-1 δb, so the consequent change in x is A^-1 δb. For what perturbation δb will this error be biggest? How big can the norm of the error be, in terms of the norm of δb? The norm of the error, relative to the norm of δb, can be expressed in terms of a number called the smallest singular value of A.
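
A small MATLAB sketch of this effect (not from the slides; the matrix and the size of the perturbation are chosen purely for illustration). Perturbing b along the last left singular vector amplifies the error by 1/σ_n:

A = [2 0; 0 0.01];            % smallest singular value is 0.01
b = [1; 1];
x = A \ b;
[U, S, V] = svd(A);
db = 1e-3 * U(:, end);        % perturb b along the last left singular vector
dx = A \ (b + db) - x;        % resulting change in the solution
norm(dx) / norm(db)           % approx 1/0.01 = 100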

Motivation for SVD (2)

In which direction must b be perturbed to give the biggest error? If cond(A) is large, how can we then find an accurate solution to Ax = b? Both of these questions can be addressed using the Singular Value Decomposition. The remainder of the linear algebra section will be taken up with the Singular Value Decomposition (SVD).

Orthogonal Matrices revisited

Remember that an m x n matrix U is called column orthogonal if U^T U = I, where I is the identity matrix. In other words, the column vectors of U are orthogonal to each other and each of them has unit norm. If n = m then U is called orthogonal; in this case UU^T = I as well.
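
These properties are easy to check in MATLAB (a sketch; the matrices below are just examples built with the QR factorisation):

[Q, R] = qr(rand(4, 2), 0);   % economy QR: Q is 4 x 2 with orthonormal columns
norm(Q' * Q - eye(2))         % approx 0, so Q is column orthogonal
[Q, R] = qr(rand(4));         % Q is 4 x 4, hence orthogonal
norm(Q * Q' - eye(4))         % approx 0 as well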

SVD of a Matrix

Let A be an m x n matrix such that the number of rows m is greater than or equal to the number of columns n. Then there exist:
(i) an m x n column orthogonal matrix U,
(ii) an n x n diagonal matrix S with positive or zero elements, and
(iii) an n x n orthogonal matrix V,
such that A = USV^T. This is the Singular Value Decomposition (SVD) of A.
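
A hedged MATLAB illustration of the decomposition for a tall matrix (the entries of A are arbitrary):

A = [1 2; 3 4; 5 6];          % m = 3 rows, n = 2 columns
[U, S, V] = svd(A, 0);        % economy SVD: U is 3 x 2, S and V are 2 x 2
norm(A - U * S * V')          % approx 0, so A = U*S*V'
norm(U' * U - eye(2))         % U is column orthogonal
norm(V' * V - eye(2))         % V is orthogonal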

The Singular Values of A

Suppose S = diag{σ_1, σ_2, ..., σ_n}. By convention it is assumed that σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0. The values σ_1, σ_2, ..., σ_n are called the singular values of A.

SVD of a square matrix

The case where A is an n x n square matrix is of particular interest. In this case the Singular Value Decomposition of A is given by A = USV^T, where both U and V are n x n orthogonal matrices.

Example

Let A = [ 0.9501 0.8913 0.8214 0.9218
          0.2311 0.7621 0.4447 0.7382
          0.6068 0.4565 0.6154 0.1763
          0.4860 0.0185 0.7919 0.4057 ]

In MATLAB, >> [U,S,V]=svd(A); returns the SVD of A. The diagonal matrix of singular values is

S = [ 2.4479 0      0      0
      0      0.6716 0      0
      0      0      0.3646 0
      0      0      0      0.1927 ]

and U and V are the corresponding 4 x 4 orthogonal matrices of left and right singular vectors.
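
The example can be reproduced directly (a sketch; MATLAB prints the results to four decimal places by default):

A = [0.9501 0.8913 0.8214 0.9218
     0.2311 0.7621 0.4447 0.7382
     0.6068 0.4565 0.6154 0.1763
     0.4860 0.0185 0.7919 0.4057];
[U, S, V] = svd(A);
diag(S)'                      % the singular values 2.4479 0.6716 0.3646 0.1927
norm(A - U * S * V')          % approx 0, so A = U*S*V'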

Singular values and Eigenvalues

The singular values of A are not the same as its eigenvalues:

>> eig(A)
ans =
   2.3230
   0.0914 + 0.4586i
   0.0914 - 0.4586i
   0.2275

For any matrix A, the matrix A^H A is Hermitian (hence normal) with non-negative eigenvalues. The singular values of A are the square roots of the eigenvalues of A^H A.
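
This relationship is easy to verify numerically (a sketch; the matrix is arbitrary, and A' equals A^H for a real matrix):

A = [2 1; 1 3; 0 1];
svd(A)                             % the singular values of A
sqrt(sort(eig(A' * A), 'descend')) % the same values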

Calculating Inverses with SVD

Let A be an n x n matrix. Then U, S and V are also n x n. U and V are orthogonal, so their inverses are equal to their transposes. S is diagonal, so its inverse is the diagonal matrix whose elements are the reciprocals of the elements of S. Hence

A^-1 = V diag(1/σ_1, 1/σ_2, ..., 1/σ_n) U^T
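
A sketch of this formula in MATLAB (the matrix A is an arbitrary well-conditioned example):

A = [4 1 0; 1 3 1; 0 1 2];
[U, S, V] = svd(A);
Ainv = V * diag(1 ./ diag(S)) * U';   % inverse computed via the SVD
norm(Ainv - inv(A))                   % approx 0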

Calculating Inverses (contd)

If one of the σ_i's is zero, or so small that its value is dominated by round-off error, then there is a problem! The more of the σ_i's that have this problem, the more singular A is. SVD gives a way of determining how singular A is. This notion of how singular A is is linked with the condition number of A, which is the ratio of its largest singular value to its smallest singular value.
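
In MATLAB the two descriptions agree (a sketch; the matrix is arbitrary):

A = [1 2; 3 4];
s = svd(A);
s(1) / s(end)                 % ratio of largest to smallest singular value
cond(A)                       % MATLAB's 2-norm condition number: the same value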

The Null Space of A

Let A be an n x n matrix, and consider the linear equations Ax = b, where x and b are vectors. The set of vectors x such that Ax = 0 is a linear vector space, called the null space of A. If A is invertible, the null space of A contains only the zero vector. If A is singular, the null space will contain non-zero vectors. The dimension of the null space of A is called the nullity of A.

The Range of A

The set of vectors which are targets for A, i.e. the set of all vectors b for which there exists a vector x such that Ax = b, is called the range of A. The range of A is a linear vector space whose dimension is the rank of A. If A is singular, then the rank of A will be less than n. Note that n = Rank(A) + Nullity(A).

SVD, Range and Null Space

Singular Value Decomposition constructs orthonormal bases for the range and the null space of a matrix. The columns of U which correspond to non-zero singular values of A are an orthonormal set of basis vectors for the range of A. The columns of V which correspond to zero singular values form an orthonormal basis for the null space of A.
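
A sketch with a deliberately rank-deficient matrix (its third column is the sum of the first two):

A = [1 0 1; 0 1 1; 1 1 2];    % rank 2: column 3 = column 1 + column 2
[U, S, V] = svd(A);
diag(S)'                      % the third singular value is (numerically) zero
range_basis = U(:, 1:2);      % orthonormal basis for the range of A
null_basis = V(:, 3);         % orthonormal basis for the null space of A
norm(A * null_basis)          % approx 0
rank(A)                       % 2, so the nullity is 3 - 2 = 1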

Solving linear equations with SVD

Consider a set of homogeneous equations Ax = 0. Any vector x in the null space of A is a solution; hence any column of V whose corresponding singular value is zero is a solution. Now consider Ax = b with b ≠ 0. A solution only exists if b lies in the range of A. If so, then the set of equations does have a solution. In fact it has infinitely many solutions, because if x is a solution and y is in the null space of A, then x + y is also a solution.
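
Continuing the rank-deficient example (a sketch; the particular solution xp is chosen by hand):

A = [1 0 1; 0 1 1; 1 1 2];    % singular, as before
[U, S, V] = svd(A);
x0 = V(:, 3);                 % solves the homogeneous system A*x = 0
norm(A * x0)                  % approx 0
b = A * [1; 2; 0];            % a right-hand side that lies in the range of A
xp = [1; 2; 0];               % one particular solution of A*x = b
norm(A * (xp + 5 * x0) - b)   % approx 0: adding any null-space vector gives another solution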

Solving Ax = b ≠ 0 using SVD

If we want a particular solution, then we might pick the solution x with the smallest norm ||x||_2. That solution is

x = V diag(1/σ_1, 1/σ_2, ..., 1/σ_n) U^T b

where, for each singular value σ_j such that σ_j = 0, 1/σ_j is replaced by 0.
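
A sketch of this recipe in MATLAB. The tolerance tol below is an illustrative choice for deciding which singular values count as zero; MATLAB's pinv applies a similar rule:

A = [1 0 1; 0 1 1; 1 1 2];          % the singular matrix from the earlier sketch
b = [1; 2; 3];                       % lies in the range of A
[U, S, V] = svd(A);
s = diag(S);
tol = max(size(A)) * eps(s(1));      % treat singular values below tol as zero
sinv = zeros(size(s));
sinv(s > tol) = 1 ./ s(s > tol);     % invert only the non-zero singular values
x = V * diag(sinv) * U' * b;         % the minimum-norm solution
norm(A * x - b)                      % approx 0
norm(x - pinv(A) * b)                % agrees with MATLAB's pseudo-inverse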

Least Squares Estimation

If b is not in the range of A then there is no vector x such that Ax = b, so

x = V diag(1/σ_1, 1/σ_2, ..., 1/σ_n) U^T b

cannot give an exact solution. However, the vector it returns does the closest possible job in the least squares sense: it finds the vector x which minimises R = ||Ax - b||. R is called the residual of the solution.
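
A final MATLAB sketch, fitting an overdetermined system in the least squares sense (the data are arbitrary):

A = [1 1; 1 2; 1 3; 1 4];            % 4 equations, 2 unknowns
b = [1.1; 1.9; 3.2; 3.9];            % not in the range of A
[U, S, V] = svd(A, 0);               % economy SVD: U is 4 x 2
x = V * diag(1 ./ diag(S)) * U' * b; % least squares solution via the SVD
norm(x - A \ b)                      % approx 0: agrees with MATLAB's backslash
norm(A * x - b)                      % the (minimised) residual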