AMS526: Numerical Analysis I (Numerical Linear Algebra)

Lecture 19: SVD revisited; Software for Linear Algebra
Xiangmin Jiao, Stony Brook University

Outline
1. Computing SVD
2. Software for Linear Algebra

Computing the SVD

Intuitive idea for computing the SVD of $A \in \mathbb{R}^{m \times n}$:
- Form $A^T A$ and compute its eigenvalue decomposition $A^T A = V \Lambda V^T$
- Let $\Sigma = \Lambda^{1/2}$, i.e., $\Sigma = \operatorname{diag}(\sqrt{\lambda_1}, \sqrt{\lambda_2}, \dots, \sqrt{\lambda_n})$
- Solve the system $U \Sigma = A V$ to obtain $U$

This method can be very efficient if $m \gg n$. However, it is not very stable, especially for the smaller singular values, because of the squaring of the condition number:
- For the SVD of $A$, $|\tilde{\sigma}_k - \sigma_k| = O(\epsilon_{\mathrm{machine}} \, \|A\|)$, where $\tilde{\sigma}_k$ and $\sigma_k$ denote the computed and exact $k$th singular values
- If computed from the eigenvalue decomposition of $A^T A$, $|\tilde{\sigma}_k - \sigma_k| = O(\epsilon_{\mathrm{machine}} \, \|A\|^2 / \sigma_k)$, which is problematic if $\sigma_k \ll \|A\|$

If one is interested only in the relatively large singular values, then using the eigenvalue decomposition is not a problem. For general situations, a more stable algorithm is desired.
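
To see why this works, substitute the SVD $A = U \Sigma V^T$ into $A^T A$:
$$A^T A = (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^T (U^T U) \Sigma V^T = V \Sigma^2 V^T,$$
so the eigenvalues of $A^T A$ are the squared singular values $\lambda_k = \sigma_k^2$, and its eigenvector matrix is exactly the matrix $V$ of right singular vectors. This squaring is also the source of the squared condition number noted above.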

Computing the SVD

Typical algorithms for computing the SVD are similar to those for computing eigenvalues:
- Consider $A \in \mathbb{R}^{m \times m}$; then the symmetric matrix
  $$H = \begin{bmatrix} 0 & A^T \\ A & 0 \end{bmatrix}$$
  has the eigenvalue decomposition
  $$H = Q \begin{bmatrix} \Sigma & 0 \\ 0 & -\Sigma \end{bmatrix} Q^T, \qquad Q = \frac{1}{\sqrt{2}} \begin{bmatrix} V & V \\ U & -U \end{bmatrix},$$
  where $A = U \Sigma V^T$ gives the SVD. This approach is stable.
- In practice, such a reduction is done implicitly, without forming the large matrix $H$
- Typically done in two or more stages: first, reduce $A$ to bidiagonal form by applying different orthogonal transformations on the left and right; second, reduce to diagonal form using a variant of the QR algorithm or a divide-and-conquer algorithm
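
As a quick check of this identity: if $v_k$ and $u_k$ are the $k$th columns of $V$ and $U$, then $A v_k = \sigma_k u_k$ and $A^T u_k = \sigma_k v_k$, so
$$H \begin{bmatrix} v_k \\ u_k \end{bmatrix} = \begin{bmatrix} A^T u_k \\ A v_k \end{bmatrix} = \sigma_k \begin{bmatrix} v_k \\ u_k \end{bmatrix}, \qquad H \begin{bmatrix} v_k \\ -u_k \end{bmatrix} = -\sigma_k \begin{bmatrix} v_k \\ -u_k \end{bmatrix},$$
which accounts for the eigenvalue blocks $\Sigma$ and $-\Sigma$.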

Generalized Eigenvalue Problem

The generalized eigenvalue problem has the form $Ax = \lambda Bx$, where $A$ and $B$ are $m \times m$ matrices
- For example, in structural vibration problems $A$ represents the stiffness matrix and $B$ the mass matrix, and the eigenvalues and eigenvectors determine the natural frequencies and modes of vibration of the structure
- If $A$ or $B$ is nonsingular, the problem can be converted into a standard eigenvalue problem $(B^{-1}A)x = \lambda x$ or $(A^{-1}B)x = (1/\lambda)x$
- If $A$ and $B$ are both symmetric, the preceding transformation loses symmetry and in turn may lose orthogonality of the generalized eigenvectors. If $B$ is positive definite, an alternative, symmetry-preserving transformation is $(L^{-1} A L^{-T}) y = \lambda y$, where $B = LL^T$ is the Cholesky factorization and $y = L^T x$
- If $A$ and $B$ are both singular or indefinite, use the QZ algorithm, which reduces $A$ and $B$ to triangular form simultaneously by orthogonal transformations (see Golub and Van Loan for details)
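
The symmetry-preserving transformation follows directly by substituting $B = LL^T$ and inserting $L^{-T} L^T = I$:
$$Ax = \lambda L L^T x \;\Longrightarrow\; L^{-1} A L^{-T} (L^T x) = \lambda (L^T x) \;\Longrightarrow\; (L^{-1} A L^{-T}) y = \lambda y, \quad y = L^T x,$$
and $L^{-1} A L^{-T}$ is symmetric whenever $A$ is, so orthogonality of the eigenvectors $y$ is preserved.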

Outline
1. Computing SVD
2. Software for Linear Algebra

Software for Linear Algebra

LAPACK: Linear Algebra PACKage (www.netlib.org/lapack/lug)
- Standard library for solving linear systems and eigenvalue problems
- Successor of LINPACK (www.netlib.org/linpack) and EISPACK (www.netlib.org/eispack)
- Depends on BLAS (Basic Linear Algebra Subprograms)
- Parallel extensions include ScaLAPACK and PLAPACK
- Note: uses Fortran conventions for matrix layout (column-major)

MATLAB
- Factorizations of A: lu(A) and chol(A)
- Solve Ax = b: x = A\b
  - Uses back/forward substitution for triangular matrices
  - Uses Cholesky factorization for positive-definite matrices
  - Uses LU factorization with partial pivoting for nonsymmetric matrices
  - Uses Householder QR for least-squares problems
  - Uses special routines for matrices with special sparsity patterns
- Uses LAPACK and other packages internally

Serial and parallel solvers for sparse matrices also exist (e.g., SuperLU, TAUCS)
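
As a rough C-level picture of what x = A\b does for a general square matrix, here is a minimal sketch (not from the course materials) of the same factor-solve pattern using the LAPACK computational routines dgetrf (LU with partial pivoting) and dgetrs (triangular solves). Column-major storage and pass-by-reference calling are assumed, as discussed below; the hidden Fortran string-length argument of dgetrs is omitted, which works with common compilers.

    /* Factor-solve sketch with LAPACK's dgetrf/dgetrs (Fortran symbols). */
    #include <stdio.h>

    extern void dgetrf_(int *m, int *n, double *a, int *lda, int *ipiv, int *info);
    extern void dgetrs_(char *trans, int *n, int *nrhs, double *a, int *lda,
                        int *ipiv, double *b, int *ldb, int *info);

    int main(void) {
        int n = 3, nrhs = 1, ipiv[3], info;
        /* A stored column by column: A = [1 2 3; 4 5 6; 7 8 10] */
        double A[9] = {1, 4, 7,  2, 5, 8,  3, 6, 10};
        double b[3] = {6, 15, 25};          /* overwritten by the solution x */

        dgetrf_(&n, &n, A, &n, ipiv, &info);                /* PA = LU */
        if (info != 0) { printf("dgetrf failed: %d\n", info); return 1; }
        dgetrs_("N", &n, &nrhs, A, &n, ipiv, b, &n, &info); /* solve LUx = Pb */
        printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);     /* expect [1, 1, 1] */
        return 0;
    }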

Some Commonly Used Functions

Example BLAS routines: matrix-vector multiplication: dgemv; matrix-matrix multiplication: dgemm

        | LU factorization         | Solve linear system      | Est. cond.
        | General | Symmetric      | General | Symmetric      |
LAPACK  | dgetrf  | dpotrf/dsytrf  | dgesv*  | dposv*/dposvx* | dgecon
LINPACK | dgefa   | dpofa/dsifa    | dgesl   | dposl/dsisl    | dgeco
MATLAB  | lu      | chol           | \       | \              | rcond

        | Linear least squares             | Eigenvalue/vector | SVD
        | QR     | Solve  | Rank-deficient | General | Sym.    |
LAPACK  | dgeqrf | dgels* | dgelsy*        | dgeev*  | dsyev*  | dgesvd*
LINPACK | dqrdc  | dqrsl  | dqrst          | -       | -       | dsvdc
MATLAB  | qr     | \      | \              | eig     | eig     | svd

For BLAS, LINPACK, and LAPACK, the first letter s stands for single-precision real, d for double-precision real, c for single-precision complex, and z for double-precision complex. Starred LAPACK routines are driver routines; the others are computational routines.
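
As a small illustration of these conventions, here is a minimal sketch (not from the course materials) calling the BLAS routine dgemv, i.e., double-precision general matrix-vector multiplication, $y \leftarrow \alpha A x + \beta y$, from C:

    /* y := alpha*A*x + beta*y with BLAS dgemv (Fortran symbol, column-major). */
    #include <stdio.h>

    extern void dgemv_(char *trans, int *m, int *n, double *alpha,
                       double *a, int *lda, double *x, int *incx,
                       double *beta, double *y, int *incy);

    int main(void) {
        int m = 2, n = 3, inc = 1;
        double alpha = 1.0, beta = 0.0;
        double A[6] = {1, 2,  3, 4,  5, 6};  /* 2x3, columns: (1,2), (3,4), (5,6) */
        double x[3] = {1, 1, 1};
        double y[2];

        dgemv_("N", &m, &n, &alpha, A, &m, x, &inc, &beta, y, &inc);
        printf("y = [%g, %g]\n", y[0], y[1]);  /* A = [1 3 5; 2 4 6], so y = [9, 12] */
        return 0;
    }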

Using LAPACK Routines in C Programs

LAPACK was written in Fortran 77, so special attention is required when calling it from C. Key differences between C and Fortran:
1. Storage of matrices: column-major (Fortran) versus row-major (C/C++)
2. Argument passing for subroutines: pass by reference (Fortran) versus pass by value (C/C++)

A simple example C code, example.c, for solving a linear system using sgesv is available on the class website (a sketch follows below). To compile, issue the command

    cc -o example example.c -llapack -lblas

Hint: to find a function name, refer to the LAPACK Users' Guide. To find the arguments of a given function, search on netlib.org
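
The actual example.c is on the class website; as a stand-in, here is a minimal sketch of what such a program might look like. All arguments are passed by reference, the matrix is stored column-major, and the Fortran symbol carries a trailing underscore (a common, but compiler-dependent, convention):

    /* example.c (sketch): solve Ax = b with the LAPACK driver routine sgesv. */
    #include <stdio.h>

    extern void sgesv_(int *n, int *nrhs, float *a, int *lda,
                       int *ipiv, float *b, int *ldb, int *info);

    int main(void) {
        int n = 3, nrhs = 1, ipiv[3], info;
        /* A stored column by column: A = [3 1 0; 1 3 1; 0 1 3] */
        float A[9] = {3, 1, 0,  1, 3, 1,  0, 1, 3};
        float b[3] = {4, 5, 4};            /* overwritten with the solution x */

        sgesv_(&n, &nrhs, A, &n, ipiv, b, &n, &info);
        if (info == 0)
            printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);  /* expect [1, 1, 1] */
        else
            printf("sgesv failed, info = %d\n", info);
        return 0;
    }

This compiles with the command shown above.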