Eigenvalue Solvers. Derivation of Methods

1 Eigenvalue Solvers. Derivation of Methods

2 Similarity Transformation
A similarity transformation of A has the form B A B^{-1}; this can be done with any nonsingular B. Let Ax = λx; then (B A B^{-1})(Bx) = B A x = λ Bx. So B A B^{-1} has the same eigenvalues as A, and its eigenvectors are Bx, where x is an eigenvector of A. Although any nonsingular B is possible, the most stable and accurate algorithms work with an orthogonal (unitary) matrix, as used, for example, in the QR algorithm.

3 Similarity Transformation
Orthogonal similarity transformation for A: Q^* A Q, where Q^* Q = I. If

    Q^* A Q = [ L_1  F ; 0  L_2 ],

then λ(A) = λ(L_1) ∪ λ(L_2). If we can find Q = [Q_1 Q_2] that yields such a decomposition, we have reduced the problem to two smaller problems. Moreover, A Q_1 = Q_1 L_1, and range(Q_1) is an invariant subspace. An eigenpair L_1 z = λz gives an eigenpair of A: A Q_1 z = Q_1 L_1 z = λ Q_1 z.
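
A small numerical illustration (assuming NumPy/SciPy are available; the matrix A below is just a random test matrix): the real Schur decomposition computed here is exactly such an orthogonal similarity transformation, with Q^T A Q quasi-triangular and the spectrum preserved.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))            # arbitrary test matrix (assumption)

T, Q = schur(A)                            # A = Q T Q^T, Q orthogonal, T quasi-triangular
print(np.allclose(Q.T @ Q, np.eye(6)))     # Q is orthogonal
print(np.allclose(Q @ T @ Q.T, A))         # the similarity reproduces A
# the spectrum is invariant under the similarity transformation
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(T)))
```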

4 Approximation over Search Space
For large matrices we cannot use full transformations, and often we do not need all eigenvalues/eigenvectors. We look for a proper basis Q_1 that captures the relevant eigenpairs; we do not need Q_2. Approximation over the subspace range(Q_1): L_1 = Q_1^* A Q_1. When is an approximation good (enough)? We will rarely find A Q_1 − Q_1 L_1 = 0 unless we do a huge amount of work. That is not necessary: we are working with approximations and we must deal with numerical error anyway.

5 Approximation over Search Space
Let A Q_1 − Q_1 L_1 = R, with ‖R‖ small relative to ‖A‖. Now

    (A − R Q_1^*) Q_1 = A Q_1 − R = Q_1 L_1,

so range(Q_1) is an exact invariant subspace of the perturbed matrix Â = A − R Q_1^*, with ‖Â − A‖ / ‖A‖ = ‖R‖ / ‖A‖. If ‖R‖ / ‖A‖ is sufficiently small, then Q_1 is acceptable. In fact, this is as good as we can expect (unless we're lucky): any numerical operation involves perturbed operands! Note that we cannot always say that Q_1 is accurate.

6 Power method
So, what kind of methods lead to a good Q_1? In many cases matrices are sparse and/or the matrix-vector product Ax can be computed very cheaply. First consider a very small subspace: a single vector. The well-known method is the Power method:

    v^{(k+1)} = A v^{(k)} / ‖A v^{(k)}‖,   λ^{(k+1)} = (A v^{(k)})_j / v^{(k)}_j,

repeated until convergence. It converges to the eigenpair with the largest eigenvalue in absolute value, if that eigenvalue is unique. It may converge very slowly if the dominant eigenvalue is not well separated (the next one is close).
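
A minimal NumPy sketch of the power method (the test matrix, the tolerance, and the use of the Rayleigh quotient v^T A v as the eigenvalue estimate are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def power_method(A, v0, tol=1e-10, maxit=1000):
    """Power method sketch: returns an approximate dominant eigenpair of A."""
    v = v0 / np.linalg.norm(v0)
    lam = 0.0
    for _ in range(maxit):
        w = A @ v                          # the only access to A is a matrix-vector product
        lam = v @ w                        # Rayleigh-quotient estimate of the dominant eigenvalue
        if np.linalg.norm(w - lam * v) < tol * abs(lam):   # residual ||A v - lam v||
            break
        v = w / np.linalg.norm(w)          # normalize the next iterate
    return lam, v

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = A + A.T                                # symmetric test matrix (assumption)
lam, v = power_method(A, rng.standard_normal(100))
print(lam, np.linalg.norm(A @ v - lam * v))
```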

7 Power method
Assume A is diagonalizable: A = X Λ X^{-1}, with |λ_1| > |λ_j| for j > 1. Decompose v^{(0)} along the eigenvectors: v^{(0)} = Σ_j α_j x_j. Then

    v^{(k)} ∝ A^k v^{(0)} = Σ_j α_j λ_j^k x_j = λ_1^k ( α_1 x_1 + Σ_{j>1} α_j (λ_j / λ_1)^k x_j ).

So v^{(k)} → x_1 (up to scaling), since |λ_j / λ_1|^k → 0. If |λ_1| > |λ_2| ≥ |λ_j| for j > 2, then |λ_2 / λ_1| determines the rate of convergence. Convergence may be slow if |λ_2| ≈ |λ_1|, and it is linear at best.

8 Work with search space
Improve on the Power method by keeping all iterates. Krylov space: K^m(A, v) = span{ v, A v, A^2 v, …, A^{m-1} v }. Let Q_m = [q_1 q_2 … q_m] give an orthonormal basis for K^m(A, v). An approximate eigenpair (μ, Q_m w) is called a Ritz pair if its residual is orthogonal to the search space: Q_m^* (A Q_m w − μ Q_m w) = 0. We find Ritz pairs from the eigenpairs (μ, w) of H_m = Q_m^* A Q_m. This is a small eigenvalue problem that we can solve using standard software (e.g., the QR algorithm in LAPACK).
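
A short sketch of Ritz extraction over a given search space (the basis Q below comes from an arbitrary random block, purely as an assumption for illustration; in practice it would be a Krylov basis):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 10
A = rng.standard_normal((n, n)); A = A + A.T         # symmetric test matrix (assumption)
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))     # orthonormal basis of some search space

H = Q.T @ A @ Q                                      # small projected matrix H = Q^* A Q
mu, W = np.linalg.eigh(H)                            # eigenpairs of the small problem
ritz_vectors = Q @ W                                 # Ritz vectors Q w
residuals = np.linalg.norm(A @ ritz_vectors - ritz_vectors * mu, axis=0)
print(mu)            # Ritz values
print(residuals)     # ||A (Q w) - mu (Q w)|| for each Ritz pair
```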

9 Arnoldi
Generate an orthonormal basis for the Krylov subspace:

    q_1 = v^{(0)} / ‖v^{(0)}‖_2;
    for k = 1:m,
        q̃ = A q_k;
        for j = 1:k,
            h_{j,k} = q_j^* q̃;
            q̃ = q̃ − q_j h_{j,k};
        end
        h_{k+1,k} = ‖q̃‖_2;
        q_{k+1} = q̃ / h_{k+1,k};
    end

Accurate orthogonalization is required: (partial) reorthogonalization. This yields A Q_m = Q_{m+1} H̄_m, where Q_{m+1} = [q_1 q_2 … q_{m+1}] and H̄_m = [h_{j,k}], j = 1:m+1, k = 1:m, with Q_{m+1}^* Q_{m+1} = I_{m+1} and range(Q_{m+1}) = K^{m+1}(A, v^{(0)}).
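
The same algorithm as a NumPy sketch (plain modified Gram-Schmidt, no reorthogonalization or breakdown handling; matrix and dimensions are illustrative):

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi sketch with modified Gram-Schmidt (assumption: no breakdown, h_{k+1,k} != 0)."""
    n = v0.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(m):
        w = A @ Q[:, k]                      # new Krylov direction
        for j in range(k + 1):               # orthogonalize against q_1 .. q_{k+1}
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        Q[:, k + 1] = w / H[k + 1, k]
    return Q, H                              # A Q_m = Q_{m+1} H_m (up to rounding)

rng = np.random.default_rng(3)
A = rng.standard_normal((300, 300))
Q, H = arnoldi(A, rng.standard_normal(300), m=30)
print(np.linalg.norm(A @ Q[:, :30] - Q @ H))   # should be near machine precision * ||A||
```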

10 Arnoldi
Compute m approximate eigenpairs from H_m = Q_m^* A Q_m: H_m y = θ y, which gives approximate pairs (θ, Q_m y) of A. To reduce costs we select, after m steps, k approximate eigenvectors (k < m), form a new Q_k, and discard the rest. All Ritz vectors (of the Krylov space) have a residual in the same direction, q_{m+1}:

    A Q_m y = Q_{m+1} H̄_m y = Q_m H_m y + q_{m+1} e_m^T y h_{m+1,m} = θ Q_m y + q_{m+1} y_m h_{m+1,m}.

We can use q_{m+1} to continue generating vectors from the Krylov space.

11 Lanczos
The Lanczos algorithm is Arnoldi applied to a symmetric matrix, using a short recurrence (only the two previous vectors): A Q_m = Q_{m+1} T̄_m, where T̄_m = [t_{j,k}] is tridiagonal and symmetric (even cheaper). Use T_m = Q_m^* A Q_m to compute approximate eigenpairs: T_m y = θ y gives approximate pairs (θ, Q_m y) of A. If we only need eigenvalues (and do not reorthogonalize) we can discard the vectors, and restarting is not necessary. Special care is required to avoid spurious copies of eigenvalues. Otherwise, the techniques for Arnoldi can be used.
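
A sketch of the three-term Lanczos recurrence (no reorthogonalization, so in finite precision orthogonality is lost and spurious eigenvalue copies can appear; the test matrix is an assumption):

```python
import numpy as np

def lanczos(A, v0, m):
    """Lanczos sketch for symmetric A: returns the basis Q and tridiagonal T = Q_m^* A Q_m."""
    n = v0.size
    Q = np.zeros((n, m + 1))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(m):
        w = A @ Q[:, k]
        if k > 0:
            w = w - beta[k - 1] * Q[:, k - 1]   # subtract the previous direction
        alpha[k] = Q[:, k] @ w
        w = w - alpha[k] * Q[:, k]              # subtract the current direction
        beta[k] = np.linalg.norm(w)
        Q[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

rng = np.random.default_rng(4)
A = rng.standard_normal((400, 400)); A = A + A.T
Q, T = lanczos(A, rng.standard_normal(400), m=60)
theta = np.linalg.eigvalsh(T)                    # Ritz values approximate the extreme eigenvalues
print(theta[-3:], np.linalg.eigvalsh(A)[-3:])
```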

12 IRA
We need to restart from time to time to save memory and CPU time. This also holds for Lanczos if we want to compute eigenvectors or (partially) reorthogonalize for accuracy. An efficient and accurate implementation is the so-called Implicitly Restarted Arnoldi (Lanczos) method. This method discards unwanted Ritz pairs using a polynomial filter applied to the Hessenberg (tridiagonal) matrix.

13 IRA
Discard unwanted eigenpairs with a polynomial filter (Sorensen '92). The remaining vectors span a new Krylov subspace with a different starting vector, one closer to the wanted invariant subspace. The method builds a search space of dimension m, discards m − k vectors, and extends the space again to m vectors using the residual of the Ritz pairs (the same vector for all of them: w_{m+1} from A W_m = W_{m+1} H̄_m). ARPACK implements this method; see the user guide published by SIAM (Lehoucq, Sorensen, and Yang '98).
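
ARPACK is also exposed through SciPy, which gives a convenient way to try implicitly restarted Arnoldi/Lanczos without writing any restart logic (the 1D Laplacian below is only an illustrative test matrix):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')  # 1D Laplacian (assumption)

# Implicitly restarted Lanczos (ARPACK) via SciPy: a few extreme eigenvalues.
largest = spla.eigsh(A, k=6, which='LM', return_eigenvectors=False)
# Eigenvalues closest to 0 via shift-invert (ARPACK factors A - sigma*I internally).
smallest = spla.eigsh(A, k=6, sigma=0.0, return_eigenvectors=False)
print(np.sort(largest))
print(np.sort(smallest))
```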

14 Inverse Iteration
What if we want the smallest (in magnitude) eigenvalue, or any other one that is not the largest in magnitude? Inverse iteration:

    x_k = A^{-1} x_{k-1}          ⇒ solve A x_k = x_{k-1},   or
    x_k = (A − sI)^{-1} x_{k-1}   ⇒ solve (A − sI) x_k = x_{k-1}.

Inverse iteration with A^{-1} converges to the smallest (in magnitude) eigenvalue if it is unique, and inverse iteration with (A − sI)^{-1} converges to the eigenvalue/eigenvector closest to s. By choosing s more accurately we can improve the convergence speed. (A − sI)^{-1} has eigenvalues 1/(λ_j − s), so, assuming we want to approximate the eigenpair with (simple) eigenvalue λ_k, the rate of convergence is determined by

    max_{j ≠ k} |λ_k − s| / |λ_j − s|.

This can be made arbitrarily small if we know λ_k sufficiently accurately.
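
A sketch of shifted inverse iteration (the LU factorization of A − sI is computed once and reused; the matrix, shift, and stopping rule are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(A, s, x0, maxit=50, tol=1e-12):
    """Shifted inverse iteration sketch: converges to the eigenpair closest to the shift s."""
    n = A.shape[0]
    lu = lu_factor(A - s * np.eye(n))        # factor once, solve repeatedly
    x = x0 / np.linalg.norm(x0)
    lam = s
    for _ in range(maxit):
        x = lu_solve(lu, x)                  # x <- (A - sI)^{-1} x
        x = x / np.linalg.norm(x)
        lam = x @ (A @ x)                    # Rayleigh-quotient eigenvalue estimate
        if np.linalg.norm(A @ x - lam * x) < tol * np.linalg.norm(A, 1):
            break
    return lam, x

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 200)); A = A + A.T
lam, x = inverse_iteration(A, s=0.1, x0=rng.standard_normal(200))
print(lam)    # eigenvalue of A closest to the shift 0.1
```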

15 Rayleigh Quotient Iteration
Let x be an approximate eigenvector for a real matrix A. Then an approximate eigenvalue is given by the n × 1 linear least squares problem x λ̂ ≈ Ax, whose normal equation x^T x λ̂ = x^T A x gives

    λ̂ = x^T A x / x^T x   (Rayleigh quotient).

Let A v = λ v and let A be symmetric; then ‖x − v‖ = O(ε) implies λ̂ − λ = O(ε²). Indeed, let x = v + ε p, where v ⟂ p and ‖v‖_2 = ‖p‖_2 = 1. Then

    λ̂ = x^T A x / x^T x
       = (v + εp)^T A (v + εp) / (v + εp)^T (v + εp)
       = (v^T A v + 2ε p^T A v + ε² p^T A p) / (v^T v + 2ε p^T v + ε² p^T p)
       = (λ + ε² p^T A p) / (1 + ε²)
       ≈ (λ + ε² p^T A p)(1 − ε²) = λ + O(ε²),

using p^T A v = λ p^T v = 0.

16 Rayleigh Quotient Iteration
So given an O(ε) approximation to v we get an O(ε²) approximation to λ. Cunning plan: improve the eigenvalue estimate (the shift) every step using the Rayleigh quotient:

    solve (A − ρ_{k-1} I) x_k = x_{k-1},   with ρ_{k-1} = x_{k-1}^T A x_{k-1} / x_{k-1}^T x_{k-1}.

This is called Rayleigh Quotient Iteration. For symmetric matrices, close to the solution, we have cubic convergence. Let x = v_k + ε p = v_k + ε Σ_{j≠k} γ_j v_j, where v_k ⟂ p, ‖v_k‖_2 = ‖p‖_2 = 1, and λ̂ = x^T A x / x^T x = λ_k + O(ε²). Then

    x̃ = (A − λ̂ I)^{-1} x = (λ_k − λ̂)^{-1} v_k + ε Σ_{j≠k} γ_j v_j / (λ_j − λ̂),

and normalization (scaling by λ_k − λ̂) gives

    x̃ (λ_k − λ̂) = v_k + (λ_k − λ̂) ε Σ_{j≠k} γ_j v_j / (λ_j − λ̂).

The norm of the error now gives

    ‖x̃ (λ_k − λ̂) − v_k‖_2 = ‖(λ_k − λ̂) ε Σ_{j≠k} γ_j v_j / (λ_j − λ̂)‖_2 ≤ C |λ_k − λ̂| ε = O(ε³),

since |λ_k − λ̂| = O(ε²) and the gaps |λ_j − λ̂|, j ≠ k, are bounded away from zero.
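
A sketch of Rayleigh Quotient Iteration for a symmetric matrix (the dense solve, tolerances, and test matrix are illustrative; which eigenpair it converges to depends on the starting vector):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, maxit=20, tol=1e-12):
    """RQI sketch: the shift is the Rayleigh quotient, updated every step."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    rho = x @ (A @ x)                                    # Rayleigh-quotient shift
    for _ in range(maxit):
        try:
            y = np.linalg.solve(A - rho * np.eye(n), x)  # one (new) linear solve per step
        except np.linalg.LinAlgError:
            break                                        # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        rho = x @ (A @ x)
        if np.linalg.norm(A @ x - rho * x) < tol * np.linalg.norm(A, 1):
            break
    return rho, x

rng = np.random.default_rng(6)
A = rng.standard_normal((200, 200)); A = A + A.T
rho, x = rayleigh_quotient_iteration(A, rng.standard_normal(200))
print(rho, np.linalg.norm(A @ x - rho * x))
```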

17 Rayleigh Quotient Iteration
The Rayleigh Quotient iteration, with ρ_{k-1} = x_{k-1}^T A x_{k-1} / x_{k-1}^T x_{k-1} and the solve (A − ρ_{k-1} I) x_k = x_{k-1}, requires repeatedly solving a (new) linear system. We can make this cheaper by first reducing A to tridiagonal or Hessenberg form using unitary similarity transformations. This gives T = Q^T A Q tridiagonal if A is symmetric, and H = Q^T A Q (upper) Hessenberg if A is nonsymmetric. Solving a tridiagonal linear system of order n takes only O(n) operations. Solving an upper Hessenberg system of order n takes O(n²) operations: n Givens rotations plus back substitution, O(n²) in total.
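
For example, with SciPy (the matrices here are arbitrary test matrices; the reduction is the standard Householder-based one):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 8))            # arbitrary nonsymmetric test matrix (assumption)

H, Q = hessenberg(A, calc_q=True)          # A = Q H Q^T with Q orthogonal, H upper Hessenberg
print(np.allclose(Q @ H @ Q.T, A))
print(np.allclose(np.tril(H, -2), 0.0))    # zero below the first subdiagonal

S = A + A.T                                # symmetric case: the reduction gives a tridiagonal matrix
T, Qs = hessenberg(S, calc_q=True)
print(np.allclose(np.triu(T, 2), 0.0) and np.allclose(np.tril(T, -2), 0.0))
```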

18 (Approximate) Shift-Invert Methods
Inverse iteration with a shift and Rayleigh quotient iteration are examples of shift-invert methods. They give very fast convergence, but require the inverse of a matrix (i.e., linear solves). This is expensive for a large matrix, especially if the shift changes every iteration. Hence, approximate shift-invert methods (really: shift, approximate invert...). The solve is approximated by a one-step approximation or a few steps of an iterative method: Davidson's method, the Jacobi-Davidson method.

19 Davidson
Davidson's method also uses a search space and Ritz values/vectors from that space, but the extension of the search space is different. Let Q_m ∈ R^{n×m} be such that Q_m^* Q_m = I, H_m = Q_m^* A Q_m, and H_m y = λ y. We have the residual r = A Q_m y − λ Q_m y. Then extend the search space as follows:

    Solve (D_A − λI) t = r, where D_A is the diagonal of A.
    t = (I − Q_m Q_m^*) t (orthogonalize), and set Q_{m+1} = [Q_m, t/‖t‖_2].
    Compute H_{m+1} and its eigenpairs; continue.
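
A compact sketch of Davidson's method targeting the lowest eigenvalue of a symmetric, diagonally dominant matrix (the matrix, the target, the iteration count, and the stopping rule are illustrative assumptions; no restarting is done):

```python
import numpy as np

def davidson(A, v0, maxit=30, tol=1e-8):
    """Davidson sketch: diagonal 'preconditioner' (D_A - theta I)^{-1} applied to the residual."""
    n = A.shape[0]
    dA = np.diag(A)                                # D_A, the diagonal of A
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)    # search-space basis Q_m
    theta, u = 0.0, v0
    for _ in range(maxit):
        H = V.T @ A @ V                            # projected matrix H_m = Q_m^* A Q_m
        w, Y = np.linalg.eigh(H)
        theta, y = w[0], Y[:, 0]                   # lowest Ritz pair
        u = V @ y
        r = A @ u - theta * u                      # residual
        if np.linalg.norm(r) < tol:
            break
        t = r / (dA - theta)                       # solve (D_A - theta I) t = r
        t = t - V @ (V.T @ t)                      # orthogonalize against the search space
        t = t / np.linalg.norm(t)
        V = np.column_stack([V, t])                # Q_{m+1} = [Q_m, t]
    return theta, u

rng = np.random.default_rng(8)
n = 500
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2                                  # diagonally dominant symmetric test matrix
theta, u = davidson(A, rng.standard_normal(n))
print(theta)    # close to the smallest eigenvalue of A
```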

20 Davidson
Very effective for outermost (especially dominant) eigenvalues. Various explanations for this have been given. One: in many cases the matrix (D_A − λI)^{-1}(A − λI) has only a few outlying eigenvalues (1). Another interprets Davidson as an improvement of an old method by Jacobi (2).

21 Jacobi-Davidson
Linear eigenvalue problem: Ax = λx. Orthonormal basis for the search space: Q_k, with H_k = Q_k^* A Q_k. Ritz pair (λ, u), where H_k y = λ y, u = Q_k y, and residual r = Au − λu. Correction equation: solve

    (I − uu^*)(A − λI)(I − uu^*) t = −r,   t ⟂ u.

Then t = (I − Q_k Q_k^*) t / ‖(I − Q_k Q_k^*) t‖, Q_{k+1} = [Q_k, t], H_{k+1} = Q_{k+1}^* A Q_{k+1}; continue. Discard unwanted eigenpairs with a polynomial filter or using a Schur decomposition of H_{k+1}. (Sleijpen and van der Vorst '96)
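
A sketch of the correction-equation step, solved approximately with a few unpreconditioned GMRES iterations on the projected operator (in practice a preconditioner for A − λI is used; all names and iteration counts here are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jd_correction(A, u, theta, r, inner_steps=10):
    """Approximately solve (I - u u^*)(A - theta I)(I - u u^*) t = -r with t orthogonal to u."""
    n = A.shape[0]

    def matvec(t):
        t = t - u * (u @ t)                  # project: (I - u u^*) t
        t = A @ t - theta * t                # apply (A - theta I)
        return t - u * (u @ t)               # project again

    op = LinearOperator((n, n), matvec=matvec)
    rhs = -(r - u * (u @ r))                 # project the right-hand side onto u-perp
    t, _ = gmres(op, rhs, restart=inner_steps, maxiter=1)   # a few inner iterations only
    return t - u * (u @ t)                   # enforce t orthogonal to u

# One Jacobi-Davidson expansion step starting from a random vector (test matrix is an assumption).
rng = np.random.default_rng(9)
A = rng.standard_normal((300, 300)); A = A + A.T
u = rng.standard_normal(300); u /= np.linalg.norm(u)
theta = u @ (A @ u)
r = A @ u - theta * u
t = jd_correction(A, u, theta, r)
print(abs(u @ t))                            # should be ~0: the correction is orthogonal to u
```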

22 Some literature
(1) R. B. Morgan, Generalizations of Davidson's method for computing eigenvalues of large nonsymmetric matrices, J. Comput. Physics.
(2) Sleijpen and van der Vorst, A Jacobi-Davidson iteration method for linear eigenvalue problems, SIAM Review 42(2), 2000 (extension of the SIMAX 1996 paper). Provides many good references.

23 Quick Comparison
Implicitly restarted Arnoldi (IRA) and Davidson work well for eigenvalues on the boundary of the spectrum. Jacobi-Davidson also works for these, but may be less efficient. Jacobi-Davidson (JD) is superior for interior eigenvalues, probably because it is an approximate shift-and-invert technique, and more effectively so than Davidson. To improve the 'invert' part, preconditioning can be used in the linear solver; this may improve the effectiveness dramatically. (One can also use various linear solvers, e.g., multigrid, to improve convergence.) IRA has some advantage when looking for several eigenvalues at once.
