Krylov subspace methods

Program Lecture 3 (Uppsala, April 2014)
Gerard Sleijpen, Department of Mathematics (sleij101/)

- Krylov basis & Hessenberg matrices
- Arnoldi's expansion
- Selecting approximate eigenvalues
- Convergence
- Stability issues in Arnoldi's decomposition
- Lanczos expansion
- (Krylov) subspace methods: extraction and selection strategies
- Ritz values and harmonic Ritz values

Orthonormal basis of K_k(A, u_0)

Recall V_k = [v_1, ..., v_k]. Suppose v_1, ..., v_k is an orthonormal Krylov basis of K_k(A, u_0). Compute v_{k+1} by orthogonalising A v_k against V_k:

  Expand:        w = A v_k,
  Orthogonalise: v~ = w - V_k h_k   with h_k = V_k^* w,
  Normalise:     v_{k+1} = v~ / ν_k   with ν_k = ||v~||_2.

Theorem. Orthogonalising A v_j against V_j for j = 1, ..., k leads to

  A V_k = V_{k+1} H̄_k,

with V_k orthonormal, spanning K_k(A, v_1), and H̄_k a (k+1) x k upper Hessenberg matrix. (We write H̄_k for this rectangular matrix and H_k for its upper k x k block.)

Note. The matrix H̄_k comes for free in the orthogonalisation process.

Note. With the (k+1)-vector (h_k^T, ν_k)^T as the k-th column of H̄_k, we have

  A v_k = w = V_k h_k + v_{k+1} ν_k = [V_k, v_{k+1}] (h_k^T, ν_k)^T.

Assembling these columns gives A [V_{k-1}, v_k] = [V_k, v_{k+1}] H̄_k, i.e. A V_k = V_{k+1} H̄_k.

With v_1 = ρ_0 u_0, the eigenvalue problem reduces step by step:

  Find y_k such that A V_k y_k ≈ ϑ_k V_k y_k
  Find y_k such that V_{k+1} H̄_k y_k ≈ V_k (ϑ_k y_k)
  Find y_k such that H̄_k y_k ≈ ϑ_k (y_k^T, 0)^T

Details later.
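The three-line expansion step translates almost directly into code. Below is a minimal NumPy sketch (the function name arnoldi_step and the toy data are illustrative, not from the slides), using classical Gram-Schmidt for the orthogonalisation; the stability caveats discussed later apply.

import numpy as np

def arnoldi_step(A, V):
    """One Arnoldi expansion step: orthogonalise A v_k against V = [v_1, ..., v_k].

    Returns the new basis vector v_{k+1} and the new column (h_k^T, nu_k)^T
    of the Hessenberg matrix."""
    w = A @ V[:, -1]            # Expand:        w = A v_k
    h = V.conj().T @ w          # Orthogonalise: h_k = V_k^* w
    v_tilde = w - V @ h         #                v~  = w - V_k h_k
    nu = np.linalg.norm(v_tilde)
    v_next = v_tilde / nu       # Normalise:     v_{k+1} = v~ / nu_k
    return v_next, np.append(h, nu)

# Usage: grow an orthonormal basis of K_3(A, u0) by two expansion steps.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
u0 = rng.standard_normal(n)
V = (u0 / np.linalg.norm(u0)).reshape(n, 1)
for _ in range(2):
    v_next, h_col = arnoldi_step(A, V)
    V = np.column_stack([V, v_next])
print(np.allclose(V.conj().T @ V, np.eye(3)))   # orthonormal basis of K_3(A, u0)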

Hessenberg and Krylov

Hessenberg matrices and Krylov subspaces are intimately related.

Theorem. Consider the relation A V_k = V_{k+1} H̄_k, where V_{k+1} = [V_k, v_{k+1}] is n x (k+1) and H̄_k is (k+1) x k Hessenberg. Then v_1, ..., v_k form a Krylov basis of K_k(A, v_1), i.e., V_j spans K_j(A, v_1) for all j = 1, ..., k.

In Arnoldi's decomposition, V_k is selected to be orthonormal (to ease computations and to enhance stability). Arnoldi's method: orthonormalise A v_k against V_k to obtain v_{k+1}, for all k.

Orthogonalisation

Terminology. If V is an n x k orthonormal matrix and w is an n-vector, then with "orthonormalise w against V" we mean: construct an n-vector v and a (k+1)-vector h such that

  v ⊥ V,   ||v||_2 = 1,   w = [V, v] h.

Notation. [v, h] = Orth(V, w). Use a stable variant of Gram-Schmidt.

Note that the last coordinate of h is 0 if w is in the span of V: in such a case (and if k < n), we select v to be a (random) normalised vector orthogonal to V (we insist on expanding to avoid stagnation in subsequent steps).

Arnoldi's decomposition

  A V_{k-1} = V_k H̄_{k-1},

with V_k n x k orthonormal and H̄_{k-1} k x (k-1) Hessenberg. Expand the decomposition to A V_k = V_{k+1} H̄_k.

Notation. [V_{k+1}, H̄_k] = ArnStep(A, V_k, H̄_{k-1}):

  w = A v_k
  [v_{k+1}, h̄_k] = Orth(V_k, w)
  V_{k+1} = [V_k, v_{k+1}]
  H̄_k = [ [H̄_{k-1}; 0_{1 x (k-1)}], h̄_k ]   (append a zero row to H̄_{k-1}, then h̄_k as last column)

Eigenvalues and Arnoldi's decomposition

A x = λ x. Find a normalised u_k ∈ K_k(A, r_0) such that, with ϑ_k = u_k^* A u_k, the residual r_k = A u_k - ϑ_k u_k is small in some sense and ϑ_k almost has the desired properties.

Arnoldi's decomposition: A V_k = V_{k+1} H̄_k. Note that, with u_k = V_k y_k,

  r_k = V_{k+1} (H̄_k y_k - ϑ_k (y_k^T, 0)^T),   ϑ_k = y_k^* H_k y_k,
  ||r_k||_2 = ||H̄_k y_k - ϑ_k (y_k^T, 0)^T||_2.

The computation of ||r_k||_2 and ϑ_k is in k-space!
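Assembling ArnStep over k steps gives the full decomposition. The sketch below (illustrative names, plain classical Gram-Schmidt, breakdown not handled) builds V_{k+1} and H̄_k, checks A V_k = V_{k+1} H̄_k, and confirms that the residual norm of a Ritz pair can be obtained entirely from the small matrices.

import numpy as np

def arnoldi_decomposition(A, u0, k):
    """Build V_{k+1} (n x (k+1), orthonormal columns) and Hbar_k ((k+1) x k,
    upper Hessenberg) with A V_k = V_{k+1} Hbar_k, by k ArnStep-like expansions."""
    n = len(u0)
    V = np.zeros((n, k + 1))
    Hbar = np.zeros((k + 1, k))
    V[:, 0] = u0 / np.linalg.norm(u0)
    for j in range(k):
        w = A @ V[:, j]                       # expand
        h = V[:, :j + 1].T @ w                # orthogonalisation coefficients
        v_tilde = w - V[:, :j + 1] @ h
        nu = np.linalg.norm(v_tilde)          # (breakdown nu ~ 0 not handled here)
        V[:, j + 1] = v_tilde / nu
        Hbar[:j + 1, j] = h
        Hbar[j + 1, j] = nu
    return V, Hbar

rng = np.random.default_rng(1)
n, k = 50, 10
A = rng.standard_normal((n, n))
u0 = rng.standard_normal(n)
V, Hbar = arnoldi_decomposition(A, u0, k)
print(np.allclose(A @ V[:, :k], V @ Hbar))    # A V_k = V_{k+1} Hbar_k

# Ritz pair from the square block H_k, with its residual norm computed in (k+1)-space.
H = Hbar[:k, :]
theta, Y = np.linalg.eig(H)
y = Y[:, 0] / np.linalg.norm(Y[:, 0])
u = V[:, :k] @ y                              # u_k = V_k y_k
small = np.linalg.norm(Hbar @ y - theta[0] * np.append(y, 0.0))
large = np.linalg.norm(A @ u - theta[0] * u)
print(small, large)                           # equal up to rounding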

Arnoldi's method [Arnoldi '52]

Proposition. With u_k = V_k y_k and r_k = A u_k - ϑ_k u_k, y_k solves H_k y_k = ϑ_k y_k if and only if r_k ⊥ K_k(A, u_0).

  Select k_max and tol
  Set ρ_0 = 1, V_1 = [u / ||u||_2], H̄_0 = []
  for k = 1, ..., k_max do
    Break if ρ_{k-1} < tol
    [V_{k+1}, H̄_k] = ArnStep(A, V_k, H̄_{k-1})
    Solve H_k y = ϑ y for the k eigenpairs (ϑ, y)
    Select a pair, say (ϑ_k, y_k); set y_k ← y_k / ||y_k||_2
    ρ_k = |h_{k+1,k}| |e_k^* y_k|
  end for
  x = V_k y_k, λ = ϑ_k.

Convergence

The space K_k(A, b) = span(V_k) contains all vectors that can be computed with k-1 steps of the (shifted) power method, and also all vectors computed with a polynomial filter of degree k-1. Hence: potentially faster convergence than any polynomial filter method.

Achieving this better convergence depends on how the approximate eigenpairs are extracted from the search subspaces span(V_k). Using Ritz-Galerkin extraction, for extremal eigenvalues (selecting extremal Ritz values): Arnoldi improves on the shifted power method, as GMRES improves on Richardson iteration.

Gram-Schmidt orthogonalisation

[v, h] = Orth(V, w), then v ⊥ V, ||v||_2 = 1, w = [V, v] h.

Classical Gram-Schmidt

  h = V^* w, v~ = w - V h
  ν = ||v~||_2, h ← (h^T, ν)^T, v = v~ / ν

Loss of stability. Sensitive to perturbations on V. DOTs and AXPYs introduce rounding errors. Scaling by ν amplifies rounding errors if tan(∠(w, span(V))) = ν / ||h||_2 ≪ 1.
Note. The cost of computing ||h||_2 is negligible (with respect to the cost of computing ||v~||_2).

Modified Gram-Schmidt

  v = w
  for j = 1, ..., k do
    h_j = v_j^* v, v ← v - v_j h_j
  end for
  ν = ||v||_2, h = (h_1, h_2, ..., h_k, ν)^T, v ← v / ν

Loss of stability. Sensitive to perturbations on V. Smaller rounding errors from AXPYs. Scaling by ν amplifies rounding errors if ν ≪ ||h||_2. More stable than classical Gram-Schmidt, but harder to parallelise.
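Putting the pieces of Arnoldi's method together: the sketch below follows the pseudo-code above. It is illustrative only; the selection rule (Ritz value of largest real part) is just one possible notion of "most promising", and breakdown and restarts are not handled.

import numpy as np

def arnoldi_eig(A, u, k_max=30, tol=1e-8):
    """Arnoldi's method for one eigenpair, following the pseudo-code above."""
    n = len(u)
    V = np.zeros((n, k_max + 1))
    Hbar = np.zeros((k_max + 1, k_max))
    V[:, 0] = u / np.linalg.norm(u)
    rho = 1.0
    for k in range(1, k_max + 1):
        if rho < tol:                                 # Break if rho_{k-1} < tol
            k -= 1
            break
        # ArnStep: expand the decomposition by one column.
        w = A @ V[:, k - 1]
        h = V[:, :k].T @ w
        v_tilde = w - V[:, :k] @ h
        nu = np.linalg.norm(v_tilde)
        V[:, k] = v_tilde / nu
        Hbar[:k, k - 1] = h
        Hbar[k, k - 1] = nu
        # Solve the projected eigenproblem H_k y = theta y and select a pair.
        theta_all, Y = np.linalg.eig(Hbar[:k, :k])
        i = np.argmax(theta_all.real)                 # "most promising" Ritz pair
        theta, y = theta_all[i], Y[:, i] / np.linalg.norm(Y[:, i])
        rho = abs(Hbar[k, k - 1]) * abs(y[-1])        # rho_k = |h_{k+1,k}| |e_k^* y_k|
    return theta, V[:, :k] @ y                        # lambda ~ theta, x ~ V_k y_k

# Usage: rightmost eigenvalue of a random symmetric matrix.
rng = np.random.default_rng(2)
n = 200
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
lam, x = arnoldi_eig(A, rng.standard_normal(n))
print(lam, np.linalg.norm(A @ x - lam * x))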

Gram-Schmidt orthogonalisation

[v, h] = Orth(V, w), then v ⊥ V, ||v||_2 = 1, w = [V, v] h.

Repeated Gram-Schmidt with DGKS criterion

  h = V^* w, v~ = w - V h
  ν = ||v~||_2, µ = ||h||_2
  while ν ≤ τ µ do
    g = V^* v~, v~ ← v~ - V g
    ν = ||v~||_2, µ = ||g||_2, h ← h + g
  end while
  h ← (h^T, ν)^T, v = v~ / ν

Not sensitive to perturbations on V. Smaller rounding errors from AXPYs. Scaling by ν amplifies rounding errors if ν ≪ ||h||_2.

Stability of the Gram-Schmidt variants

Orthogonalisation recursively applied to the columns of an n x k matrix W leads to computed V and R such that

  W + Δ = V R   for some n x k perturbation matrix Δ with ||Δ||_F ≤ 4 k^2 u ||W||_F,

where R is k x k upper triangular. Loss of orthogonality:

  ||V^* V - I_k||_2 ≤ κ u (C_2(W))^l.

  ClassGS:        κ of order kn, l = 2 (conjecture).
  ModGS:          κ of order kn, l = 1.
  RepGS:          κ may depend on 1/(τ k) (rarely), l = 0.
  Householder QR: κ = 0, l = 0.

Gram-Schmidt and Arnoldi

Theorem. Modified Gram-Schmidt is sufficiently stable for solving linear systems. Eigenvalue computations require more stability.

Proof. In Arnoldi, the n x (k+1) matrix W is

  W = [v_1, A v_1, ..., A v_k]   and   R = [e_1, H̄_k].

Hence, when solving A x = r_0 with x_k = V_k y_k, we have

  min_z ||W z||_2 / ||z||_2 ≤ ||r_k||_2 / ||r_0||_2   (take z = (||r_0||_2, -y_k^T)^T).

Therefore, we have the (sharp) estimate

  C_2(W) ≳ ||A||_2 ||r_0||_2 / ||r_k||_2.

So W becomes ill-conditioned only as the residual becomes small: the loss of orthogonality in modified Gram-Schmidt shows up only when the linear system has essentially converged.

Note. When recursively using Gram-Schmidt to compute the component of A v_k that is orthogonal to V_k, the projected matrix H̄_k comes for free.

Arnoldi's decomposition based methods

1) Use recursive expansion for building a Krylov basis V_k (involves high-dimensional operations).
2) Consider a projected problem, such as A V_k y_k - ϑ_k V_k y_k ⊥ V_k, or A V_k y_k - ϑ_k V_k y_k ⊥ A V_k (for theoretical analysis).
3) Form a projected matrix, such as H_k = V_k^* A V_k (high-dimensional operations).
4) Use the projected matrix to solve the projected problem for y_k in k-space (only k-dimensional operations).
5) Assemble u_k = V_k y_k (high-dimensional).
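The loss-of-orthogonality table can be reproduced qualitatively. The sketch below implements the three Gram-Schmidt variants above (with τ = 0.5 as an arbitrary illustrative threshold for the DGKS criterion) and orthogonalises the columns of a deliberately ill-conditioned W: classical GS loses orthogonality roughly like C_2(W)^2, modified GS like C_2(W), repeated GS stays near machine precision. All names and the test matrix are my own.

import numpy as np

def orth_cgs(V, w):
    """Classical Gram-Schmidt: h = V^* w, v~ = w - V h, normalise."""
    h = V.conj().T @ w
    v = w - V @ h
    nu = np.linalg.norm(v)
    return v / nu, np.append(h, nu)

def orth_mgs(V, w):
    """Modified Gram-Schmidt: subtract the projections one column at a time."""
    v = w.copy()
    h = np.zeros(V.shape[1] + 1)
    for j in range(V.shape[1]):
        h[j] = np.vdot(V[:, j], v)
        v = v - h[j] * V[:, j]
    h[-1] = np.linalg.norm(v)
    return v / h[-1], h

def orth_repgs(V, w, tau=0.5):
    """Repeated Gram-Schmidt with a DGKS-style criterion (threshold tau)."""
    h = V.conj().T @ w
    v = w - V @ h
    nu, mu = np.linalg.norm(v), np.linalg.norm(h)
    while nu <= tau * mu:            # orthogonal part suspiciously small: repeat
        g = V.conj().T @ v
        v = v - V @ g
        nu, mu = np.linalg.norm(v), np.linalg.norm(g)
        h = h + g
    return v / nu, np.append(h, nu)

def loss_of_orthogonality(W, orth):
    """Orthogonalise the columns of W one after another; return ||V^* V - I||_2."""
    V = W[:, :1] / np.linalg.norm(W[:, 0])
    for j in range(1, W.shape[1]):
        v, _ = orth(V, W[:, j])
        V = np.column_stack([V, v])
    return np.linalg.norm(V.conj().T @ V - np.eye(W.shape[1]), 2)

# Ill-conditioned test matrix: nearly parallel columns.
rng = np.random.default_rng(3)
n, k = 100, 12
W = rng.standard_normal((n, 1)) @ np.ones((1, k)) + 1e-7 * rng.standard_normal((n, k))
print("C_2(W) =", np.linalg.cond(W))
for name, orth in [("ClassGS", orth_cgs), ("ModGS", orth_mgs), ("RepGS", orth_repgs)]:
    print(name, loss_of_orthogonality(W, orth))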

Krylov subspace methods

Krylov subspace methods search for approximate solutions in a Krylov subspace: the search subspace is a Krylov subspace.

Stages.
  Expansion. Expand a Krylov basis v_1, ..., v_k recursively.
  Extraction. Extract an approximate solution from span(V_k).
  Shrinking (restart). If the space becomes too large: for some l < k, select a Krylov basis ṽ_1, ..., ṽ_l in the space span(V_k) such that span(Ṽ_l) contains promising approximations.

Why search for approximations in Krylov subspaces?

1) Convergence based on polynomial approximation theory (better than Richardson, power method, etc.).
2) The Krylov structure can be exploited to enhance efficiency. For instance, with Arnoldi's method the Hessenberg matrix (projected matrix) comes for free; if A is Hermitian, then expansion vectors can be computed efficiently (as in Lanczos, CG, ...).

Subspace methods (A x = b, A x = λ x)

Iterate until sufficiently accurate:
  Expansion. Expand the search subspace span(V_k). Restart if dim(span(V_k)) is too large.
  Extraction. Extract an appropriate approximate solution (ϑ, u) from the search subspace.

Example. Krylov subspace methods such as GMRES, CG, Arnoldi, Lanczos: expansion by t_k = A v_k.

Goal.
  Expansion: ∠(x, span(V_{k+1})) < ∠(x, span(V_k)).
  Extraction: find u = V_k y_k such that ∠(x, u) ≈ ∠(x, span(V_{k+1})).

Extraction strategies

Let the search subspace be span(V), with V an n x k basis matrix. Find u = V y ∈ span(V) such that:

  (Ritz-)Galerkin:  A u - ϑ u ⊥ V, giving Ritz values;
                    orthogonal residuals A u - b ⊥ V for solving A x = b.
  Petrov-Galerkin:  A u - ϑ u ⊥ A V, giving harmonic Ritz values;
                    minimal residuals for solving A x = b: u = argmin_{z ∈ span(V)} ||A z - b||_2, i.e. A u - b ⊥ A V.
  Refined Ritz:     for a given approximate eigenvalue ϑ, u = argmin_{ũ ∈ span(V)} ||A ũ - ϑ ũ||_2.
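Both extraction conditions become small projected eigenproblems once a basis V of the search subspace is available: Ritz-Galerkin gives the standard problem V^* A V y = ϑ y, and Petrov-Galerkin with test space A V gives the generalised problem (A V)^*(A V) y = ϑ (A V)^* V y. The sketch below computes both sets of values on a small toy problem (dense algebra, names are mine, not from the slides).

import numpy as np

rng = np.random.default_rng(4)
n, k = 60, 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2       # symmetric toy matrix

# Search subspace: a Krylov subspace, here obtained via QR of [b, Ab, ..., A^{k-1} b].
b = rng.standard_normal(n)
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
V, _ = np.linalg.qr(K)
AV = A @ V

# Ritz-Galerkin:  A u - theta u  _|_  span(V)   =>   (V^* A V) y = theta y
ritz = np.linalg.eigvals(V.T @ AV)

# Petrov-Galerkin with test space A span(V) (harmonic Ritz):
#   A u - theta u  _|_  A span(V)   =>   (AV)^*(AV) y = theta (AV)^* V y
harmonic = np.linalg.eigvals(np.linalg.solve(AV.T @ V, AV.T @ AV))

print(np.sort(ritz.real))        # tend to approximate extremal eigenvalues well
print(np.sort(harmonic.real))    # better suited to eigenvalues in the interior, near 0
print(np.sort(np.linalg.eigvalsh(A))[[0, -1]])   # true extremal eigenvalues, for comparison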

Selection

Ritz-Galerkin and Petrov-Galerkin lead to k Ritz pairs (ϑ_i, u_i), respectively Petrov pairs (i = 1, ..., k). Select the most promising one as approximate eigenpair.

"Most promising":
1) Formulate a property that, among all eigenpairs, characterises the wanted eigenpair. Example: λ = max Re(λ_j), λ = min |λ_j|, λ = min |λ_j - τ|, ...
2) Select among all Ritz pairs the one with this property. Example: ϑ = max Re(ϑ_i), ϑ = min |ϑ_i|, ϑ = min |ϑ_i - τ|, ...

Warning. This may lead to a wrong selection. One wrong selection = one useless iteration step. One wrong selection at restart may spoil convergence.

Ritz values

Proposition. Let u = V y. Ritz values are Rayleigh quotients:

  A u - ϑ u ⊥ V   implies   ϑ = ρ(u) = (u^* A u) / (u^* u).

Proposition. For a given approximate eigenvector u, the Rayleigh quotient is the best approximate eigenvalue, i.e., it gives the smallest residual:

  ||A u - ϑ u||_2 ≤ ||A u - ϑ~ u||_2 for all ϑ~ ∈ C   if and only if   ϑ = ρ(u).

Proof. A u - ϑ u ⊥ V implies A u - ϑ u ⊥ V y = u, hence ϑ = ρ(u).
||A u - ϑ u||_2 ≤ ||A u - ϑ~ u||_2 for all ϑ~ ∈ C if and only if A u - ϑ u ⊥ u.

Ritz values

For ease of discussion, assume A X = X Λ with X^* X = I, where X = [x_1, ..., x_n], Λ = diag(λ_1, ..., λ_n): A x_i = λ_i x_i (i = 1, ..., n), and the eigenvectors x_i form an orthonormal basis of C^n. Terminology: A has an orthonormal basis X of eigenvectors.

Note. A is normal iff A^* A = A A^*. Hermitian and unitary matrices are normal.
A is normal if and only if A has an orthonormal basis of eigenvectors.

For ease of discussion, assume A X = X Λ with X^* X = I. Let u be an approximate eigenvector, ||u||_2 = 1, ϑ = ρ(u). Then

  u = Σ_i β_i x_i   with Σ_i |β_i|^2 = 1,   ϑ = ρ(u) = Σ_i |β_i|^2 λ_i.

Proposition. If A is normal, then any Ritz value is a convex mean (i.e., a weighted average) of eigenvalues.

Proposition. Ritz values form a safe selection for finding extremal eigenvalues, but an unsafe selection for interior eigenvalues.
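The propositions above are easy to check numerically for a symmetric (hence normal) matrix: expanding a unit vector u in the eigenvector basis shows that the Rayleigh quotient is the corresponding convex combination of eigenvalues, and that it minimises the residual over all scalar eigenvalue guesses. An illustrative sketch:

import numpy as np

rng = np.random.default_rng(5)
n = 40
A = rng.standard_normal((n, n)); A = (A + A.T) / 2       # symmetric, hence normal

lam, X = np.linalg.eigh(A)                               # A X = X Lambda, X^* X = I

u = rng.standard_normal(n)
u /= np.linalg.norm(u)                                   # approximate eigenvector, ||u||_2 = 1
theta = u @ A @ u                                        # Rayleigh quotient rho(u)

beta = X.T @ u                                           # expansion u = sum_i beta_i x_i
print(np.isclose(np.sum(np.abs(beta)**2), 1.0))          # sum_i |beta_i|^2 = 1
print(np.isclose(theta, np.sum(np.abs(beta)**2 * lam)))  # theta = sum_i |beta_i|^2 lambda_i

# The Rayleigh quotient gives the smallest residual among all eigenvalue guesses:
res = lambda t: np.linalg.norm(A @ u - t * u)
print(res(theta) <= min(res(theta + 0.1), res(theta - 0.1)))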

Harmonic Ritz values

For ease of discussion, assume A X = X Λ with X^* X = I. Assume we are interested in the eigenvalue λ closest to 0, where 0 is in the interior of the spectrum and λ ≠ 0.

Note that A^{-1} x = (1/λ) x and 1/λ is extremal in {1/λ_i}.

With respect to a subspace W, find x~ = W ỹ such that A^{-1} x~ - µ x~ ⊥ W: the largest |µ| forms a safe selection (λ ≈ 1/µ, x ≈ x~).

Select W = A V. Then, with u = V y, we have x~ = A u and

  A^{-1} x~ - µ x~ ⊥ W   iff   µ^{-1} u - A u ⊥ A V   iff   A u - µ^{-1} u ⊥ A V.

Strategy using harmonic Ritz values
1) Solve A u - ϑ u ⊥ A V.
2) Select the ϑ closest to 0.

Proposition. If A is normal, then harmonic Ritz values are harmonic means of the eigenvalues.

Proposition. Harmonic Ritz values form a safe selection for finding eigenvalues in the interior (close to 0).

Harmonic Ritz relates to Ritz for an inverted matrix.
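The last remark can be checked directly: the harmonic Ritz values of A with respect to span(V) are the reciprocals of the Ritz values of A^{-1} with respect to W = A span(V), since both come from the same projected pencil with the roles of the two matrices swapped. An illustrative sketch (toy symmetric matrix, names are mine):

import numpy as np

rng = np.random.default_rng(6)
n, k = 50, 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2       # symmetric, generically nonsingular

b = rng.standard_normal(n)
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
V, _ = np.linalg.qr(K)                                   # orthonormal basis of the search space
AV = A @ V

# Harmonic Ritz values of A w.r.t. span(V):  (AV)^*(AV) y = theta (AV)^* V y
theta = np.linalg.eigvals(np.linalg.solve(AV.T @ V, AV.T @ AV))

# Ritz values of A^{-1} w.r.t. W = A span(V):  W^* A^{-1} W y~ = mu W^* W y~
W = AV
mu = np.linalg.eigvals(np.linalg.solve(W.T @ W, W.T @ np.linalg.solve(A, W)))

# The two pencils are each other's inverses, so 1/mu reproduces the harmonic Ritz values.
print(np.sort(theta.real))
print(np.sort(1.0 / mu.real))

# Selection: the harmonic Ritz value closest to 0 targets an interior eigenvalue.
eigs = np.linalg.eigvalsh(A)
print(theta[np.argmin(np.abs(theta))], eigs[np.argmin(np.abs(eigs))])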
