Examination paper for TMA4205 Numerical Linear Algebra




Department of Mathematical Sciences

Examination paper for TMA4205 Numerical Linear Algebra

Academic contact during examination: Markus Grasmair
Phone: 97580435

Examination date: December 16, 2015
Examination time (from-to): 09:00-13:00

Permitted examination support material: C: Specified, written and handwritten examination support materials are permitted. A specified, simple calculator is permitted. The permitted examination support materials are:
- Y. Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003 (book or printout)
- L. N. Trefethen and D. Bau: Numerical Linear Algebra. SIAM, 1997 (book or photocopy)
- G. Golub and C. Van Loan: Matrix Computations. 3rd ed. The Johns Hopkins University Press, 1996 (book or photocopy)
- J. W. Demmel: Applied Numerical Linear Algebra. SIAM, 1997 (book or printout)
- E. Rønquist: Note on "The Poisson problem in $\mathbb{R}^2$: diagonalization methods" (printout)
- K. Rottmann: Matematisk formelsamling
- Your own lecture notes from the course (handwritten)

Language: English
Number of pages: 6
Number of pages enclosed: 0

Checked by: Date / Signature

Problem 1

a) Let
$$A = \begin{pmatrix} 0 & 2 \\ 1 & 3 \end{pmatrix}.$$
Perform one iteration of the QR algorithm (for computing eigenvalues) with shift $\mu = 1$.

Solution: The shifted matrix
$$A - \mu I = \begin{pmatrix} -1 & 2 \\ 1 & 2 \end{pmatrix}$$
has orthogonal (but not orthonormal) columns. Thus $Q$ in its QR factorization is easily obtained by rescaling the columns, and $R$ is diagonal with the lengths of the columns of $A - \mu I$ on the diagonal:
$$Q = \begin{pmatrix} -1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix}, \qquad R = \begin{pmatrix} \sqrt{2} & 0 \\ 0 & 2\sqrt{2} \end{pmatrix}.$$
Furthermore $Q = Q^T$ and $R = R^T$, therefore
$$RQ + \mu I = (QR + \mu I)^T = A^T.$$

b) Let now $A \in \mathbb{R}^{n \times n}$ be an arbitrary square matrix. Assume that the shift $\mu$ in the QR algorithm with shifts is equal to one of the eigenvalues of $A$. How can we easily detect this situation based on the QR factorization of the shifted matrix?

Solution: If $\mu \in \sigma(A)$ then $0 = \det(A - \mu I) = \det(QR) = \det(Q)\det(R)$. Since $Q$ is unitary, $|\det(Q)| = 1$ and therefore $\det(R) = 0$. Now $R$ is upper triangular and therefore must have a zero element on the diagonal.
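A minimal NumPy sketch checking part a): it builds the $Q$ and $R$ given in the solution, confirms that they factor $A - \mu I$, and confirms that one shifted QR step produces $A^T$.

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [1.0, 3.0]])
mu = 1.0
s = np.sqrt(2.0)

# Q and R from the solution above: the columns of A - mu*I are orthogonal,
# so Q is obtained by normalizing them and R is diagonal.
Q = np.array([[-1/s, 1/s],
              [ 1/s, 1/s]])
R = np.diag([s, 2*s])

print(np.allclose(Q @ R, A - mu*np.eye(2)))   # True: QR factorization of A - mu*I
A1 = R @ Q + mu*np.eye(2)                     # one step of the shifted QR algorithm
print(np.allclose(A1, A.T))                   # True: the next iterate is A^T
```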

Problem 2

Let
$$A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix}.$$

a) Find a singular value decomposition of $A$.

Solution: $A$ is symmetric and therefore unitarily diagonalizable. Its characteristic polynomial is
$$p_A(\lambda) = (1-\lambda)^2 - 9 = \lambda^2 - 2\lambda - 8,$$
thus the eigenvalues are $\lambda_1 = -2$ and $\lambda_2 = 4$. The corresponding eigenvectors are for example
$$q_1 = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}, \qquad q_2 = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}.$$
Therefore
$$A = [q_1, q_2]\,\mathrm{diag}(-2, 4)\,[q_1, q_2]^T = [q_1, q_2]\,\mathrm{diag}(2, 4)\,[-q_1, q_2]^T,$$
the latter being an SVD of $A$.

b) Find the best, with respect to the 2-norm, rank-1 approximation of $A^{-1}$. That is, find some vectors $p, q \in \mathbb{R}^2$ minimizing the norm $\|pq^T - A^{-1}\|_2$.

Solution: Let $U\Sigma V^T$ be an SVD of $A$ with non-singular $\Sigma$. Then $A^{-1} = V\Sigma^{-1}U^T$ is an SVD of $A^{-1}$. The best rank-1 approximation of $A^{-1}$ is $V_n\sigma_n^{-1}U_n^T$, where $\sigma_n$ is the smallest singular value of $A$ (the reciprocal of the largest singular value of $A^{-1}$) with the corresponding singular vectors $U_n$, $V_n$. Given the previously computed SVD we can take $p = q_1$, $q = -\tfrac{1}{2}q_1$,
$$pq^T = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}\left(-\tfrac{1}{2}\right)\begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix} = \begin{pmatrix} -1/4 & 1/4 \\ 1/4 & -1/4 \end{pmatrix}.$$

c) Let $B \in \mathbb{R}^{n\times n}$ be an arbitrary non-singular matrix, and let $x_0, b \in \mathbb{R}^n$ be given vectors. Further, let $B_1 = pq^T$ be a rank-1 approximation of $B^{-1}$ for some $p, q \in \mathbb{R}^n$. Consider a general projection method with a search space $K$ and a constraint space $L$ for solving the left-preconditioned linear algebraic system $B_1 B x = B_1 b$. Show that $\tilde{x} \in x_0 + K$ satisfies the Petrov-Galerkin conditions if and only if at least one of the following conditions holds: (i) $b - B\tilde{x} \perp q$ or (ii) $p \perp L$.

Solution: The Petrov-Galerkin conditions are: $\tilde{x} \in x_0 + K$ and $B_1 b - B_1 B\tilde{x} = pq^T(b - B\tilde{x}) \perp L$. The condition on the left is given; the expression on the right is the product of the vector $p$ and the scalar $q^T(b - B\tilde{x})$. Thus the resulting vector is orthogonal to $L$ if and only if $p \perp L$ or $q^T(b - B\tilde{x}) = 0$.
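For part b), the rank-1 matrix $pq^T$ found above can be compared against the truncated SVD of $A^{-1}$ (Eckart-Young). A short NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [3.0, 1.0]])
Ainv = np.linalg.inv(A)

# Truncated SVD of A^{-1}: keep only its largest singular value.
U, s, Vt = np.linalg.svd(Ainv)
best_rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])

# The rank-1 matrix p q^T from the solution (p = q1, q = -q1/2).
q1 = np.array([1.0, -1.0]) / np.sqrt(2.0)
pqT = np.outer(q1, -0.5 * q1)

print(np.allclose(pqT, best_rank1))            # True
print(np.linalg.norm(pqT - Ainv, 2), s[1])     # both equal 1/4
```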

Problem 3

Let $A \in \mathbb{C}^{n\times n}$ be an arbitrary square matrix. We put $B = (A + A^H)/2$ and $C = (A - A^H)/2$; in particular $A = B + C$.

a) Show that $\sigma(B) \subset \mathbb{R}$ and $\sigma(C) \subset i\mathbb{R}$, where $\sigma(\cdot)$ is the spectrum of a matrix and $i^2 = -1$.

Solution: The short answer here is that $B$ is Hermitian ($B^H = B$) and $C$ is skew-Hermitian ($C^H = -C$). One can also argue directly like this: if $Bv = \lambda v$ for some $\lambda \in \mathbb{C}$ and $v \in \mathbb{C}^n \setminus \{0\}$, then
$$\lambda\|v\|^2 = (\lambda v, v) = (Bv, v) = (v, B^H v) = (v, Bv) = (v, \lambda v) = \bar{\lambda}\|v\|^2.$$
Since $\|v\|^2 \neq 0$ we get $\lambda = \bar{\lambda}$, or $\lambda \in \mathbb{R}$. The same computation with $C^H = -C$ gives $\lambda = -\bar{\lambda}$, that is, $\lambda \in i\mathbb{R}$.

b) Let $\alpha \in \mathbb{C}$ be an arbitrary scalar, and $I \in \mathbb{R}^{n\times n}$ be the identity matrix. Show that $B \pm \alpha I$ and $C \pm \alpha I$ are unitarily diagonalizable.

Solution: A matrix is unitarily diagonalizable if and only if it is normal. Normality follows from that of $B$ and $C$, and the fact that they commute with $\pm\alpha I$. Alternatively, by direct computation:
$$(B \pm \alpha I)^H(B \pm \alpha I) = (B \pm \bar{\alpha} I)(B \pm \alpha I) = B^2 \pm (\alpha + \bar{\alpha})B + \alpha\bar{\alpha} I = (B \pm \alpha I)(B \pm \alpha I)^H,$$
$$(C \pm \alpha I)^H(C \pm \alpha I) = (-C \pm \bar{\alpha} I)(C \pm \alpha I) = -C^2 \pm (\bar{\alpha} - \alpha)C + \alpha\bar{\alpha} I = (C \pm \alpha I)(C \pm \alpha I)^H.$$

Assume now that both $B + \alpha I$ and $C + \alpha I$ are non-singular. Consider the following iterative algorithm for solving the system $Ax = b$, starting with some initial approximation $x_0 \in \mathbb{C}^n$:

for $k = 0, 1, \ldots$ do
    $x_{k+1/2} = (B + \alpha I)^{-1}\bigl[b - (C - \alpha I)x_k\bigr]$
    $x_{k+1} = (C + \alpha I)^{-1}\bigl[b - (B - \alpha I)x_{k+1/2}\bigr]$
end for

c) Show that if $\alpha = 0$ the algorithm converges, but the limits $\bar{x} = \lim_{k\to\infty} x_k$ and $\hat{x} = \lim_{k\to\infty} x_{k+1/2}$ do not necessarily coincide or solve the system $A\bar{x} = b$ (or $A\hat{x} = b$).

Solution: Direct computation:
$$x_{1/2} = B^{-1}[b - Cx_0], \qquad x_1 = C^{-1}[b - Bx_{1/2}] = C^{-1}\bigl[b - BB^{-1}[b - Cx_0]\bigr] = x_0.$$
Thus the algorithm converges, but not necessarily to the solution of the system $Ax = b$.
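The stagnation in part c) is easy to reproduce numerically. A small NumPy sketch, with an ad hoc $2\times 2$ example in which both the symmetric part $B$ and the skew-symmetric part $C$ are non-singular:

```python
import numpy as np

def full_step(B, C, b, x, alpha):
    # One sweep of the alternating iteration from Problem 3.
    I = np.eye(B.shape[0])
    x_half = np.linalg.solve(B + alpha*I, b - (C - alpha*I) @ x)
    x_next = np.linalg.solve(C + alpha*I, b - (B - alpha*I) @ x_half)
    return x_half, x_next

B = np.array([[2.0, 1.0], [1.0, 2.0]])    # Hermitian (here: symmetric) part
C = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-Hermitian part, non-singular
b = np.array([1.0, 2.0])
x0 = np.array([0.3, -0.7])

x_half, x1 = full_step(B, C, b, x0, alpha=0.0)
print(np.allclose(x1, x0))                 # True: the iterates cycle back to x0
print(np.allclose((B + C) @ x_half, b))    # False: x_{1/2} does not solve Ax = b
```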

d) Show that the sequence of points $x_k$ generated by the algorithm satisfies $\lim_{k\to\infty}\|x_k - A^{-1}b\|_2 = 0$ for an arbitrary $x_0 \in \mathbb{C}^n$ if and only if
$$\rho\bigl((C + \alpha I)^{-1}(B - \alpha I)(B + \alpha I)^{-1}(C - \alpha I)\bigr) < 1,$$
where $\rho(\cdot)$ is the spectral radius of a matrix.

Solution: Exactly as with matrix-splitting algorithms, one considers the error $e_k = x_k - A^{-1}b$. Then
$$e_{k+1/2} = -(B + \alpha I)^{-1}(C - \alpha I)e_k,$$
and
$$e_{k+1} = -(C + \alpha I)^{-1}(B - \alpha I)e_{k+1/2} = (C + \alpha I)^{-1}(B - \alpha I)(B + \alpha I)^{-1}(C - \alpha I)e_k.$$
Again, exactly as with matrix-splitting algorithms, we obtain convergence from an arbitrary starting point if and only if the spectral radius of the iteration matrix is $< 1$.

e) Let $A$ be Hermitian and positive definite. Show that the algorithm above converges for an arbitrary $\alpha > 0$ and $x_0 \in \mathbb{C}^n$.

Solution: If $A$ is Hermitian then $C = 0$ and $A = B$. If $A$ is additionally positive definite then so is $B$. Thus the algorithm's iteration matrix is
$$(\alpha I)^{-1}(B - \alpha I)(B + \alpha I)^{-1}(-\alpha I) = -(B - \alpha I)(B + \alpha I)^{-1}.$$
If $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $B$, then the eigenvalues of the iteration matrix are $-(\lambda_i - \alpha)/(\lambda_i + \alpha)$. Since $\lambda_i > 0$ and $\alpha > 0$, the spectral radius of the iteration matrix is $< 1$.
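A sketch illustrating parts d) and e): for a symmetric positive definite $A$ (so $B = A$ and $C = 0$), the spectral radius of the iteration matrix equals $\max_i |(\lambda_i - \alpha)/(\lambda_i + \alpha)| < 1$. The test matrix and the value of $\alpha$ below are arbitrary choices.

```python
import numpy as np

def iteration_matrix(B, C, alpha):
    # T = (C + aI)^{-1} (B - aI) (B + aI)^{-1} (C - aI), as in part d).
    I = np.eye(B.shape[0])
    inner = np.linalg.solve(B + alpha*I, C - alpha*I)
    return np.linalg.solve(C + alpha*I, (B - alpha*I) @ inner)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # symmetric positive definite
B, C = A, np.zeros_like(A)          # Hermitian and skew-Hermitian parts of A
alpha = 1.5

T = iteration_matrix(B, C, alpha)
rho = max(abs(np.linalg.eigvals(T)))
lam = np.linalg.eigvalsh(A)
print(rho, max(abs((lam - alpha) / (lam + alpha))))   # equal, and < 1
```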

Problem 4

Let $A \in \mathbb{R}^{n\times n}$ be a symmetric and positive definite matrix and $b \in \mathbb{R}^n$ be an arbitrary vector. Let $x_* = A^{-1}b$.

a) Show that the standard $A$-norm error estimate for the conjugate gradient algorithm,
$$\|x_m - x_*\|_A \le 2\left[\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right]^m \|x_0 - x_*\|_A,$$
implies the 2-norm estimate
$$\|x_m - x_*\|_2 \le 2\sqrt{\kappa}\left[\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right]^m \|x_0 - x_*\|_2,$$
where $\kappa = \lambda_{\max}(A)/\lambda_{\min}(A)$ is the spectral condition number of $A$.

Solution: Since $A$ is normal (it is symmetric!), it is unitarily diagonalizable. Let $q_1, \ldots, q_n$ be an orthonormal basis composed of eigenvectors of $A$ and $\lambda_1, \ldots, \lambda_n$ be the corresponding (positive) eigenvalues. Let $x = \sum_i \alpha_i q_i$. Then
$$\|x\|_2^2 = \sum_i |\alpha_i|^2, \qquad \|x\|_A^2 = \sum_i |\alpha_i|^2 \lambda_i.$$
Thus
$$\min_i\{\lambda_i\}\,\|x\|_2^2 \le \|x\|_A^2 \le \max_i\{\lambda_i\}\,\|x\|_2^2,$$
from which the estimate follows: applying these inequalities to $x_m - x_*$ and $x_0 - x_*$ gives $\|x_m - x_*\|_2 \le \lambda_{\min}^{-1/2}\|x_m - x_*\|_A$ and $\|x_0 - x_*\|_A \le \lambda_{\max}^{1/2}\|x_0 - x_*\|_2$, and combining these with the $A$-norm estimate yields the extra factor $\sqrt{\kappa}$.

Let $V_{2m}$, $H_{2m}$ be the matrices produced after $2m$ iterations of the Arnoldi process (without breakdown) applied to $A$, starting from some vector $r_0 = b - Ax_0$. Let $\hat{V}_m := [V_{:,1}, V_{:,3}, \ldots, V_{:,2m-1}]$ (i.e., the submatrix of $V_{2m}$ corresponding to the odd columns), and $\hat{H}_m = \hat{V}_m^T A \hat{V}_m$.

b) Show that $\hat{H}_m$ is a non-singular diagonal matrix.

Solution: By construction $H_{2m} = V_{2m}^T A V_{2m}$, thus $\hat{H}_m = \hat{V}_m^T A \hat{V}_m$ is its submatrix corresponding to the odd rows/columns. As $A$ is symmetric, $H_{2m}$ is tridiagonal, and as a result $\hat{H}_m$ is diagonal: its entries $(\hat{H}_m)_{ij} = (H_{2m})_{2i-1,\,2j-1}$ vanish for $i \ne j$, since then $|(2i-1) - (2j-1)| \ge 2$. Furthermore, $(\hat{H}_m)_{ii} = (\hat{V}_m)_{:,i}^T A (\hat{V}_m)_{:,i} > 0$ since $A$ is positive definite and the columns of $\hat{V}_m$ (and $V_{2m}$) have length 1.
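The structure claimed in part b) can be observed numerically. Below is a minimal sketch with a plain Arnoldi routine (modified Gram-Schmidt) and a random symmetric positive definite test matrix; all names are ad hoc.

```python
import numpy as np

def arnoldi_basis(A, r0, steps):
    # Returns V with `steps` orthonormal columns spanning the Krylov space of r0.
    V = np.zeros((len(r0), steps))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(steps - 1):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)                  # symmetric positive definite
r0 = rng.standard_normal(8)

V = arnoldi_basis(A, r0, 6)                  # 2m = 6 Arnoldi vectors
V_hat = V[:, 0::2]                           # odd columns v1, v3, v5
H_hat = V_hat.T @ A @ V_hat
off_diag = H_hat - np.diag(np.diag(H_hat))
print(np.max(np.abs(off_diag)))              # near machine precision: H_hat is diagonal
print(np.diag(H_hat))                        # positive entries -> non-singular
```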

c) Consider now an orthogonal projection method for the system $Ax = b$ with $L = K = \operatorname{Ran} \hat{V}_m$, where we seek $\hat{x}_m \in x_0 + K$ satisfying the Galerkin orthogonality condition. Show that
$$\hat{x}_m = x_0 + (r_0^T r_0)(r_0^T A r_0)^{-1} r_0.$$
(That is, the method takes one steepest descent step and then stops improving the solution.)

Solution: Since $K = \operatorname{Ran} \hat{V}_m$ we can write $\hat{x}_m = x_0 + \hat{V}_m \hat{y}_m$ for some $\hat{y}_m$. The Galerkin orthogonality condition is
$$0 = \hat{V}_m^T(b - A\hat{x}_m) = \hat{V}_m^T(r_0 - A\hat{V}_m\hat{y}_m) = \hat{V}_m^T r_0 - \hat{H}_m\hat{y}_m = \|r_0\|_2\, e_1 - \hat{H}_m\hat{y}_m,$$
since $(\hat{V}_m)_{:,1} = r_0/\|r_0\|_2$. Therefore
$$\hat{y}_m = \begin{pmatrix} \|r_0\|_2/(\hat{H}_m)_{11} \\ 0 \\ \vdots \\ 0 \end{pmatrix}
= \begin{pmatrix} \|r_0\|_2/\bigl((\hat{V}_m)_{:,1}^T A (\hat{V}_m)_{:,1}\bigr) \\ 0 \\ \vdots \\ 0 \end{pmatrix}
= \begin{pmatrix} \|r_0\|_2^3/(r_0^T A r_0) \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
As a result,
$$\hat{x}_m = x_0 + \|r_0\|_2^3 (r_0^T A r_0)^{-1} (\hat{V}_m)_{:,1} = x_0 + \|r_0\|_2^2 (r_0^T A r_0)^{-1} r_0.$$
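Finally, a self-contained sketch of part c): the Galerkin solution over $\operatorname{Ran}\hat{V}_m$ coincides with a single steepest descent step from $x_0$. The Arnoldi routine and test data mirror the previous sketch.

```python
import numpy as np

def arnoldi_basis(A, r0, steps):
    # Same Arnoldi routine (modified Gram-Schmidt) as in the previous sketch.
    V = np.zeros((len(r0), steps))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(steps - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)                  # symmetric positive definite
b = rng.standard_normal(8)
x0 = rng.standard_normal(8)
r0 = b - A @ x0

V_hat = arnoldi_basis(A, r0, 6)[:, 0::2]     # odd Arnoldi vectors
H_hat = V_hat.T @ A @ V_hat
y_hat = np.linalg.solve(H_hat, V_hat.T @ r0)
x_galerkin = x0 + V_hat @ y_hat              # orthogonal projection onto Ran V_hat

x_sd = x0 + (r0 @ r0) / (r0 @ (A @ r0)) * r0 # one steepest descent step
print(np.allclose(x_galerkin, x_sd))         # True
```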