Maria Cameron

1. The least squares fitting using a non-orthogonal basis

We have learned how to find the least squares approximation of a function f using an orthogonal basis. For example, f can be approximated by a truncated trigonometric Fourier series or by a truncated series based on orthogonal polynomials. Such series converge fast if f(x) is smooth. Now we switch to the case where f(x) is not necessarily smooth or its values are available only at a given set of points. Let the function f be given by the set of values $\{(x_j, f_j)\}_{j=0}^{m-1}$. The values of f contain noise, which makes interpolation inappropriate for finding f at intermediate points. We want to (i) smooth f and (ii) be able to find its values at intermediate points. Let $\{\phi_i\}_{i=0}^{n-1}$ be a set of basis functions. In order to represent f as their linear combination,
$$f(x) \approx \sum_{i=0}^{n-1} c_i \phi_i(x),$$
we need to solve the following $m \times n$ system of linear equations:
(1)
$$\begin{pmatrix} \phi_0(x_0) & \phi_1(x_0) & \dots & \phi_{n-1}(x_0) \\ \phi_0(x_1) & \phi_1(x_1) & \dots & \phi_{n-1}(x_1) \\ \vdots & \vdots & & \vdots \\ \phi_0(x_{m-1}) & \phi_1(x_{m-1}) & \dots & \phi_{n-1}(x_{m-1}) \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_{n-1} \end{pmatrix} = \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_{m-1} \end{pmatrix}.$$
In the typical case there are many data points and few basis functions, i.e., $m \gg n$. The first question is: what is a good set of basis functions? In order to answer this question we need to recall some basic linear algebra concepts.

Definition 1. The norm of a matrix A associated with a vector norm $\|\cdot\|$ is defined as
(2)
$$\|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|}.$$

Exercise Show that if A is a symmetric square $n \times n$ matrix then
$$\|A\| = \rho(A) \equiv \max_i |\lambda_i|,$$
where the $\lambda_i$, $i = 1, \dots, n$, are its eigenvalues.

Exercise Show that if A is an $m \times n$ matrix, $m \geq n$, then $\|A\| = \sqrt{\rho(A^T A)}$.

The condition number of a square matrix A is defined as $\kappa(A) \equiv \|A\|\,\|A^{-1}\|$.
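A minimal Matlab sketch for the exercise below on the monomial basis (a sketch only: the range of n is an arbitrary choice, and the line A = x.^(0:n-1) assumes implicit expansion, available in Matlab R2016b and later):

% Condition number of the matrix (1) for the monomial basis phi_i(x) = x^i
% at the points x_j = j/m with m = n.
ns = 2:2:20;
kappas = zeros(size(ns));
for k = 1:length(ns)
    n = ns(k); m = n;
    x = ((0:m-1)/m)';            % sample points x_j = j/m, as a column
    A = x.^(0:n-1);              % A(j+1,i+1) = x_j^i
    kappas(k) = cond(A);
end
semilogy(ns, kappas, '.-'); xlabel('n'); ylabel('\kappa(A)')

The rapid growth of κ(A) on this plot is exactly why the monomial basis is discouraged below.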

The condition number allows us to bound the relative error of the solution x in terms of the relative error in the data b. Let A be a square matrix and let b be the right-hand side. Let x and x + δx be the solutions of Ax = b and A(x + δx) = b + δb respectively, where δb is a perturbation of the right-hand side. Then
$$\delta x = A^{-1}\delta b, \quad \text{hence} \quad \|\delta x\| \leq \|A^{-1}\|\,\|\delta b\|.$$
On the other hand, since Ax = b, we have
$$\|b\| \leq \|A\|\,\|x\|, \quad \text{hence} \quad \frac{1}{\|x\|} \leq \frac{\|A\|}{\|b\|}.$$
Therefore,
$$\frac{\|\delta x\|}{\|x\|} \leq \|A\|\,\|A^{-1}\|\,\frac{\|\delta b\|}{\|b\|} = \kappa(A)\,\frac{\|\delta b\|}{\|b\|}.$$
Hence, for a good set of basis functions the condition number of the matrix in Eq. (1) is not large. For example, $\phi_i = x^i$ is a bad set of basis functions. Do not use it unless n is really small!

Exercise Calculate the condition numbers of the matrix defined in Eq. (1) for $\phi_i = x^i$ and $x_j = j/m$. Set m = n and plot the graph of κ(A) versus n. The Matlab function that computes the condition number is cond(A).

Since we are going to deal with overdetermined systems of linear equations whose matrices are $m \times n$ with m > n, we need to define the condition number for a rectangular matrix. A very useful concept in matrix theory is the singular value decomposition (SVD). We are going to consider it in detail.

1.1. Singular Value Decomposition.

Theorem 1. Let A be an arbitrary $m \times n$ matrix with $m \geq n$. Then we can write
$$A = U\Sigma V^T,$$
where U is $m \times n$ and $U^T U = I_{n \times n}$, $\Sigma = \mathrm{diag}\{\sigma_1, \dots, \sigma_n\}$ with $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_n \geq 0$, and V is $n \times n$ and $V^T V = I_{n \times n}$. The columns of U, $u_1, \dots, u_n$, are called left singular vectors. The columns of V, $v_1, \dots, v_n$, are called right singular vectors. The numbers $\sigma_1, \dots, \sigma_n$ are called singular values. If m < n, the SVD is defined for $A^T$.

The geometric sense of this theorem is the following. Let us view the matrix A as a map from $\mathbb{R}^n$ into $\mathbb{R}^m$:
$$A : \mathbb{R}^n \to \mathbb{R}^m, \quad x \mapsto Ax.$$
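The bound above is easy to observe numerically; a hedged Matlab sketch (the matrix, the seed, and the size of the perturbation are arbitrary choices, not prescribed by the notes):

% Verify ||dx||/||x|| <= kappa(A) ||db||/||b|| on a random square system.
rng(0);                          % arbitrary seed, for reproducibility
A  = randn(5) + 5*eye(5);        % some nonsingular 5x5 matrix
x  = randn(5,1); b = A*x;
db = 1e-8*randn(5,1);            % small perturbation of the data
dx = A\(b + db) - x;             % resulting perturbation of the solution
fprintf('%.2e <= %.2e\n', norm(dx)/norm(x), cond(A)*norm(db)/norm(b))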

Then one can find orthogonal bases $v_1, \dots, v_n$ in $\mathbb{R}^n$ and $u_1, \dots, u_m$ in $\mathbb{R}^m$, and numbers $\sigma_1, \dots, \sigma_n$, such that $Av_j = \sigma_j u_j$, $j = 1, \dots, n$. Then for any $x \in \mathbb{R}^n$ we have: if
$$x = \sum_{j=1}^{n} x_j v_j, \quad \text{then} \quad Ax = \sum_{j=1}^{n} x_j \sigma_j u_j.$$
For a rectangular matrix A the condition number is
$$\kappa(A) = \frac{\sigma_{\max}}{\sigma_{\min}},$$
where $\sigma_{\max}$ and $\sigma_{\min}$ are the largest and the smallest singular values of A.

Proof. We use induction on m and n: we assume that the SVD exists for $(m-1) \times (n-1)$ matrices and prove it for $m \times n$. We assume $A \neq 0$; otherwise we take $\Sigma = 0$, and U and V are arbitrary orthogonal matrices.

The base case is n = 1 (since $m \geq n$). We write $A = U\Sigma V^T$ with
$$U = \frac{A}{\|A\|}, \quad \Sigma = \|A\|, \quad V = 1,$$
where $\|\cdot\|$ is the 2-norm. For the induction step, choose v so that
$$\|v\| = 1 \quad \text{and} \quad \|A\| = \|Av\| > 0.$$
Let
$$u = \frac{Av}{\|Av\|},$$
which is a unit vector. Choose $\tilde{U}$ and $\tilde{V}$ so that $U = [u, \tilde{U}]$ and $V = [v, \tilde{V}]$ are $m \times m$ and $n \times n$ orthogonal matrices respectively. Now write
$$U^T A V = \begin{bmatrix} u^T \\ \tilde{U}^T \end{bmatrix} A \, [v \ \ \tilde{V}] = \begin{bmatrix} u^T A v & u^T A \tilde{V} \\ \tilde{U}^T A v & \tilde{U}^T A \tilde{V} \end{bmatrix}.$$
Then
$$u^T A v = \frac{(Av)^T (Av)}{\|Av\|} = \|Av\| := \sigma$$
and
$$\tilde{U}^T A v = \tilde{U}^T u \, \|Av\| = 0.$$
We claim that $u^T A \tilde{V} = 0$ too, because otherwise
$$\sigma = \|A\| = \|U^T A V\| \geq \|[1, 0, \dots, 0]\, U^T A V\| = \|[\sigma \ \ u^T A \tilde{V}]\| > \sigma,$$
a contradiction. Therefore,
$$U^T A V = \begin{bmatrix} \sigma & 0 \\ 0 & \tilde{U}^T A \tilde{V} \end{bmatrix} = \begin{bmatrix} \sigma & 0 \\ 0 & \tilde{A} \end{bmatrix}.$$

Now we apply the induction hypothesis to $\tilde{A}$:
$$\tilde{A} = U_1 \Sigma_1 V_1^T.$$
Hence,
$$U^T A V = \begin{bmatrix} \sigma & 0 \\ 0 & U_1 \Sigma_1 V_1^T \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & U_1 \end{bmatrix} \begin{bmatrix} \sigma & 0 \\ 0 & \Sigma_1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & V_1 \end{bmatrix}^T,$$
or
$$A = \left( U \begin{bmatrix} 1 & 0 \\ 0 & U_1 \end{bmatrix} \right) \begin{bmatrix} \sigma & 0 \\ 0 & \Sigma_1 \end{bmatrix} \left( V \begin{bmatrix} 1 & 0 \\ 0 & V_1 \end{bmatrix} \right)^T,$$
which is our desired decomposition.

The SVD has a large number of important algebraic and geometric properties, the most important of which are summarized in the following theorem.

Theorem 2. Let $A = U\Sigma V^T$ be the SVD of the $m \times n$ matrix A, $m \geq n$.
(1) Suppose A is symmetric and $A = U\Lambda U^T$ is an eigendecomposition of A. Then the SVD of A is $U\Sigma V^T$, where $\sigma_i = |\lambda_i|$ and $v_i = u_i \, \mathrm{sign}(\lambda_i)$, with $\mathrm{sign}(0) = 1$.
(2) The eigenvalues of the symmetric matrix $A^T A$ are $\sigma_i^2$. The right singular vectors $v_i$ are the corresponding orthonormal eigenvectors.
(3) The eigenvalues of the symmetric matrix $AA^T$ are $\sigma_i^2$ and $m - n$ zeroes. The left singular vectors $u_i$ are the corresponding orthonormal eigenvectors for the eigenvalues $\sigma_i^2$. One can take any $m - n$ orthogonal vectors as eigenvectors for the eigenvalue 0.
(4) If A has full rank, the solution of $\min_x \|Ax - b\|$ is $x = V\Sigma^{-1}U^T b$.
(5) If A is square and nonsingular, then
$$\|A\| = \sigma_1, \quad \|A^{-1}\| = \frac{1}{\sigma_n}, \quad \text{and} \quad \kappa(A) = \|A\|\,\|A^{-1}\| = \frac{\sigma_1}{\sigma_n}.$$
(6) Suppose
$$\sigma_1 \geq \dots \geq \sigma_r > \sigma_{r+1} = \dots = \sigma_n = 0.$$
Then
$$\mathrm{rank}(A) = r, \quad \mathrm{null}(A) = \{x \in \mathbb{R}^n : Ax = 0 \in \mathbb{R}^m\} = \mathrm{span}(v_{r+1}, \dots, v_n), \quad \mathrm{range}(A) = \mathrm{span}(u_1, \dots, u_r).$$
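These properties are easy to check numerically; a minimal Matlab sketch on a random matrix (our own test, not part of the notes):

% Check properties (2), (4), (5) of Theorem 2 on a random 6x3 matrix.
rng(1); A = randn(6,3);
[U,S,V] = svd(A,'econ');         % thin SVD: U is 6x3, S and V are 3x3
s = diag(S);
B = A'*A; B = (B + B')/2;        % symmetrize against rounding before eig
disp(norm(sort(eig(B)) - sort(s.^2)))   % (2): eigenvalues of A'A are sigma_i^2
disp(abs(norm(A) - s(1)))               % (5): ||A|| = sigma_1
b = randn(6,1);
disp(norm(V*(S\(U'*b)) - A\b))          % (4): x = V*inv(Sigma)*U'*b solves min ||Ax-b||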

(7) $A = U\Sigma V^T = \sum_{i=1}^{n} \sigma_i u_i v_i^T$, i.e., A is a sum of rank 1 matrices. Then a matrix of rank k < n closest to A is
$$A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T, \quad \text{and} \quad \|A - A_k\| = \sigma_{k+1}.$$

Example This example illustrates the low rank approximation of a large matrix. The original image is shown in Fig. 1(a). The rank 3, 10, and 20 approximations are shown in Figs. 1(b), (c), and (d) respectively.

[Figure 1. Low rank approximations of an image. (a): original; (b): rank 3; (c): rank 10; (d): rank 20.]

The sequence of Matlab commands to create an approximation of rank m for a given image is the following.

>> clear all
>> im=imread('IMG_1413.jpg');
>> [m n k]=size(im)
m = 1600
n = 1200
k = 3
>> mimi=zeros(m,n);
>> mimi=sum(im,3);
>> fig=figure;
>> imagesc(mimi)
>> colormap gray
>> set(gca,'DataAspectRatio',[1 1 1])
>> [U S V]=svd(mimi);
>> size(U)
ans = 1600 1600
>> size(V)
ans = 1200 1200
>> size(S)
ans = 1600 1200
>> fig=figure;
>> m=10;
>> rm=U(:,1:m)*S(1:m,1:m)*V(:,1:m)';
>> imagesc(rm)
>> colormap gray
>> set(gca,'DataAspectRatio',[1 1 1])

1.2. The QR decomposition.

1.2.1. Normal equations. Consider the overdetermined system of linear equations
$$Ax = b, \quad A \in \mathbb{R}^{m \times n}, \quad m \geq n.$$

Definition 2. We say that $\bar{x}$ is the least squares solution of Ax = b, A is $m \times n$, $m \geq n$, if
$$\bar{x} = \arg\min_{x \in \mathbb{R}^n} \|Ax - b\|.$$

Now we will show that $\bar{x}$ is given by the formula
(3)
$$\bar{x} = (A^T A)^{-1} A^T b.$$
Note that $\bar{x}$ is the solution of the so-called normal equation, which is obtained from Ax = b by multiplication by $A^T$ from the left. If the matrix A has full rank, i.e., rank(A) = n, the matrix $A^T A$ is symmetric positive definite.
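A quick numerical check of Eq. (3), as a sketch with an arbitrary random system (the proof of minimality follows below; in practice A\b is preferred over forming $A^T A$ explicitly):

% Compare the normal-equation solution (3) with Matlab's backslash.
rng(2); A = randn(50,4); b = randn(50,1);
x_ne = (A'*A)\(A'*b);            % Eq. (3), via a linear solve
x_bs = A\b;                      % Matlab's least squares solution
disp(norm(x_ne - x_bs))          % agreement up to rounding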

Write $x = \bar{x} + e$ and consider $\|Ax - b\|^2$. We want to show that it is minimal if and only if e = 0, i.e., $x = \bar{x}$:
$$\|Ax - b\|^2 = (Ax - b)^T (Ax - b) = (A\bar{x} + Ae - b)^T (A\bar{x} + Ae - b)$$
$$= \|Ae\|^2 + \|A\bar{x} - b\|^2 + 2(Ae)^T (A\bar{x} - b)$$
$$= \|Ae\|^2 + \|A\bar{x} - b\|^2 + 2e^T (A^T A\bar{x} - A^T b)$$
$$= \|Ae\|^2 + \|A\bar{x} - b\|^2 \geq \|A\bar{x} - b\|^2.$$
The equality occurs if and only if e = 0, i.e., the norm $\|Ax - b\|$ is minimal if and only if $x = \bar{x}$ given by Eq. (3). The geometric sense of the least squares solution is the following: the residual $r := A\bar{x} - b$ is orthogonal to the space spanned by the columns of the matrix A, i.e., r dotted with any column of A is zero, or $A^T r = 0$.

1.2.2. Gram-Schmidt Algorithm.

Theorem 3. Let A be $m \times n$, $m \geq n$. Suppose that A has full column rank. Then there exist a unique $m \times n$ orthogonal matrix Q, i.e., $Q^T Q = I_{n \times n}$, and a unique $n \times n$ upper-triangular matrix R with positive diagonal entries $r_{ii} > 0$ such that A = QR.

Proof. The proof of this theorem is given by the Gram-Schmidt orthogonalization process.

Algorithm 1: Gram-Schmidt orthogonalization
Input: matrix A, $m \times n$, rank(A) = n.
Output: orthogonal matrix Q, $m \times n$, $Q^T Q = I_{n \times n}$, and upper-triangular $n \times n$ matrix R with $r_{ii} > 0$.
for i = 1, ..., n do
    $q_i = a_i$;
    for j = 1, ..., i − 1 do
        $r_{ji} = q_j^T a_i$ (CGS)  or  $r_{ji} = q_j^T q_i$ (MGS);
        $q_i = q_i - r_{ji} q_j$;
    end
    $r_{ii} = \|q_i\|$;
    $q_i = q_i / r_{ii}$;
end

Here CGS and MGS stand for the Classic Gram-Schmidt and the Modified Gram-Schmidt respectively. Unfortunately, the Classic Gram-Schmidt algorithm is numerically unstable when the columns of A are nearly linearly dependent. The Modified Gram-Schmidt is better, but still can result in a Q that is far from orthogonal (i.e., $\|Q^T Q - I\|$ is much larger than the machine ε) when A is ill-conditioned.
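A minimal Matlab sketch of Algorithm 1 in its modified (MGS) form (our own transcription of the pseudocode; the function name mgs is chosen for illustration):

function [Q,R] = mgs(A)
% Modified Gram-Schmidt: A (m x n, full column rank) = Q*R with Q'*Q = I.
[~,n] = size(A);
Q = A; R = zeros(n);
for i = 1:n
    for j = 1:i-1
        R(j,i) = Q(:,j)'*Q(:,i);         % MGS: project the current q_i
        Q(:,i) = Q(:,i) - R(j,i)*Q(:,j);
    end
    R(i,i) = norm(Q(:,i));
    Q(:,i) = Q(:,i)/R(i,i);
end
end

Replacing Q(:,j)'*Q(:,i) by Q(:,j)'*A(:,i) gives the classic (CGS) variant; comparing norm(Q'*Q - eye(n)) for the two variants on an ill-conditioned A illustrates the instability described above.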

Exercise Show that the least squares solution of Ax = b is given by $\bar{x} = R^{-1} Q^T b$, where A = QR is the QR decomposition of A.

1.2.3. Householder reflections. A numerically stable way to perform the QR decomposition is via the Householder transformations, or reflections.

Definition 3. A Householder transformation (or reflection) is a matrix of the form
$$P = I - 2uu^T, \quad \text{where} \quad \|u\| = 1.$$

Exercise Show that P is both symmetric and orthogonal, i.e., $P = P^T$ and $PP^T = I$.

The Householder transformation is called a reflection because Px is the reflection of x with respect to the plane normal to u. Given a vector x, we can find u such that $P = I - 2uu^T$ zeros out all entries of x except the first one, i.e.,
$$Px = [c, 0, \dots, 0]^T = ce_1.$$
We do it as follows. Write
$$Px = x - 2u(u^T x) = ce_1, \quad \text{hence} \quad u = \frac{x - ce_1}{2u^T x},$$
i.e., u is a linear combination of x and $e_1$. Since $\|Px\| = \|x\|$, we have $|c| = \|x\|$, so u is parallel to
$$\tilde{u} = x \pm \|x\| e_1, \quad \text{with} \quad u = \frac{\tilde{u}}{\|\tilde{u}\|}.$$
One can verify that either choice of sign yields a u satisfying $Px = ce_1$ as long as $\tilde{u} \neq 0$. We will stick with $\tilde{u} = x + \mathrm{sign}(x_1)\|x\| e_1$, which avoids cancellation. In summary we have
(4)
$$\tilde{u} = \begin{pmatrix} x_1 + \mathrm{sign}(x_1)\|x\| \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad \text{with} \quad u = \frac{\tilde{u}}{\|\tilde{u}\|}.$$
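A small Matlab sketch of Eq. (4) (the helper name house is ours; note that Matlab's sign(0) = 0, so the sketch assumes x(1) ≠ 0, whereas the convention sign(0) = 1 would also handle that case):

function u = house(x)
% Householder vector of Eq. (4): P = I - 2*u*u' maps x to c*e_1.
utilde = x;
utilde(1) = x(1) + sign(x(1))*norm(x);   % sign choice avoids cancellation
u = utilde/norm(utilde);
end

For example:

x = [3; 1; 2]; u = house(x);
(eye(3) - 2*(u*u'))*x        % approximately [-norm(x); 0; 0]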

Example We will show how to compute the QR decomposition of a 5 × 4 matrix A using the Householder reflections. Here x denotes a generic nonzero entry.

(1) Choose $P_1$ so that
$$A_1 := P_1 A = \begin{bmatrix} x & x & x & x \\ 0 & x & x & x \\ 0 & x & x & x \\ 0 & x & x & x \\ 0 & x & x & x \end{bmatrix}.$$
(2) Choose $P_2 = \begin{bmatrix} 1 & 0 \\ 0 & \tilde{P}_2 \end{bmatrix}$ so that
$$A_2 := P_2 A_1 = \begin{bmatrix} x & x & x & x \\ 0 & x & x & x \\ 0 & 0 & x & x \\ 0 & 0 & x & x \\ 0 & 0 & x & x \end{bmatrix}.$$
(3) Choose $P_3 = \begin{bmatrix} I_{2 \times 2} & 0 \\ 0 & \tilde{P}_3 \end{bmatrix}$ so that
$$A_3 := P_3 A_2 = \begin{bmatrix} x & x & x & x \\ 0 & x & x & x \\ 0 & 0 & x & x \\ 0 & 0 & 0 & x \\ 0 & 0 & 0 & x \end{bmatrix}.$$
(4) Choose $P_4 = \begin{bmatrix} I_{3 \times 3} & 0 \\ 0 & \tilde{P}_4 \end{bmatrix}$ so that
$$A_4 := P_4 A_3 = \begin{bmatrix} x & x & x & x \\ 0 & x & x & x \\ 0 & 0 & x & x \\ 0 & 0 & 0 & x \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
Now we need to understand how to extract the matrices Q and R. We have $A_4 = P_4 P_3 P_2 P_1 A$. Therefore,
$$A = (P_4 P_3 P_2 P_1)^{-1} A_4 = P_1^{-1} P_2^{-1} P_3^{-1} P_4^{-1} A_4.$$
Since the $P_j$'s, $j = 1, 2, 3, 4$, are symmetric and orthogonal,
$$A = P_1^{-1} P_2^{-1} P_3^{-1} P_4^{-1} A_4 = P_1^T P_2^T P_3^T P_4^T A_4 = P_1 P_2 P_3 P_4 A_4.$$
On the other hand, A = QR. Hence R is the first 4 rows of $A_4$ and Q is the first 4 columns of $P_1^T P_2^T P_3^T P_4^T = P_1 P_2 P_3 P_4$.

In Matlab, the least squares solution of Ax = b is found by A\b. The QR decomposition per se can be obtained by [Q,R]=qr(A).

References

[1] James W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997.
