# The Power Method for Eigenvalues and Eigenvectors


Massoud Malek, *Numerical Analysis*

The spectrum of a square matrix $A$, denoted by $\sigma(A)$, is the set of all eigenvalues of $A$. The spectral radius of $A$, denoted by $\rho(A)$, is defined as

$$\rho(A) = \max\{\, |\lambda| : \lambda \in \sigma(A) \,\}.$$

An eigenvalue of $A$ that is larger in absolute value than any other eigenvalue is called the dominant eigenvalue; a corresponding eigenvector is called a dominant eigenvector.

The most commonly encountered vector norm (often simply called the norm of a vector) is the Euclidean norm, given by

$$\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.$$

The max norm, also known as the infinity norm of a vector, is the largest component of the vector in absolute value:

$$\|x\|_\infty = \max\{\, |x_1|, |x_2|, \ldots, |x_n| \,\}.$$

An eigenvector $v$ is said to be normalized if its coordinates are divided by its norm. If the max norm is used, then clearly the coordinate of largest magnitude is equal to one.

**Theorem 1.** If $u$ is an eigenvector of a matrix $A$, then its corresponding eigenvalue is given by

$$\lambda = \frac{u^* A u}{u^* u},$$

where $u^*$ is the conjugate transpose of $u$. This quotient is called the Rayleigh quotient.

*Proof.* Since $u$ is an eigenvector of $A$, we know that $Au = \lambda u$, and we can write

$$\frac{u^* A u}{u^* u} = \frac{u^* (\lambda u)}{u^* u} = \lambda \, \frac{u^* u}{u^* u} = \lambda \cdot 1 = \lambda. \qquad \blacksquare$$
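As a quick numerical check of Theorem 1, the following NumPy sketch recovers an eigenvalue from a known eigenvector via the Rayleigh quotient; the matrix and vector are made-up examples, not from the notes:

```python
import numpy as np

def rayleigh_quotient(A, u):
    """Rayleigh quotient u* A u / (u* u); np.vdot conjugates its first argument."""
    return np.vdot(u, A @ u) / np.vdot(u, u)

# Made-up check: u = (1, 1) is an eigenvector of A with eigenvalue 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
u = np.array([1.0, 1.0])
print(rayleigh_quotient(A, u))  # 3.0
```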

**Theorem 2.** Let $\lambda_1, \lambda_2, \ldots, \lambda_m$ be the $m$ eigenvalues (counted with multiplicity) of the $n \times n$ matrix $A$, and let $v_1, v_2, \ldots, v_m$ be the corresponding eigenvectors. Suppose that $\lambda_1$ is the dominant eigenvalue, so that $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_m|$, and that $x_0$ lies in the subspace generated by the eigenvectors, i.e.,

$$x_0 = c_1 v_1 + c_2 v_2 + \cdots + c_m v_m \qquad\text{with } c_1 \ne 0.$$

Then $x_k = \dfrac{A^k x_0}{\|A^k x_0\|}$ converges to (a multiple of) the dominant eigenvector $v_1$.

*Proof.* We have

$$A^k x_0 = c_1 A^k v_1 + c_2 A^k v_2 + \cdots + c_m A^k v_m = c_1 \lambda_1^k v_1 + c_2 \lambda_2^k v_2 + \cdots + c_m \lambda_m^k v_m = c_1 \lambda_1^k \left[\, v_1 + \frac{c_2}{c_1} \left(\frac{\lambda_2}{\lambda_1}\right)^{\!k} v_2 + \cdots + \frac{c_m}{c_1} \left(\frac{\lambda_m}{\lambda_1}\right)^{\!k} v_m \,\right].$$

The expression within brackets converges to $v_1$ because $|\lambda_j / \lambda_1| < 1$ for $j > 1$. Thus $x_k = A^k x_0 / \|A^k x_0\|$ converges to a dominant eigenvector. $\blacksquare$

**Remark 1.** The convergence is geometric, with ratio $|\lambda_2 / \lambda_1|$. Thus, the method converges slowly if $\lambda_2$ is close in magnitude to the dominant eigenvalue $\lambda_1$.

Given an $n \times n$ matrix $A$, for $i = 1, 2, \ldots, n$, define

$$R_i(A) = \sum_{j=1}^{n} |a_{ij}| \qquad\text{and}\qquad r_i(A) = R_i(A) - |a_{ii}|.$$

**Lévy–Desplanques theorem.** If the matrix $A$ is strictly diagonally dominant, that is, $|a_{ii}| > r_i(A)$ for all $i = 1, 2, \ldots, n$, then $A$ is invertible.

*Proof.* Suppose $\det(A) = 0$. Then $Au = \theta$ for some nonzero vector $u = (u_1, u_2, \ldots, u_n)^t$. Let $k$ be an index for which $|u_k| \ge |u_i|$ for all $i = 1, 2, \ldots, n$. The $k$-th equation of $Au = \theta$ gives $a_{kk} u_k = -\sum_{j \ne k} a_{kj} u_j$, so

$$|a_{kk}| \, |u_k| = \Big|\sum_{j \ne k} a_{kj} u_j\Big| \le \sum_{j \ne k} |a_{kj}| \, |u_j| \le |u_k| \, r_k(A).$$

Dividing by $|u_k| > 0$ gives $|a_{kk}| \le r_k(A)$, which is a contradiction with $|a_{kk}| > r_k(A)$. $\blacksquare$
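A small sketch of the strict diagonal dominance test used in the Lévy–Desplanques theorem; the helper function and the example matrix are mine, not from the notes:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > r_i(A) = sum over j != i of |a_ij|, for every row i."""
    abs_A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(abs_A)                 # |a_ii|
    r = abs_A.sum(axis=1) - diag          # off-diagonal row sums r_i(A)
    return bool(np.all(diag > r))

# Made-up example: strictly diagonally dominant, hence invertible.
A = np.array([[5.0, 1.0, 2.0],
              [1.0, 4.0, 1.0],
              [0.0, 2.0, 6.0]])
print(is_strictly_diagonally_dominant(A))  # True
```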

**Gerschgorin's disks theorem.** The eigenvalues of $A$ lie in the union of the disks $D_i(a_{ii}, r_i(A))$, centered at $a_{ii}$ with radius $r_i(A)$. Moreover, the union of any $k$ of these disks that do not intersect the remaining $n - k$ disks contains precisely $k$ of the eigenvalues (counting multiplicities).

*Proof.* We only prove the first part. Let $\lambda_k$ be an eigenvalue of $A$; then $\det(A - \lambda_k I_n) = 0$, so by the Lévy–Desplanques theorem the matrix $A - \lambda_k I_n$ is not strictly diagonally dominant. We conclude that $|\lambda_k - a_{ii}| \le r_i(A)$ for at least one $i$. $\blacksquare$

Since $A$ and $A^t$ have the same set of eigenvalues, one can state a similar theorem using column sums instead of row sums.

**Example 1.** Consider a symmetric $3 \times 3$ matrix whose Gerschgorin disks give the intervals $[1, 5]$, $[1, 5]$, and $[6, 12]$. The eigenvalues are real, and according to the above theorem there are two eigenvalues between $1$ and $5$, while the dominant eigenvalue lies in $[6, 12]$.

The spectral norm of an $m \times n$ matrix $A$, denoted by $\|A\|_2$, is the square root of the maximum eigenvalue of the positive semi-definite matrix $A^* A$ or $A A^*$ (we choose the one of smaller size):

$$\|A\|_2 = \sqrt{\rho(A^* A)} = \sqrt{\rho(A A^*)}.$$

It is often referred to as the matrix norm. So in order to find the spectral norm of an $m \times n$ matrix $A$, we need to find the dominant eigenvalue of $A A^*$ or $A^* A$.
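A sketch of how the Gerschgorin intervals and the spectral norm might be computed. The matrix below is a hypothetical symmetric matrix chosen so that its disks reproduce the intervals quoted in Example 1; it is an illustration, not the original example matrix:

```python
import numpy as np

def gerschgorin_intervals(A):
    """For each row i, return the interval [a_ii - r_i(A), a_ii + r_i(A)]."""
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)   # r_i(A)
    return list(zip(centers - radii, centers + radii))

# Hypothetical symmetric matrix whose disks are [1, 5], [1, 5] and [6, 12].
A = np.array([[3.0, 0.5, 1.5],
              [0.5, 3.0, 1.5],
              [1.5, 1.5, 9.0]])
print(gerschgorin_intervals(A))   # [(1.0, 5.0), (1.0, 5.0), (6.0, 12.0)]

# Spectral norm as the square root of the dominant eigenvalue of A^t A:
print(np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A))))
```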

## Power Method

This method is a simple iterative technique for obtaining the dominant eigenvalue and the corresponding dominant eigenvector. It does not compute a matrix decomposition, and hence it can be used when $A$ is a very large sparse matrix. However, it may converge only slowly.

The power iteration starts with a vector $x_0$, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the iteration

$$x_{k+1} = \frac{A x_k}{\|A x_k\|_\infty}.$$

So, at every iteration, the vector $x_k$ is multiplied by the matrix $A$ and normalized by the max norm. Convergence is guaranteed under the following conditions:

a. $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues;

b. the starting vector $x_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue (see Theorem 2).

**Remark 2.** It is known that any matrix with positive entries has a unique dominant eigenvalue. Most non-negative matrices with few zero entries also have a unique dominant eigenvalue. These types of matrices are therefore good candidates for the power method.

**Algorithm.** The algorithm for the power method is very simple (a runnable sketch follows):

- Choose an initial guess $x_0$, a maximum number of iterations $N$, and a tolerance $Tol$; set $k = 1$.
- While $k \le N$: set $y_k = A x_{k-1}$; set $\alpha_k = \|y_k\|_\infty$; set $x_k = y_k / \alpha_k$; if $|\alpha_k - \alpha_{k-1}| \le Tol$, set $k = N + 1$; else set $k = k + 1$.
- Display the dominant eigenvector $u_1 \approx x_k$ and the dominant eigenvalue $\lambda_1 \approx \dfrac{x_k^* A x_k}{x_k^* x_k}$.
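A minimal NumPy sketch of the algorithm above; the function name and stopping details are mine, not from the notes:

```python
import numpy as np

def power_method(A, x0, N=100, tol=1e-8):
    """Power method with max-norm scaling (see the algorithm above).

    Returns a Rayleigh-quotient estimate of the dominant eigenvalue
    and an eigenvector estimate whose largest entry has magnitude 1."""
    x = np.asarray(x0, dtype=float)
    alpha_prev = np.inf
    for _ in range(N):
        y = A @ x                        # y_k = A x_{k-1}
        alpha = np.abs(y).max()          # alpha_k = ||y_k||_inf
        x = y / alpha                    # x_k = y_k / alpha_k
        if abs(alpha - alpha_prev) <= tol:
            break                        # scaling factors have settled
        alpha_prev = alpha
    lam = (x @ A @ x) / (x @ x)          # Rayleigh quotient (Theorem 1)
    return lam, x
```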

**Example 2.** Calculate seven iterations of the power method to approximate a dominant eigenvector of the matrix

$$A = \begin{pmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{pmatrix} \qquad\text{with}\qquad x_0 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$

*Solution.* Let

$$y_1 = A x_0 = \begin{pmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \\ 5 \end{pmatrix}.$$

Then $\alpha_1 = \|y_1\|_\infty = 5$, and we obtain

$$x_1 = \frac{y_1}{\|y_1\|_\infty} = \frac{1}{5} \begin{pmatrix} 3 \\ 1 \\ 5 \end{pmatrix} = \begin{pmatrix} 0.60 \\ 0.20 \\ 1.00 \end{pmatrix}.$$

A second iteration yields $y_2 = (1.00, 1.00, 2.20)^t$ with $\alpha_2 = \|y_2\|_\infty = 2.20$ and $x_2 = (0.45, 0.45, 1.00)^t$. Continuing this process, we obtain the sequence of approximations shown below:

| $k$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| $\alpha_k$ | $5.000$ | $2.200$ | $2.818$ | $3.129$ | $3.021$ | $2.986$ | $2.998$ |
| $x_k^t$ | $(0.600, 0.200, 1)$ | $(0.455, 0.455, 1)$ | $(0.484, 0.548, 1)$ | $(0.505, 0.505, 1)$ | $(0.502, 0.495, 1)$ | $(0.499, 0.499, 1)$ | $(0.500, 0.501, 1)$ |

Using the Rayleigh quotient (Theorem 1), we approximate the dominant eigenvalue of $A$ to be $\lambda_1 \approx 3.00$.
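Assuming the `power_method` sketch above and the matrix of Example 2, the computation can be reproduced as follows:

```python
import numpy as np

A = np.array([[ 1.0, 2.0, 0.0],
              [-2.0, 1.0, 2.0],
              [ 1.0, 3.0, 1.0]])
x0 = np.array([1.0, 1.0, 1.0])

lam, v = power_method(A, x0, N=7, tol=0.0)  # run exactly seven iterations
print(lam)  # roughly 3.00
print(v)    # roughly (0.5, 0.5, 1.0)
```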

**When can the power method fail?** In principle, the approximations $x_k$ approach the dominant eigenvector $v_1$, while the scaling factors $\alpha_k$ tend to the corresponding eigenvalue $\lambda_1$ as the number of iterations $k$ increases. This can fail if there is no single largest eigenvalue: if $|\lambda_2| = |\lambda_1|$, then in general the power method does not converge at all (see Theorem 2). Since we do not know the eigenvalues of $A$ in advance, we cannot know in advance whether the power method will work.

Recall that the eigenvalues of a real matrix $A$ are in general complex, and occur in conjugate pairs (which necessarily have the same absolute value). So if the largest eigenvalue of $A$ is not real (and there is no reason why it should be), then the power method will certainly fail. For this reason it is sensible to apply the power method only to matrices whose eigenvalues are known to be real, for example symmetric matrices. For such matrices we could consider ourselves unlucky if the two largest eigenvalues happened to have the same absolute value, i.e. if $|\lambda_1| = |\lambda_2|$.

As we explained in the proof of Theorem 2, when the power method does converge, the rate at which it does so is determined by the ratio $|\lambda_2 / \lambda_1|$. Thus if $|\lambda_2|$ is only slightly smaller than $|\lambda_1|$, the method converges very slowly, and a large number of iterations will be required to obtain decent accuracy.

Another point worth making is that it is not strictly true that the power method necessarily converges to the largest eigenvalue $\lambda_1$. This fails in the exceptional case $c_1 = 0$ (see Theorem 2), that is, when the initial guess $x_0$ contains no component of the dominant eigenvector $v_1$. In this case, the method converges to the largest eigenvalue whose eigenvector appears in the decomposition of $x_0$. If there are two such eigenvalues with the same absolute value, then the method does not converge at all. Of course, it is extremely unlikely in general for $c_1$ to be zero. In practice we can help avoid this unfortunate possibility by filling $x_0$ with random entries rather than something obvious like $x_0 = (1, 1, \ldots, 1)^t$. For instance, if the dominant eigenvector is $v_1 = (1, 0, 0, \ldots, 0)^t$ and we choose the initial vector $x_0 = (0, 1, 0, \ldots, 0)^t$, then clearly we shall not be able to obtain the dominant eigenvector.

## Inverse Power Method

This method operates with $A^{-1}$ rather than $A$: we replace the step $y_k = A x_{k-1}$ with $y_k = A^{-1} x_{k-1}$ (if $A^{-1}$ is known), or we solve the system $A y_k = x_{k-1}$. Since the eigenvalues of $A^{-1}$ are $1/\lambda_1, 1/\lambda_2, \ldots, 1/\lambda_m$, the inverse power method should converge to $1/\lambda_m$, the largest eigenvalue of $A^{-1}$. Thus this method gives a way of finding the smallest (in absolute value) eigenvalue of a matrix; a sketch appears after the spectral shift below.

## Spectral Shift

This method uses the fact that the eigenvalues of $A - \alpha I_n$ are

$$\lambda_1 - \alpha,\ \lambda_2 - \alpha,\ \ldots,\ \lambda_m - \alpha.$$

Thus, having computed the eigenvalue $\lambda_1$, we start the method over again using the shifted matrix $A - \lambda_1 I_n$. This reduces the eigenvalue we have already determined to zero, and the method now converges to the largest in absolute value of

$$\lambda_2 - \lambda_1,\ \lambda_3 - \lambda_1,\ \ldots,\ \lambda_m - \lambda_1.$$

A spectral shift always enables us to find at least one more eigenvalue beyond the dominant one. However, it is not clear how it could be implemented in general to find all the eigenvalues of an $n \times n$ matrix $A$. A related technique, deflation, works only for symmetric matrices but is somewhat more subtle, since it shifts only one eigenvalue at a time to zero, leaving the others unchanged.
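The following sketch implements the inverse power method by solving $A y_k = x_{k-1}$ at each step; the variable names and stopping test are mine, mirroring the power-method algorithm above:

```python
import numpy as np

def inverse_power_method(A, x0, N=100, tol=1e-10):
    """Power iteration applied to A^{-1}: solve A y_k = x_{k-1} each step.

    Under the assumptions of Theorem 2 (applied to A^{-1}), the returned
    Rayleigh quotient approximates the eigenvalue of A of smallest
    absolute value."""
    x = np.asarray(x0, dtype=float)
    alpha_prev = np.inf
    for _ in range(N):
        y = np.linalg.solve(A, x)        # y_k = A^{-1} x_{k-1}, without forming A^{-1}
        alpha = np.abs(y).max()          # max-norm scaling factor
        x = y / alpha
        if abs(alpha - alpha_prev) <= tol:
            break
        alpha_prev = alpha
    return (x @ A @ x) / (x @ x), x      # eigenvalue of A itself, eigenvector
```

For the spectral shift, the `power_method` sketch can be run directly on the shifted matrix, e.g. `power_method(A - lam1 * np.eye(A.shape[0]), x0)`; adding $\lambda_1$ back to the returned eigenvalue recovers an eigenvalue of $A$.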

## Symmetric Power Method

This method is often used to approximate the dominant eigenvalue and eigenvector of a symmetric matrix. The vector norm used in this method is the Euclidean norm ($L_2$-norm) instead of the max norm.

**Algorithm.** (A runnable sketch follows.)

- Choose a maximum number of iterations $N$ and a tolerance $Tol$; then start with an initial guess $x_0$; set $k = 1$.
- While $k \le N$: set $y_k = A x_{k-1}$; set $\alpha_k = \|y_k\|_2$; set $x_k = y_k / \alpha_k$; if $|\alpha_k - \alpha_{k-1}| \le Tol$, set $k = N + 1$; else set $k = k + 1$.
- Display the dominant eigenvector $u_1 \approx x_k$ and the dominant eigenvalue $\lambda_1 \approx \dfrac{x_k^t A x_k}{x_k^t x_k}$.
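A minimal sketch of this algorithm, with Euclidean-norm scaling and a Rayleigh-quotient eigenvalue estimate at the end; the names are mine:

```python
import numpy as np

def symmetric_power_method(A, x0, N=100, tol=1e-3):
    """Symmetric power method: power iteration with 2-norm scaling."""
    x = np.asarray(x0, dtype=float)
    alpha_prev = np.inf
    for _ in range(N):
        y = A @ x                        # y_k = A x_{k-1}
        alpha = np.linalg.norm(y)        # alpha_k = ||y_k||_2
        x = y / alpha                    # x_k is now a unit vector
        if abs(alpha - alpha_prev) <= tol:
            break
        alpha_prev = alpha
    lam = (x @ A @ x) / (x @ x)          # Rayleigh quotient (Theorem 1)
    return lam, x
```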

**Example 3.** With a tolerance of $Tol = 0.001$, use the symmetric power method to approximate a dominant eigenvector of the matrix

$$A = \begin{pmatrix} 4 & -1 & 1 \\ -1 & 3 & -2 \\ 1 & -2 & 3 \end{pmatrix} \qquad\text{with}\qquad x_0 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.$$

*Solution.* Let

$$y_1 = A x_0 = \begin{pmatrix} 4 & -1 & 1 \\ -1 & 3 & -2 \\ 1 & -2 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 4 \\ -1 \\ 1 \end{pmatrix}.$$

Then $\mu_1 = x_0^t y_1 = 4$ and $\alpha_1 = \|y_1\|_2 = \sqrt{18}$; we obtain

$$x_1 = \frac{y_1}{\|y_1\|_2} = \frac{1}{\sqrt{18}} \begin{pmatrix} 4 \\ -1 \\ 1 \end{pmatrix} \approx \begin{pmatrix} 0.9428 \\ -0.2357 \\ 0.2357 \end{pmatrix}.$$

A second iteration yields $y_2 \approx (4.2426, -2.1213, 2.1213)^t$ with $\mu_2 = x_1^t y_2 = 5$, $\alpha_2 = \|y_2\|_2 = \sqrt{27}$, and $x_2 \approx (0.8165, -0.4082, 0.4082)^t$. Continuing this process, we obtain the sequence of approximations shown below:

| Iteration $k$ | Eigenvalue $\mu_k$ | Eigenvector $x_k^t$ |
|---|---|---|
| 1 | 4.0000 | $(0.9428, -0.2357, 0.2357)$ |
| 2 | 5.0000 | $(0.8165, -0.4082, 0.4082)$ |
| 3 | 5.6667 | $(0.7107, -0.4975, 0.4975)$ |
| 4 | 5.9091 | $(0.6470, -0.5392, 0.5392)$ |
| 5 | 5.9767 | $(0.6128, -0.5588, 0.5588)$ |
| 6 | 5.9942 | $(0.5952, -0.5682, 0.5682)$ |
| 7 | 5.9985 | $(0.5863, -0.5728, 0.5728)$ |
| 8 | 5.9996 | $(0.5819, -0.5751, 0.5751)$ |
| 9 | 5.9999 | $(0.5796, -0.5762, 0.5762)$ |

We conclude that the dominant eigenvalue and eigenvector are $\lambda_1 = 6$ and $v_1 = (1, -1, 1)^t$ (up to normalization), respectively. To obtain the same accuracy with the power method, we would need 19 iterations.
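Assuming the `symmetric_power_method` sketch above and the matrix of Example 3, the result can be checked numerically:

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  3.0, -2.0],
              [ 1.0, -2.0,  3.0]])
x0 = np.array([1.0, 0.0, 0.0])

lam, v = symmetric_power_method(A, x0, tol=0.001)
print(lam)             # roughly 6.0
print(v * np.sqrt(3))  # roughly (1, -1, 1)
```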
