Matrix Norms. Tom Lyche. September 28, 2009. Centre of Mathematics for Applications, Department of Informatics, University of Oslo


1 Matrix Norms

Tom Lyche
Centre of Mathematics for Applications, Department of Informatics, University of Oslo
September 28, 2009
2 Matrix Norms

We consider matrix norms on $(\mathbb{C}^{m,n}, \mathbb{C})$. All results hold for $(\mathbb{R}^{m,n}, \mathbb{R})$.

Definition (Matrix Norms). A function $\|\cdot\| : \mathbb{C}^{m,n} \to \mathbb{R}$ is called a matrix norm on $\mathbb{C}^{m,n}$ if for all $A, B \in \mathbb{C}^{m,n}$ and all $\alpha \in \mathbb{C}$:
1. $\|A\| \ge 0$ with equality if and only if $A = 0$ (positivity).
2. $\|\alpha A\| = |\alpha| \, \|A\|$ (homogeneity).
3. $\|A + B\| \le \|A\| + \|B\|$ (subadditivity).

A matrix norm is simply a vector norm on the finite dimensional vector space $(\mathbb{C}^{m,n}, \mathbb{C})$ of $m \times n$ matrices.
3 Equivalent norms

Adapting some general results on vector norms to matrix norms gives:

Theorem
1. All matrix norms are equivalent. Thus, if $\|\cdot\|$ and $\|\cdot\|'$ are two matrix norms on $\mathbb{C}^{m,n}$, then there are positive constants $\mu$ and $M$ such that $\mu \|A\| \le \|A\|' \le M \|A\|$ holds for all $A \in \mathbb{C}^{m,n}$.
2. A matrix norm is a continuous function $\|\cdot\| : \mathbb{C}^{m,n} \to \mathbb{R}$.
4 Examples

From any vector norm $\|\cdot\|_V$ on $\mathbb{C}^{mn}$ we can define a matrix norm on $\mathbb{C}^{m,n}$ by $\|A\| := \|\mathrm{vec}(A)\|_V$, where $\mathrm{vec}(A) \in \mathbb{C}^{mn}$ is the vector obtained by stacking the columns of $A$ on top of each other.

$\|A\|_S := \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|$, ($p = 1$, sum norm),
$\|A\|_F := \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2}$, ($p = 2$, Frobenius norm),
$\|A\|_M := \max_{i,j} |a_{ij}|$, ($p = \infty$, max norm). (1)
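The three norms in (1) are straightforward to compute directly. The sketch below (NumPy, with helper names of my own choosing) evaluates them on a small real matrix and checks the Frobenius value against NumPy's built-in version.

```python
import numpy as np

def sum_norm(A):
    # ||A||_S: sum of the absolute values of all entries (p = 1 on vec(A))
    return np.sum(np.abs(A))

def frobenius_norm(A):
    # ||A||_F: square root of the sum of squared moduli (p = 2 on vec(A))
    return np.sqrt(np.sum(np.abs(A) ** 2))

def max_norm(A):
    # ||A||_M: largest entry in absolute value (p = infinity on vec(A))
    return np.max(np.abs(A))

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(sum_norm(A))        # 1 + 2 + 3 + 4 = 10
print(frobenius_norm(A))  # sqrt(1 + 4 + 9 + 16) = sqrt(30)
print(max_norm(A))        # 4
```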
5 The Frobenius Matrix Norm

1. $\|A^H\|_F^2 = \sum_{j=1}^{n} \sum_{i=1}^{m} |a_{ij}|^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 = \|A\|_F^2$.
6 The Frobenius Matrix Norm

2. $\|A\|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 = \sum_{i=1}^{m} \|a_{i\cdot}\|_2^2 = \sum_{j=1}^{n} \|a_{\cdot j}\|_2^2$, i.e., the squared Frobenius norm equals the sum of the squared 2-norms of the rows, and also the sum of the squared 2-norms of the columns.
7 Unitary Invariance

If $A \in \mathbb{C}^{m,n}$ and $U \in \mathbb{C}^{m,m}$, $V \in \mathbb{C}^{n,n}$ are unitary, then
$\|UA\|_F^2 \overset{2.}{=} \sum_{j=1}^{n} \|U a_j\|_2^2 = \sum_{j=1}^{n} \|a_j\|_2^2 = \|A\|_F^2$,
$\|AV\|_F \overset{1.}{=} \|V^H A^H\|_F = \|A^H\|_F \overset{1.}{=} \|A\|_F$.
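A quick numerical sanity check of unitary invariance; this is a sketch in which the random unitaries are built via QR factorization of random complex matrices (one standard way to obtain them).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# Unitary U (4x4) and V (3x3) from QR factorizations of random complex matrices.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

fro = lambda M: np.linalg.norm(M, 'fro')
# All four values agree up to round-off:
print(fro(A), fro(U @ A), fro(A @ V), fro(U @ A @ V))
```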
8 Submultiplicativity

Suppose $A$, $B$ are rectangular matrices so that the product $AB$ is defined. With $a_i^T$ the rows of $A$ and $b_j$ the columns of $B$, the Cauchy–Schwarz inequality gives
$\|AB\|_F^2 = \sum_{i} \sum_{j} |a_i^T b_j|^2 \le \sum_{i} \sum_{j} \|a_i\|_2^2 \, \|b_j\|_2^2 = \|A\|_F^2 \, \|B\|_F^2$.
9 Subordinance

$\|Ax\|_2 \le \|A\|_F \, \|x\|_2$ for all $x \in \mathbb{C}^n$. Since $\|v\|_F = \|v\|_2$ for a vector, this follows from submultiplicativity.
10 Explicit Expression

Let $A \in \mathbb{C}^{m,n}$ have singular values $\sigma_1, \ldots, \sigma_n$ and SVD $A = U \Sigma V^H$. Then
$\|A\|_F \overset{3.}{=} \|U^H A V\|_F = \|\Sigma\|_F = \sqrt{\sigma_1^2 + \cdots + \sigma_n^2}$.
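This identity is easy to verify numerically with NumPy's SVD (a sketch on a random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

sigma = np.linalg.svd(A, compute_uv=False)   # singular values of A
print(np.linalg.norm(A, 'fro'))              # ||A||_F ...
print(np.sqrt(np.sum(sigma ** 2)))           # ... equals sqrt(sum of sigma_i^2)
```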
11 Consistency

A matrix norm is called consistent on $\mathbb{C}^{n,n}$ if
4. $\|AB\| \le \|A\| \, \|B\|$ (submultiplicativity)
holds for all $A, B \in \mathbb{C}^{n,n}$. A matrix norm is consistent if it is defined on $\mathbb{C}^{m,n}$ for all $m, n \in \mathbb{N}$, and 4. holds for all matrices $A$, $B$ for which the product $AB$ is defined.

The Frobenius norm is consistent.
The sum norm is consistent.
The max norm is not consistent.
The norm $\|A\| := \sqrt{mn} \, \|A\|_M$, $A \in \mathbb{C}^{m,n}$, is consistent.
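The failure of consistency for the max norm already shows up for the all-ones $2 \times 2$ matrix, and the scaled variant repairs it in this case. A small illustration (the helper names are my own):

```python
import numpy as np

max_norm = lambda M: np.max(np.abs(M))

A = np.ones((2, 2))
B = np.ones((2, 2))
# AB = 2 * ones, so ||AB||_M = 2 while ||A||_M * ||B||_M = 1:
# submultiplicativity fails for the max norm.
print(max_norm(A @ B), max_norm(A) * max_norm(B))

# The scaled norm sqrt(mn) * ||A||_M satisfies it for the same example:
scaled = lambda M: np.sqrt(M.shape[0] * M.shape[1]) * max_norm(M)
print(scaled(A @ B), scaled(A) * scaled(B))   # 4.0 <= 4.0
```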
12 Subordinate Matrix Norm

Definition. Suppose $m, n \in \mathbb{N}$ are given, let $\|\cdot\|_\alpha$ on $\mathbb{C}^m$ and $\|\cdot\|_\beta$ on $\mathbb{C}^n$ be vector norms, and let $\|\cdot\|$ be a matrix norm on $\mathbb{C}^{m,n}$. We say that the matrix norm $\|\cdot\|$ is subordinate to the vector norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ if $\|Ax\|_\alpha \le \|A\| \, \|x\|_\beta$ for all $A \in \mathbb{C}^{m,n}$ and all $x \in \mathbb{C}^n$. If $\alpha = \beta$ then we say that $\|\cdot\|$ is subordinate to $\|\cdot\|_\alpha$.

The Frobenius norm is subordinate to the Euclidean vector norm.
The sum norm is subordinate to the $\ell_1$-norm.
$\|Ax\|_\infty \le \|A\|_M \, \|x\|_1$.
13 Operator Norm

Definition. Suppose $m, n \in \mathbb{N}$ are given, and let $\|\cdot\|_\alpha$ be a vector norm on $\mathbb{C}^n$ and $\|\cdot\|_\beta$ a vector norm on $\mathbb{C}^m$. For $A \in \mathbb{C}^{m,n}$ we define
$\|A\| := \|A\|_{\alpha,\beta} := \max_{x \ne 0} \frac{\|Ax\|_\beta}{\|x\|_\alpha}$. (2)
We call this the $(\alpha, \beta)$ operator norm, the $(\alpha, \beta)$-norm, or simply the $\alpha$-norm if $\alpha = \beta$.
14 Observations

$\|A\|_{\alpha,\beta} = \max_{x \notin \ker(A)} \frac{\|Ax\|_\beta}{\|x\|_\alpha} = \max_{\|x\|_\alpha = 1} \|Ax\|_\beta$.
$\|Ax\|_\beta \le \|A\| \, \|x\|_\alpha$.
$\|A\|_{\alpha,\beta} = \|Ax^*\|_\beta$ for some $x^* \in \mathbb{C}^n$ with $\|x^*\|_\alpha = 1$; the maximum is attained.
The operator norm is a matrix norm on $\mathbb{C}^{m,n}$.
The sum norm and the Frobenius norm are not $\alpha$ operator norms for any $\alpha$.
15 Operator Norm Properties

The operator norm is a matrix norm on $\mathbb{C}^{m,n}$.
The operator norm is consistent if the vector norm $\|\cdot\|_\alpha$ is defined for all $m \in \mathbb{N}$ and $\beta = \alpha$.
16 Proof

In 2. and 3. below we take the max over the unit sphere $\{x : \|x\|_\alpha = 1\}$.
1. Nonnegativity is obvious. If $\|A\| = 0$ then $\|Ay\|_\beta = 0$ for each $y \in \mathbb{C}^n$. In particular, each column $A e_j$ of $A$ is zero. Hence $A = 0$.
2. $\|cA\| = \max_x \|cAx\|_\beta = \max_x |c| \, \|Ax\|_\beta = |c| \, \|A\|$.
3. $\|A + B\| = \max_x \|(A + B)x\|_\beta \le \max_x \|Ax\|_\beta + \max_x \|Bx\|_\beta = \|A\| + \|B\|$.
4. $\|AB\| = \max_{Bx \ne 0} \frac{\|ABx\|_\alpha}{\|x\|_\alpha} = \max_{Bx \ne 0} \frac{\|ABx\|_\alpha}{\|Bx\|_\alpha} \cdot \frac{\|Bx\|_\alpha}{\|x\|_\alpha} \le \max_{y \ne 0} \frac{\|Ay\|_\alpha}{\|y\|_\alpha} \cdot \max_{x \ne 0} \frac{\|Bx\|_\alpha}{\|x\|_\alpha} = \|A\| \, \|B\|$.
17 The p matrix norm

The operator norms $\|\cdot\|_p$ defined from the $p$-vector norms are of special interest. Recall
$\|x\|_p := \left( \sum_{j=1}^{n} |x_j|^p \right)^{1/p}$, $p \ge 1$, and $\|x\|_\infty := \max_{1 \le j \le n} |x_j|$.
These are used quite frequently for $p = 1, 2, \infty$. We define for any $1 \le p \le \infty$
$\|A\|_p := \max_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p} = \max_{\|y\|_p = 1} \|Ay\|_p$. (3)
The $p$-norms are consistent matrix norms which are subordinate to the $p$-vector norm.
18 Explicit expressions

Theorem. For $A \in \mathbb{C}^{m,n}$ we have
$\|A\|_1 = \max_{1 \le j \le n} \sum_{k=1}^{m} |a_{kj}|$ (max column sum),
$\|A\|_2 = \sigma_1$ (largest singular value of $A$),
$\|A\|_\infty = \max_{1 \le k \le m} \sum_{j=1}^{n} |a_{kj}|$ (max row sum). (4)
$\|A\|_2$ is called the two-norm or the spectral norm of $A$. The explicit expression follows from the min-max theorem for singular values.
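The formulas in (4) can be checked against NumPy's built-in operator norms. A sketch on a small example matrix of my own:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

one_norm = np.max(np.sum(np.abs(A), axis=0))      # max column sum
inf_norm = np.max(np.sum(np.abs(A), axis=1))      # max row sum
two_norm = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value

print(one_norm, np.linalg.norm(A, 1))        # column sums are 4 and 6 -> 6
print(inf_norm, np.linalg.norm(A, np.inf))   # row sums are 3 and 7 -> 7
print(two_norm, np.linalg.norm(A, 2))        # the two computations agree
```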
19 Examples

[Two worked examples computing $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$ and $\|A\|_F$ for two specific matrices; the matrix entries did not survive transcription.]
20 The 2-norm

Theorem. Suppose $A \in \mathbb{C}^{n,n}$ has singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ and eigenvalues $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$. Then
$\|A\|_2 = \sigma_1$ and $\|A^{-1}\|_2 = \frac{1}{\sigma_n}$, (5)
$\|A\|_2 = \lambda_1$ and $\|A^{-1}\|_2 = \frac{1}{\lambda_n}$, if $A$ is symmetric positive definite, (6)
$\|A\|_2 = |\lambda_1|$ and $\|A^{-1}\|_2 = \frac{1}{|\lambda_n|}$, if $A$ is normal. (7)
For the norms of $A^{-1}$ we assume, of course, that $A$ is nonsingular.
21 Unitary Transformations

Definition. A matrix norm $\|\cdot\|$ on $\mathbb{C}^{m,n}$ is called unitary invariant if $\|UAV\| = \|A\|$ for any $A \in \mathbb{C}^{m,n}$ and any unitary matrices $U \in \mathbb{C}^{m,m}$ and $V \in \mathbb{C}^{n,n}$. If $U$ and $V$ are unitary then $U(A + E)V = UAV + F$, where $\|F\| = \|E\|$.

Theorem. The Frobenius norm and the spectral norm are unitary invariant. Moreover, $\|A^H\|_F = \|A\|_F$ and $\|A^H\|_2 = \|A\|_2$.
22 Proof, 2-norm

$\|UA\|_2 = \max_{\|x\|_2 = 1} \|UAx\|_2 = \max_{\|x\|_2 = 1} \|Ax\|_2 = \|A\|_2$.
$\|A^H\|_2 = \|A\|_2$ ($A$ and $A^H$ have the same singular values).
$\|AV\|_2 = \|(AV)^H\|_2 = \|V^H A^H\|_2 = \|A^H\|_2 = \|A\|_2$.
23 Perturbation of linear systems

$x_1 + x_2 = 20$
$x_1 + (\;\;)\,x_2 = (\;\;)$
The exact solution is $x_1 = x_2 = 10$. Suppose we replace the right-hand side of the second equation; the exact solution changes to $x_1 = 30$, $x_2 = -10$. A small change in one of the coefficients changed the exact solution by a large amount. [The specific coefficients of the second equation did not survive transcription.]
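Since the slide's coefficients were lost in transcription, the following hypothetical $2 \times 2$ system (my numbers, not the slide's) reproduces the same phenomenon: the two rows are nearly parallel, so a tiny change in the data moves the solution a long way.

```python
import numpy as np

# Hypothetical ill-conditioned system (illustrative numbers, not the slide's):
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([20.0, 20.001])
x = np.linalg.solve(A, b)
print(x)                   # [10. 10.]

# Change the last right-hand side entry by only 0.001:
y = np.linalg.solve(A, np.array([20.0, 20.002]))
print(y)                   # approximately [0. 20.]

print(np.linalg.cond(A))   # about 4e4: the large condition number explains this
```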
24 Ill-Conditioning

A mathematical problem in which the solution is very sensitive to changes in the data is called ill-conditioned, or sometimes ill-posed. Such problems are difficult to solve on a computer. If at all possible, the mathematical model should be changed to obtain a more well-conditioned or properly-posed problem.
25 Perturbations

We consider what effect a small change (perturbation) in the data $A$, $b$ has on the solution $x$ of a linear system $Ax = b$. Suppose $y$ solves $(A + E)y = b + e$, where $E$ is a (small) $n \times n$ matrix and $e$ a (small) vector. How large can $y - x$ be? To measure this we use vector and matrix norms.
26 Conditions on the norms

$\|\cdot\|$ will denote a vector norm on $\mathbb{C}^n$ and also a submultiplicative matrix norm on $\mathbb{C}^{n,n}$ which in addition is subordinate to the vector norm. Thus for any $A, B \in \mathbb{C}^{n,n}$ and any $x \in \mathbb{C}^n$ we have $\|AB\| \le \|A\| \, \|B\|$ and $\|Ax\| \le \|A\| \, \|x\|$. This is satisfied if the matrix norm is the operator norm corresponding to the given vector norm, or the Frobenius norm.
27 Absolute and relative error

The difference $\|y - x\|$ measures the absolute error in $y$ as an approximation to $x$; $\|y - x\| / \|x\|$ or $\|y - x\| / \|y\|$ is a measure for the relative error.
28 Perturbation in the right-hand side

Theorem. Suppose $A \in \mathbb{C}^{n,n}$ is invertible, $b, e \in \mathbb{C}^n$, $b \ne 0$, and $Ax = b$, $Ay = b + e$. Then
$\frac{1}{K(A)} \frac{\|e\|}{\|b\|} \le \frac{\|y - x\|}{\|x\|} \le K(A) \frac{\|e\|}{\|b\|}$, where $K(A) = \|A\| \, \|A^{-1}\|$. (8)

Consider (8). $\|e\| / \|b\|$ is a measure for the size of the perturbation $e$ relative to the size of $b$. The relative error $\|y - x\| / \|x\|$ can in the worst case be $K(A) = \|A\| \, \|A^{-1}\|$ times as large as $\|e\| / \|b\|$.
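Both sides of inequality (8) can be tested on a random system (a sketch with data of my own; the bounds hold in exact arithmetic, so a little slack is allowed for round-off):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
e = 1e-6 * rng.standard_normal(n)   # small right-hand side perturbation

x = np.linalg.solve(A, b)
y = np.linalg.solve(A, b + e)

K = np.linalg.cond(A, 2)            # K(A) = ||A||_2 * ||A^{-1}||_2
rel_err = np.linalg.norm(y - x) / np.linalg.norm(x)
rel_pert = np.linalg.norm(e) / np.linalg.norm(b)

# Check both sides of (8), with slack for floating-point round-off:
print(rel_pert / K <= rel_err * 1.001 + 1e-14)
print(rel_err <= K * rel_pert * 1.001 + 1e-14)
```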
29 Condition number

$K(A)$ is called the condition number with respect to inversion of a matrix, or just the condition number, if it is clear from the context that we are talking about solving linear systems. The condition number depends on the matrix $A$ and on the norm used. If $K(A)$ is large, $A$ is called ill-conditioned (with respect to inversion). If $K(A)$ is small, $A$ is called well-conditioned (with respect to inversion).
30 Condition number properties

Since $\|A\| \, \|A^{-1}\| \ge \|A A^{-1}\| = \|I\| \ge 1$, we always have $K(A) \ge 1$. Since all matrix norms are equivalent, the dependence of $K(A)$ on the norm chosen is less important than the dependence on $A$. Usually one chooses the spectral norm when discussing properties of the condition number, and the $\ell_1$ and $\ell_\infty$ norms when one wishes to compute it or estimate it.
31 The 2-norm

Suppose $A$ has singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$, and eigenvalues $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$ if $A$ is square.
$K_2(A) = \|A\|_2 \, \|A^{-1}\|_2 = \frac{\sigma_1}{\sigma_n}$,
$K_2(A) = \|A\|_2 \, \|A^{-1}\|_2 = \frac{|\lambda_1|}{|\lambda_n|}$, if $A$ is normal,
$K_2(A) = \|A\|_2 \, \|A^{-1}\|_2 = \frac{\lambda_1}{\lambda_n}$, if $A$ is positive definite.
It follows that $A$ is ill-conditioned with respect to inversion if and only if $\sigma_1 / \sigma_n$ is large, or $|\lambda_1| / |\lambda_n|$ is large when $A$ is normal.
32 The residual

Suppose we have computed an approximate solution $y$ to $Ax = b$. The vector $r(y) := Ay - b$ is called the residual vector, or just the residual. We can bound $x - y$ in terms of $r(y)$.

Theorem. Suppose $A \in \mathbb{C}^{n,n}$, $b \in \mathbb{C}^n$, $A$ is nonsingular and $b \ne 0$. Let $r(y) = Ay - b$ for each $y \in \mathbb{C}^n$. If $Ax = b$ then
$\frac{1}{K(A)} \frac{\|r(y)\|}{\|b\|} \le \frac{\|y - x\|}{\|x\|} \le K(A) \frac{\|r(y)\|}{\|b\|}$. (9)
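Inequality (9) can also be verified numerically. A sketch with random data of my own, where the "approximate solution" is the exact one plus a small perturbation:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x = np.linalg.solve(A, b)

y = x + 1e-5 * rng.standard_normal(4)   # a nearby approximate solution
r = A @ y - b                           # residual r(y) = Ay - b

K = np.linalg.cond(A, 2)
lo = np.linalg.norm(r) / (K * np.linalg.norm(b))
rel = np.linalg.norm(y - x) / np.linalg.norm(x)
hi = K * np.linalg.norm(r) / np.linalg.norm(b)
# The relative error sits between the two residual-based bounds of (9):
print(lo <= rel * 1.001 and rel <= hi * 1.001)
```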
33 Discussion

If $A$ is well-conditioned, (9) says that $\|y - x\| / \|x\| \approx \|r(y)\| / \|b\|$. In other words, the accuracy in $y$ is about the same order of magnitude as the residual, as long as $\|b\| \approx 1$. If $A$ is ill-conditioned, anything can happen. The solution can be inaccurate even if the residual is small. We can have an accurate solution even if the residual is large.
34 The inverse of $A + E$

Theorem. Suppose $A \in \mathbb{C}^{n,n}$ is nonsingular and let $\|\cdot\|$ be a consistent matrix norm on $\mathbb{C}^{n,n}$. If $E \in \mathbb{C}^{n,n}$ is so small that $r := \|A^{-1} E\| < 1$ then $A + E$ is nonsingular and
$\|(A + E)^{-1}\| \le \frac{\|A^{-1}\|}{1 - r}$. (10)
If $r < 1/2$ then
$\frac{\|(A + E)^{-1} - A^{-1}\|}{\|A^{-1}\|} \le 2 K(A) \frac{\|E\|}{\|A\|}$. (11)
35 Proof

We use that if $B \in \mathbb{C}^{n,n}$ and $\|B\| < 1$ then $I - B$ is nonsingular and $\|(I - B)^{-1}\| \le \frac{1}{1 - \|B\|}$. Since $r < 1$, the matrix $I - B := I + A^{-1} E$ is nonsingular. Since $(I - B)^{-1} A^{-1} (A + E) = I$ we see that $A + E$ is nonsingular with inverse $(I - B)^{-1} A^{-1}$. Hence $\|(A + E)^{-1}\| \le \|(I - B)^{-1}\| \, \|A^{-1}\|$ and (10) follows. From the identity $(A + E)^{-1} - A^{-1} = -A^{-1} E (A + E)^{-1}$ we obtain
$\|(A + E)^{-1} - A^{-1}\| \le \|A^{-1}\| \, \|E\| \, \|(A + E)^{-1}\| \le K(A) \frac{\|E\|}{\|A\|} \cdot \frac{\|A^{-1}\|}{1 - r}$.
Dividing by $\|A^{-1}\|$ and using $r < 1/2$, so that $\frac{1}{1 - r} < 2$, proves (11).
36 Perturbation in $A$

Theorem. Suppose $A, E \in \mathbb{C}^{n,n}$, $b \in \mathbb{C}^n$ with $A$ invertible and $b \ne 0$. If $r := \|A^{-1} E\| < 1/2$ for some operator norm then $A + E$ is invertible. If $Ax = b$ and $(A + E)y = b$ then
$\frac{\|y - x\|}{\|y\|} \le \|A^{-1} E\| \le K(A) \frac{\|E\|}{\|A\|}$, (12)
$\frac{\|y - x\|}{\|x\|} \le 2 K(A) \frac{\|E\|}{\|A\|}$. (13)
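Bound (12) can be checked numerically on a random system with a small coefficient perturbation (a sketch with my own data; slack accounts for round-off):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
E = 1e-8 * rng.standard_normal((n, n))   # small perturbation of A

x = np.linalg.solve(A, b)
y = np.linalg.solve(A + E, b)

K = np.linalg.cond(A, 2)
lhs = np.linalg.norm(y - x) / np.linalg.norm(y)
bound = K * np.linalg.norm(E, 2) / np.linalg.norm(A, 2)
print(lhs <= bound * 1.001 + 1e-14)   # relative error obeys (12)
```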
37 Proof

$A + E$ is invertible by the previous theorem. (12) follows easily by taking norms in the equation $x - y = A^{-1} E y$ and dividing by $\|y\|$. From the identity $y - x = \big( (A + E)^{-1} - A^{-1} \big) A x$ we obtain $\|y - x\| \le \|(A + E)^{-1} - A^{-1}\| \, \|A\| \, \|x\|$, and (13) follows from (11).
38 Finding the rank of a matrix

Gauss–Jordan elimination cannot be used to determine rank numerically. Use the singular value decomposition instead: numerically one will normally find $\sigma_n > 0$. Determine the minimal $r$ so that $\sigma_{r+1}, \ldots, \sigma_n$ are close to the round-off unit, and use this $r$ as an estimate for the rank.
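The recipe above can be sketched in a few lines; the tolerance below ties "close to the round-off unit" to the largest singular value (the same idea NumPy's `matrix_rank` uses), and the helper name is my own.

```python
import numpy as np

def numerical_rank(A, tol=None):
    # Rank estimate: count singular values above a tolerance tied to the
    # round-off unit and the largest singular value.
    sigma = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        tol = max(A.shape) * np.finfo(A.dtype).eps * sigma[0]
    return int(np.sum(sigma > tol))

# A 4x3 matrix whose third column is the sum of the first two: true rank is 2.
v1 = np.arange(1.0, 5.0)
v2 = np.ones(4)
B = np.column_stack([v1, v2, v1 + v2])
print(numerical_rank(B))            # 2
print(numerical_rank(np.eye(3)))    # 3
```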
39 Convergence in $\mathbb{R}^{m,n}$ and $\mathbb{C}^{m,n}$

Consider an infinite sequence of matrices $\{A_k\} = A_0, A_1, A_2, \ldots$ in $\mathbb{C}^{m,n}$.
$\{A_k\}$ is said to converge to the limit $A$ in $\mathbb{C}^{m,n}$ if each element sequence $\{A_k(ij)\}_k$ converges to the corresponding element $A(ij)$ for $i = 1, \ldots, m$ and $j = 1, \ldots, n$.
$\{A_k\}$ is a Cauchy sequence if for each $\epsilon > 0$ there is an integer $N \in \mathbb{N}$ such that for each $k, l \ge N$ and all $i, j$ we have $|A_k(ij) - A_l(ij)| \le \epsilon$.
$\{A_k\}$ is bounded if there is a constant $M$ such that $|A_k(ij)| \le M$ for all $i, j, k$.
40 More on Convergence

By stacking the columns of $A$ into a vector in $\mathbb{C}^{mn}$ we obtain:
A sequence $\{A_k\}$ in $\mathbb{C}^{m,n}$ converges to a matrix $A \in \mathbb{C}^{m,n}$ if and only if $\lim_{k \to \infty} \|A_k - A\| = 0$ for any matrix norm $\|\cdot\|$.
A sequence $\{A_k\}$ in $\mathbb{C}^{m,n}$ is convergent if and only if it is a Cauchy sequence.
Every bounded sequence $\{A_k\}$ in $\mathbb{C}^{m,n}$ has a convergent subsequence.
41 The Spectral Radius

$\rho(A) := \max_{\lambda \in \sigma(A)} |\lambda|$.
For any consistent matrix norm $\|\cdot\|$ on $\mathbb{C}^{n,n}$ and any $A \in \mathbb{C}^{n,n}$ we have $\rho(A) \le \|A\|$.
Proof: Let $(\lambda, x)$ be an eigenpair for $A$ and set $X := [x, \ldots, x] \in \mathbb{C}^{n,n}$. Then $\lambda X = AX$, which implies $|\lambda| \, \|X\| = \|\lambda X\| = \|AX\| \le \|A\| \, \|X\|$. Since $\|X\| \ne 0$ we obtain $|\lambda| \le \|A\|$.
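A numerical illustration of $\rho(A) \le \|A\|$ for several consistent norms (a sketch on a random matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))

rho = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius
for norm in (np.linalg.norm(A, 1),           # max column sum
             np.linalg.norm(A, 2),           # spectral norm
             np.linalg.norm(A, np.inf),      # max row sum
             np.linalg.norm(A, 'fro')):      # Frobenius norm
    print(rho <= norm)   # each of these consistent norms dominates rho(A)
```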
42 A Special Norm

Theorem. Let $A \in \mathbb{C}^{n,n}$ and $\epsilon > 0$ be given. There is a consistent matrix norm $\|\cdot\|$ on $\mathbb{C}^{n,n}$ such that $\rho(A) \le \|A\| \le \rho(A) + \epsilon$.
43 A Very Important Result

Theorem. For any $A \in \mathbb{C}^{n,n}$ we have $\lim_{k \to \infty} A^k = 0 \iff \rho(A) < 1$.
Proof: Suppose $\rho(A) < 1$. There is a consistent matrix norm $\|\cdot\|$ on $\mathbb{C}^{n,n}$ such that $\|A\| < 1$. But then $\|A^k\| \le \|A\|^k \to 0$ as $k \to \infty$. Hence $A^k \to 0$. The converse is easier.
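The theorem is about the spectral radius, not about any particular norm of $A$ itself: a matrix can have a large norm and still have its powers die out. A small sketch (the matrix is my own illustrative choice):

```python
import numpy as np

# rho(A) = 0.5 < 1 even though ||A||_2 exceeds 10, so A^k -> 0 anyway.
A = np.array([[0.5, 10.0],
              [0.0,  0.5]])
print(np.max(np.abs(np.linalg.eigvals(A))))   # spectral radius: 0.5
print(np.linalg.norm(A, 2))                   # larger than 10

Ak = np.linalg.matrix_power(A, 60)
print(np.max(np.abs(Ak)))                     # tiny: the powers converge to 0
```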
44 Convergence can be slow

[A numerical example showing a matrix $A$ together with its powers $A^{100}$ and $A^{2000}$, illustrating that the convergence $A^k \to 0$ can be slow; the matrix entries did not survive transcription.]
45 More limits

For any submultiplicative matrix norm $\|\cdot\|$ on $\mathbb{C}^{n,n}$ and any $A \in \mathbb{C}^{n,n}$ we have
$\lim_{k \to \infty} \|A^k\|^{1/k} = \rho(A)$. (14)
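Limit (14) can be watched numerically: $\|A^k\|_2^{1/k}$ drifts toward the spectral radius as $k$ grows (a sketch with an illustrative upper triangular matrix of my own, whose eigenvalues are its diagonal entries):

```python
import numpy as np

A = np.array([[0.9, 5.0],
              [0.0, 0.8]])
# Upper triangular, so the eigenvalues are 0.9 and 0.8 and rho(A) = 0.9.
for k in (1, 10, 100, 1000):
    Ak = np.linalg.matrix_power(A, k)
    print(k, np.linalg.norm(Ak, 2) ** (1.0 / k))   # approaches 0.9
```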
More informationFactorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 3 Linear Least Squares Prof. Michael T. Heath Department of Computer Science University of Illinois at UrbanaChampaign Copyright c 2002. Reproduction
More informationLecture 4: Partitioned Matrices and Determinants
Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying
More informationSolutions to Review Problems
Chapter 1 Solutions to Review Problems Chapter 1 Exercise 42 Which of the following equations are not linear and why: (a x 2 1 + 3x 2 2x 3 = 5. (b x 1 + x 1 x 2 + 2x 3 = 1. (c x 1 + 2 x 2 + x 3 = 5. (a
More informationPresentation 3: Eigenvalues and Eigenvectors of a Matrix
Colleen Kirksey, Beth Van Schoyck, Dennis Bowers MATH 280: Problem Solving November 18, 2011 Presentation 3: Eigenvalues and Eigenvectors of a Matrix Order of Presentation: 1. Definitions of Eigenvalues
More informationLecture 21: The Inverse of a Matrix
Lecture 21: The Inverse of a Matrix Winfried Just, Ohio University October 16, 2015 Review: Our chemical reaction system Recall our chemical reaction system A + 2B 2C A + B D A + 2C 2D B + D 2C If we write
More informationSergei Silvestrov, Christopher Engström, Karl Lundengård, Johan Richter, Jonas Österberg. November 13, 2014
Sergei Silvestrov,, Karl Lundengård, Johan Richter, Jonas Österberg November 13, 2014 Analysis Todays lecture: Course overview. Repetition of matrices elementary operations. Repetition of solvability of
More information5. Orthogonal matrices
L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 51 Orthonormal
More informationMATH 56A SPRING 2008 STOCHASTIC PROCESSES 31
MATH 56A SPRING 2008 STOCHASTIC PROCESSES 3.3. Invariant probability distribution. Definition.4. A probability distribution is a function π : S [0, ] from the set of states S to the closed unit interval
More informationA short proof of Perron s theorem. Hannah Cairns, April 25, 2014.
A short proof of Perron s theorem. Hannah Cairns, April 25, 2014. A matrix A or a vector ψ is said to be positive if every component is a positive real number. A B means that every component of A is greater
More informationComputational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science
Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Definition: A N N matrix A has an eigenvector x (nonzero) with
More informationElementary Row Operations and Matrix Multiplication
Contents 1 Elementary Row Operations and Matrix Multiplication 1.1 Theorem (Row Operations using Matrix Multiplication) 2 Inverses of Elementary Row Operation Matrices 2.1 Theorem (Inverses of Elementary
More informationDETERMINANTS. b 2. x 2
DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in
More informationMatrices, transposes, and inverses
Matrices, transposes, and inverses Math 40, Introduction to Linear Algebra Wednesday, February, 202 Matrixvector multiplication: two views st perspective: A x is linear combination of columns of A 2 4
More informationGRA6035 Mathematics. Eivind Eriksen and Trond S. Gustavsen. Department of Economics
GRA635 Mathematics Eivind Eriksen and Trond S. Gustavsen Department of Economics c Eivind Eriksen, Trond S. Gustavsen. Edition. Edition Students enrolled in the course GRA635 Mathematics for the academic
More informationThe purpose of this section is to derive the asymptotic distribution of the Pearson chisquare statistic. k (n j np j ) 2. np j.
Chapter 7 Pearson s chisquare test 7. Null hypothesis asymptotics Let X, X 2, be independent from a multinomial(, p) distribution, where p is a kvector with nonnegative entries that sum to one. That
More informationv 1. v n R n we have for each 1 j n that v j v n max 1 j n v j. i=1
1. Limits and Continuity It is often the case that a nonlinear function of nvariables x = (x 1,..., x n ) is not really defined on all of R n. For instance f(x 1, x 2 ) = x 1x 2 is not defined when x
More information( % . This matrix consists of $ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&
Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important
More informationContinuity of the Perron Root
Linear and Multilinear Algebra http://dx.doi.org/10.1080/03081087.2014.934233 ArXiv: 1407.7564 (http://arxiv.org/abs/1407.7564) Continuity of the Perron Root Carl D. Meyer Department of Mathematics, North
More informationMATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006
MATH 511 ADVANCED LINEAR ALGEBRA SPRING 26 Sherod Eubanks HOMEWORK 1 1.1 : 3, 5 1.2 : 4 1.3 : 4, 6, 12, 13, 16 1.4 : 1, 5, 8 Section 1.1: The EigenvalueEigenvector Equation Problem 3 Let A M n (R). If
More informationWe seek a factorization of a square matrix A into the product of two matrices which yields an
LU Decompositions We seek a factorization of a square matrix A into the product of two matrices which yields an efficient method for solving the system where A is the coefficient matrix, x is our variable
More information