8.2 Quadratic Forms

Example 1
Consider the function $q(x_1, x_2) = 8x_1^2 - 4x_1x_2 + 5x_2^2$. Determine whether $q(0, 0)$ is the global minimum.

Solution based on matrix techniques
Rewrite
$$q(x_1, x_2) = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \cdot \begin{pmatrix} 8x_1 - 2x_2 \\ -2x_1 + 5x_2 \end{pmatrix}.$$
Note that we split the contribution $-4x_1x_2$ equally between the two components. More succinctly, we can write $q(\vec{x}) = \vec{x} \cdot A\vec{x}$, where
$$A = \begin{pmatrix} 8 & -2 \\ -2 & 5 \end{pmatrix},$$
or $q(\vec{x}) = \vec{x}^T A \vec{x}$. The matrix $A$ is symmetric by construction. By the spectral theorem, there is an orthonormal eigenbasis $\vec{v}_1, \vec{v}_2$ for $A$. We find
$$\vec{v}_1 = \frac{1}{\sqrt{5}} \begin{pmatrix} 2 \\ -1 \end{pmatrix}, \qquad \vec{v}_2 = \frac{1}{\sqrt{5}} \begin{pmatrix} 1 \\ 2 \end{pmatrix},$$
with associated eigenvalues $\lambda_1 = 9$ and $\lambda_2 = 4$. Writing $\vec{x} = c_1\vec{v}_1 + c_2\vec{v}_2$, we can express the value of the function as follows:
$$q(\vec{x}) = \vec{x} \cdot A\vec{x} = (c_1\vec{v}_1 + c_2\vec{v}_2) \cdot (c_1\lambda_1\vec{v}_1 + c_2\lambda_2\vec{v}_2) = \lambda_1 c_1^2 + \lambda_2 c_2^2 = 9c_1^2 + 4c_2^2.$$
Therefore, $q(\vec{x}) > 0$ for all nonzero $\vec{x}$, and $q(0, 0) = 0$ is the global minimum of the function.
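As a quick numerical sanity check of Example 1 (not part of the original notes), the following NumPy sketch recovers the eigenvalues 9 and 4 and spot-checks that $q$ is positive on random nonzero vectors:

```python
import numpy as np

# Symmetric matrix of q(x1, x2) = 8x1^2 - 4x1x2 + 5x2^2 from Example 1.
A = np.array([[8.0, -2.0],
              [-2.0, 5.0]])

# eigh is for symmetric matrices; it returns eigenvalues in ascending order
# together with an orthonormal set of eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)  # [4. 9.], matching lambda_2 = 4 and lambda_1 = 9

# Spot-check that q(x) = x . (A x) > 0 on random nonzero vectors.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    print(x @ A @ x > 0)  # True every time: q is positive definite
```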

Definition (Quadratic forms)
A function $q(x_1, x_2, \dots, x_n)$ from $\mathbb{R}^n$ to $\mathbb{R}$ is called a quadratic form if it is a linear combination of functions of the form $x_i x_j$. A quadratic form can be written as $q(\vec{x}) = \vec{x} \cdot A\vec{x} = \vec{x}^T A \vec{x}$ for a symmetric $n \times n$ matrix $A$.

Example 2
Consider the quadratic form
$$q(x_1, x_2, x_3) = 9x_1^2 + 7x_2^2 + 3x_3^2 - 2x_1x_2 + 4x_1x_3 - 6x_2x_3.$$
Find a symmetric matrix $A$ such that $q(\vec{x}) = \vec{x} \cdot A\vec{x}$ for all $\vec{x}$ in $\mathbb{R}^3$.

Solution
As in Example 1, we let $a_{ii} = $ (coefficient of $x_i^2$) and $a_{ij} = \frac{1}{2}$ (coefficient of $x_i x_j$) for $i \neq j$. Therefore,
$$A = \begin{pmatrix} 9 & -1 & 2 \\ -1 & 7 & -3 \\ 2 & -3 & 3 \end{pmatrix}.$$
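The recipe in this Solution is mechanical, so it is easy to automate. Here is a small illustrative sketch; the helper name `quadratic_form_matrix` and the dictionary encoding of the coefficients are my own choices, not notation from the notes:

```python
import numpy as np

def quadratic_form_matrix(coeffs, n):
    """Build the symmetric matrix A of a quadratic form: a_ii is the
    coefficient of x_i^2 and a_ij = a_ji is half the coefficient of x_i x_j.
    `coeffs` maps 0-based index pairs (i, j), i <= j, to coefficients."""
    A = np.zeros((n, n))
    for (i, j), c in coeffs.items():
        if i == j:
            A[i, i] = c
        else:
            A[i, j] = A[j, i] = c / 2.0
    return A

# q(x1, x2, x3) = 9x1^2 + 7x2^2 + 3x3^2 - 2x1x2 + 4x1x3 - 6x2x3
A = quadratic_form_matrix({(0, 0): 9, (1, 1): 7, (2, 2): 3,
                           (0, 1): -2, (0, 2): 4, (1, 2): -6}, n=3)
print(A)
# [[ 9. -1.  2.]
#  [-1.  7. -3.]
#  [ 2. -3.  3.]]
```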

Change of Variables in a Quadratic Form

Fact
Consider a quadratic form $q(\vec{x}) = \vec{x} \cdot A\vec{x}$ from $\mathbb{R}^n$ to $\mathbb{R}$. Let $B$ be an orthonormal eigenbasis for $A$, with associated eigenvalues $\lambda_1, \dots, \lambda_n$. Then
$$q(\vec{x}) = \lambda_1 c_1^2 + \lambda_2 c_2^2 + \dots + \lambda_n c_n^2,$$
where the $c_i$ are the coordinates of $\vec{x}$ with respect to $B$.

Indeed, let $\vec{x} = P\vec{y}$, or equivalently, $\vec{y} = P^{-1}\vec{x} = (c_1, \dots, c_n)^T$. If this change of variables is made in the quadratic form $\vec{x}^T A \vec{x}$, then
$$\vec{x}^T A \vec{x} = (P\vec{y})^T A (P\vec{y}) = \vec{y}^T P^T A P \vec{y} = \vec{y}^T (P^T A P)\vec{y}.$$
Since $P$ orthogonally diagonalizes $A$, we have $P^T A P = P^{-1} A P = D$.
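A minimal numerical illustration of this Fact, using the matrix of Example 1: `numpy.linalg.eigh` supplies an orthogonal $P$, and the cross terms disappear in the new variables.

```python
import numpy as np

A = np.array([[8.0, -2.0],
              [-2.0, 5.0]])

eigenvalues, P = np.linalg.eigh(A)     # columns of P: orthonormal eigenvectors
D = P.T @ A @ P                        # P^T A P = D, diagonal up to roundoff
print(np.round(D, 10))                 # diag(4, 9)

# The form has no cross terms in the new variables: x^T A x = y^T D y.
x = np.array([1.0, 2.0])
y = P.T @ x                            # y = P^{-1} x = P^T x (P is orthogonal)
print(x @ A @ x, y @ D @ y)            # both 20.0
```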

Classifying Quadratic Forms

Consider a quadratic form $q(\vec{x}) = \vec{x} \cdot A\vec{x}$. If $q(\vec{x}) > 0$ for all nonzero $\vec{x}$ in $\mathbb{R}^n$, we say $A$ is positive definite. If $q(\vec{x}) \geq 0$ for all nonzero $\vec{x}$ in $\mathbb{R}^n$, we say $A$ is positive semidefinite. If $q(\vec{x})$ takes positive as well as negative values, we say $A$ is indefinite.
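Combined with the eigenvalue criterion stated in the Fact after Example 3 below, this classification can be carried out numerically. A sketch, assuming a small tolerance to absorb roundoff; the helper `classify` is my own and only distinguishes the cases named in these notes:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues
    (only the cases defined in these notes are distinguished)."""
    w = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    return "indefinite or negative (semi)definite"

print(classify(np.array([[8.0, -2.0], [-2.0, 5.0]])))  # positive definite
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))   # indefinite
```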


Example 3
Consider an $m \times n$ matrix $A$. Show that the function $q(\vec{x}) = \|A\vec{x}\|^2$ is a quadratic form, find its matrix, and determine its definiteness.

Solution
$$q(\vec{x}) = (A\vec{x}) \cdot (A\vec{x}) = (A\vec{x})^T (A\vec{x}) = \vec{x}^T A^T A \vec{x} = \vec{x} \cdot (A^T A \vec{x}).$$
This shows that $q$ is a quadratic form, with symmetric matrix $A^T A$. Since $q(\vec{x}) = \|A\vec{x}\|^2 \geq 0$ for all vectors $\vec{x}$ in $\mathbb{R}^n$, this quadratic form is positive semidefinite. Note that $q(\vec{x}) = 0$ iff $\vec{x}$ is in the kernel of $A$. Therefore, the quadratic form is positive definite iff $\ker(A) = \{\vec{0}\}$.

Fact (Eigenvalues and definiteness)
A symmetric matrix $A$ is positive definite iff all of its eigenvalues are positive. The matrix is positive semidefinite iff all of its eigenvalues are positive or zero.
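A numerical illustration of Example 3; the random $4 \times 3$ matrix is my own choice:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))        # a generic m x n matrix, here 4 x 3

S = A.T @ A                            # symmetric matrix of q(x) = ||Ax||^2
x = rng.standard_normal(3)
print(np.allclose(x @ S @ x, np.linalg.norm(A @ x) ** 2))   # True

# The eigenvalues of A^T A are >= 0; they are all > 0 exactly when
# ker(A) = {0}, i.e. when A has full column rank (true here almost surely).
print(np.linalg.eigvalsh(S))           # three nonnegative numbers
print(np.linalg.matrix_rank(A))        # 3, so this q is positive definite
```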

Fact: The Principal Axes Theorem
Let $A$ be an $n \times n$ symmetric matrix. Then there is an orthogonal change of variable, $\vec{x} = P\vec{y}$, that transforms the quadratic form $\vec{x}^T A \vec{x}$ into a quadratic form $\vec{y}^T D \vec{y}$ with no cross-product term.

Principal Axes
When we study a function $f(x_1, x_2, \dots, x_n)$ from $\mathbb{R}^n$ to $\mathbb{R}$, we are often interested in the solutions of the equation $f(x_1, x_2, \dots, x_n) = k$ for a fixed $k$ in $\mathbb{R}$, called the level sets of $f$.

Example 4
Sketch the curve $8x_1^2 - 4x_1x_2 + 5x_2^2 = 1$.

Solution
In Example 1, we found that we can write this equation as
$$9c_1^2 + 4c_2^2 = 1,$$
where $c_1$ and $c_2$ are the coordinates of $\vec{x}$ with respect to the orthonormal eigenbasis
$$\vec{v}_1 = \frac{1}{\sqrt{5}} \begin{pmatrix} 2 \\ -1 \end{pmatrix}, \qquad \vec{v}_2 = \frac{1}{\sqrt{5}} \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$
for $A = \begin{pmatrix} 8 & -2 \\ -2 & 5 \end{pmatrix}$. This is an ellipse with semi-axes of length $\frac{1}{3}$ along $\vec{v}_1$ and $\frac{1}{2}$ along $\vec{v}_2$ (sketched in the figure accompanying these notes). The $c_1$-axis and the $c_2$-axis are called the principal axes of the quadratic form $q(x_1, x_2) = 8x_1^2 - 4x_1x_2 + 5x_2^2$. Note that these are the eigenspaces of the matrix $A$ of the quadratic form.
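For the actual sketch, one can parametrize the curve $9c_1^2 + 4c_2^2 = 1$ by $c_1 = \cos(t)/3$, $c_2 = \sin(t)/2$ and map back through the eigenbasis. A matplotlib sketch along those lines (mine, not from the notes):

```python
import numpy as np
import matplotlib.pyplot as plt

# Unit eigenvectors of A = [[8, -2], [-2, 5]] (lambda_1 = 9, lambda_2 = 4).
v1 = np.array([2.0, -1.0]) / np.sqrt(5)
v2 = np.array([1.0, 2.0]) / np.sqrt(5)

# On the curve 9 c1^2 + 4 c2^2 = 1: c1 = cos(t)/3, c2 = sin(t)/2.
t = np.linspace(0, 2 * np.pi, 400)
curve = np.outer(np.cos(t) / 3, v1) + np.outer(np.sin(t) / 2, v2)

plt.plot(curve[:, 0], curve[:, 1])
# Dashed lines mark the principal axes (the eigenspaces of A).
plt.plot([-v1[0], v1[0]], [-v1[1], v1[1]], '--')
plt.plot([-v2[0], v2[0]], [-v2[1], v2[1]], '--')
plt.gca().set_aspect('equal')
plt.title('8x1^2 - 4x1x2 + 5x2^2 = 1')
plt.show()
```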

Constrained Optimization
When a quadratic form $Q$ has no cross-product terms, it is easy to find the maximum and minimum of $Q(\vec{x})$ subject to $\vec{x}^T \vec{x} = 1$.

Example 1
Find the maximum and minimum values of $Q(\vec{x}) = 9x_1^2 + 4x_2^2 + 3x_3^2$ subject to the constraint $\vec{x}^T \vec{x} = 1$.

Solution
Since
$$Q(\vec{x}) = 9x_1^2 + 4x_2^2 + 3x_3^2 \leq 9x_1^2 + 9x_2^2 + 9x_3^2 = 9(x_1^2 + x_2^2 + x_3^2) = 9$$
whenever $x_1^2 + x_2^2 + x_3^2 = 1$, and $Q(\vec{x}) = 9$ when $\vec{x} = (1, 0, 0)$, the maximum value is 9. Similarly,
$$Q(\vec{x}) = 9x_1^2 + 4x_2^2 + 3x_3^2 \geq 3x_1^2 + 3x_2^2 + 3x_3^2 = 3(x_1^2 + x_2^2 + x_3^2) = 3$$
whenever $x_1^2 + x_2^2 + x_3^2 = 1$, and $Q(\vec{x}) = 3$ when $\vec{x} = (0, 0, 1)$, so the minimum value is 3.
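A Monte Carlo spot check of this example (the sampling scheme is my own illustration; normalizing standard normal vectors gives uniformly distributed unit vectors):

```python
import numpy as np

D = np.diag([9.0, 4.0, 3.0])           # matrix of Q(x) = 9x1^2 + 4x2^2 + 3x3^2

rng = np.random.default_rng(2)
x = rng.standard_normal((100000, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # unit vectors

vals = np.einsum('ij,jk,ik->i', x, D, x)        # Q evaluated on each sample
print(vals.max(), vals.min())          # close to 9 and 3, never outside [3, 9]
```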

THEOREM
Let $A$ be a symmetric matrix, and define
$$m = \min\{\vec{x}^T A \vec{x} : \|\vec{x}\| = 1\}, \qquad M = \max\{\vec{x}^T A \vec{x} : \|\vec{x}\| = 1\}.$$
Then $M$ is the greatest eigenvalue $\lambda_1$ of $A$ and $m$ is the least eigenvalue of $A$. The value of $\vec{x}^T A \vec{x}$ is $M$ when $\vec{x}$ is a unit eigenvector $\vec{u}_1$ corresponding to the eigenvalue $M$, and the value of $\vec{x}^T A \vec{x}$ is $m$ when $\vec{x}$ is a unit eigenvector corresponding to $m$.

Proof
Orthogonally diagonalize $A$, i.e. $P^T A P = D$; by the change of variable $\vec{x} = P\vec{y}$, we can transform the quadratic form $\vec{x}^T A \vec{x}$ into $\vec{y}^T D \vec{y}$. The constraint $\|\vec{x}\| = 1$ implies $\|\vec{y}\| = 1$, since
$$\|\vec{x}\|^2 = \|P\vec{y}\|^2 = (P\vec{y})^T P\vec{y} = \vec{y}^T (P^T P)\vec{y} = \vec{y}^T \vec{y} = \|\vec{y}\|^2.$$
Arrange the columns of $P$ so that $P = (\vec{u}_1 \ \cdots \ \vec{u}_n)$ and $\lambda_1 \geq \cdots \geq \lambda_n$.

Given any unit vector $\vec{y}$ with coordinates $c_1, \dots, c_n$, observe that
$$\vec{y}^T D \vec{y} = \lambda_1 c_1^2 + \dots + \lambda_n c_n^2 \leq \lambda_1 c_1^2 + \dots + \lambda_1 c_n^2 = \lambda_1 \|\vec{y}\|^2 = \lambda_1.$$
Thus $\vec{x}^T A \vec{x}$ has the largest value $M = \lambda_1$, attained when $\vec{y} = \vec{e}_1 = (1, 0, \dots, 0)^T$, i.e. when $\vec{x} = P\vec{y} = \vec{u}_1$. A similar argument shows that $m$ is the least eigenvalue $\lambda_n$, attained when $\vec{y} = \vec{e}_n = (0, \dots, 0, 1)^T$, i.e. when $\vec{x} = P\vec{y} = \vec{u}_n$.
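The theorem is easy to test numerically. In the sketch below, the random symmetric matrix is my own example; `numpy.linalg.eigh` returns eigenvalues in ascending order, so the first and last columns of $P$ are $\vec{u}_n$ and $\vec{u}_1$:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # random symmetric matrix (my example)

w, P = np.linalg.eigh(A)               # ascending eigenvalues, orthonormal columns
u_min, u_max = P[:, 0], P[:, -1]

print(np.isclose(u_max @ A @ u_max, w[-1]))   # True: M = lambda_1 at x = u_1
print(np.isclose(u_min @ A @ u_min, w[0]))    # True: m = lambda_n at x = u_n

# No random unit vector does better (Monte Carlo spot check).
x = rng.standard_normal((10000, 4))
x /= np.linalg.norm(x, axis=1, keepdims=True)
vals = np.einsum('ij,jk,ik->i', x, A, x)
print(vals.max() <= w[-1] and vals.min() >= w[0])   # True
```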

THEOREM
Let $A$, $\lambda_1$ and $\vec{u}_1$ be as in the last theorem. Then the maximum value of $\vec{x}^T A \vec{x}$ subject to the constraints
$$\vec{x}^T \vec{x} = 1, \qquad \vec{x}^T \vec{u}_1 = 0$$
is the second greatest eigenvalue, $\lambda_2$, and this maximum is attained when $\vec{x}$ is a unit eigenvector $\vec{u}_2$ corresponding to $\lambda_2$.

THEOREM
Let $A$ be a symmetric $n \times n$ matrix with an orthogonal diagonalization $A = PDP^{-1}$, where the entries on the diagonal of $D$ are arranged so that $\lambda_1 \geq \cdots \geq \lambda_n$, and where the columns of $P$ are corresponding unit eigenvectors $\vec{u}_1, \dots, \vec{u}_n$. Then for $k = 2, \dots, n$, the maximum value of $\vec{x}^T A \vec{x}$ subject to the constraints
$$\vec{x}^T \vec{x} = 1, \quad \vec{x}^T \vec{u}_1 = 0, \quad \dots, \quad \vec{x}^T \vec{u}_{k-1} = 0$$
is the eigenvalue $\lambda_k$, and this maximum is attained when $\vec{x} = \vec{u}_k$.
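A numeric illustration of the first of these theorems, again with a random symmetric matrix of my own: projecting $\vec{u}_1$ out of random vectors produces unit vectors satisfying $\vec{x}^T \vec{u}_1 = 0$, and the sampled maximum of $\vec{x}^T A \vec{x}$ approaches $\lambda_2$.

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # random symmetric matrix (my example)

w, P = np.linalg.eigh(A)               # ascending: w[-1] = lambda_1, w[-2] = lambda_2
u1 = P[:, -1]                          # unit eigenvector for the largest eigenvalue

# Sample unit vectors orthogonal to u1: project u1 out, then normalize.
x = rng.standard_normal((100000, 4))
x -= np.outer(x @ u1, u1)              # now every row satisfies x . u1 = 0
x /= np.linalg.norm(x, axis=1, keepdims=True)

vals = np.einsum('ij,jk,ik->i', x, A, x)
print(vals.max(), w[-2])               # sampled max approaches lambda_2 from below
```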

The Singular Value Decomposition
The absolute values of the eigenvalues of a symmetric matrix $A$ measure the amounts by which $A$ stretches or shrinks certain eigenvectors. If $A\vec{x} = \lambda\vec{x}$ and $\vec{x}^T \vec{x} = 1$, then
$$\|A\vec{x}\| = \|\lambda\vec{x}\| = |\lambda| \, \|\vec{x}\| = |\lambda|,$$
based on the diagonalization $A = PDP^{-1}$. This description has an analogue for rectangular matrices that will lead to the singular value decomposition $A = QDP^{-1}$.

Example
If
$$A = \begin{pmatrix} 4 & 11 & 14 \\ 8 & 7 & -2 \end{pmatrix},$$
then the linear transformation $T(\vec{x}) = A\vec{x}$ maps the unit sphere $\{\vec{x} : \|\vec{x}\| = 1\}$ in $\mathbb{R}^3$ onto an ellipse in $\mathbb{R}^2$ (see Fig. 1). Find a unit vector at which $\|A\vec{x}\|$ is maximized.

Observe that
$$\|A\vec{x}\|^2 = (A\vec{x})^T A\vec{x} = \vec{x}^T A^T A \vec{x} = \vec{x}^T (A^T A)\vec{x}.$$
Also, $A^T A$ is a symmetric matrix, since $(A^T A)^T = A^T (A^T)^T = A^T A$. So the problem is now to maximize the quadratic form $\vec{x}^T (A^T A)\vec{x}$ subject to the constraint $\|\vec{x}\| = 1$. Compute
$$A^T A = \begin{pmatrix} 80 & 100 & 40 \\ 100 & 170 & 140 \\ 40 & 140 & 200 \end{pmatrix}.$$
Find the eigenvalues of $A^T A$: $\lambda_1 = 360$, $\lambda_2 = 90$, $\lambda_3 = 0$, and the corresponding unit eigenvectors
$$\vec{v}_1 = \begin{pmatrix} 1/3 \\ 2/3 \\ 2/3 \end{pmatrix}, \qquad \vec{v}_2 = \begin{pmatrix} -2/3 \\ -1/3 \\ 2/3 \end{pmatrix}, \qquad \vec{v}_3 = \begin{pmatrix} 2/3 \\ -2/3 \\ 1/3 \end{pmatrix}.$$
The maximum value of $\|A\vec{x}\|^2$ is 360, attained when $\vec{x}$ is the unit vector $\vec{v}_1$.
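The computation can be reproduced with NumPy (up to the sign of the eigenvectors, which is not determined):

```python
import numpy as np

A = np.array([[4.0, 11.0, 14.0],
              [8.0,  7.0, -2.0]])

S = A.T @ A
print(S)                               # [[80, 100, 40], [100, 170, 140], [40, 140, 200]]

w, V = np.linalg.eigh(S)               # ascending order: approximately [0, 90, 360]
print(np.round(w, 8))

v1 = V[:, -1]                          # unit eigenvector for lambda_1 = 360
print(np.linalg.norm(A @ v1) ** 2)     # ~360, the maximum of ||Ax||^2 on ||x|| = 1
```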

The Singular Values of an $m \times n$ Matrix
Let $A$ be an $m \times n$ matrix. Then $A^T A$ is symmetric and can be orthogonally diagonalized. Let $\{\vec{v}_1, \dots, \vec{v}_n\}$ be an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $A^T A$, and let $\lambda_1, \dots, \lambda_n$ be the associated eigenvalues of $A^T A$. Then for $1 \leq i \leq n$,
$$\|A\vec{v}_i\|^2 = (A\vec{v}_i)^T A\vec{v}_i = \vec{v}_i^T A^T A \vec{v}_i = \vec{v}_i^T (\lambda_i \vec{v}_i) = \lambda_i.$$
So the eigenvalues of $A^T A$ are all nonnegative. Relabel if necessary so that
$$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0.$$
The singular values of $A$ are the square roots of the eigenvalues of $A^T A$, denoted by $\sigma_1, \dots, \sigma_n$; that is, $\sigma_i = \sqrt{\lambda_i}$ for $1 \leq i \leq n$. The singular values of $A$ are the lengths of the vectors $A\vec{v}_1, \dots, A\vec{v}_n$.

Example
Let $A$ be the matrix in the last example. Since the eigenvalues of $A^T A$ are 360, 90, and 0, the singular values of $A$ are
$$\sigma_1 = \sqrt{360} = 6\sqrt{10}, \qquad \sigma_2 = \sqrt{90} = 3\sqrt{10}, \qquad \sigma_3 = 0.$$
Note that the first singular value of $A$ is the maximum of $\|A\vec{x}\|$ over all unit vectors, and the maximum is attained at the unit eigenvector $\vec{v}_1$. The second singular value of $A$ is the maximum of $\|A\vec{x}\|$ over all unit vectors orthogonal to $\vec{v}_1$, and this maximum is attained at the second unit eigenvector $\vec{v}_2$. Compute
$$A\vec{v}_1 = \begin{pmatrix} 4 & 11 & 14 \\ 8 & 7 & -2 \end{pmatrix} \begin{pmatrix} 1/3 \\ 2/3 \\ 2/3 \end{pmatrix} = \begin{pmatrix} 18 \\ 6 \end{pmatrix}, \qquad A\vec{v}_2 = \begin{pmatrix} 4 & 11 & 14 \\ 8 & 7 & -2 \end{pmatrix} \begin{pmatrix} -2/3 \\ -1/3 \\ 2/3 \end{pmatrix} = \begin{pmatrix} 3 \\ -9 \end{pmatrix}.$$
The fact that $A\vec{v}_1$ and $A\vec{v}_2$ are orthogonal is no accident, as the next theorem shows.
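NumPy's SVD routine confirms these values; note that it reports only $\min(m, n) = 2$ singular values for this $2 \times 3$ matrix, so $\sigma_3 = 0$ does not appear in its output:

```python
import numpy as np

A = np.array([[4.0, 11.0, 14.0],
              [8.0,  7.0, -2.0]])

sigma = np.linalg.svd(A, compute_uv=False)   # decreasing order
print(sigma)                                  # [18.97366596  9.48683298]
print(6 * np.sqrt(10), 3 * np.sqrt(10))       # the same two values
```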

THEOREM
Suppose that $\{\vec{v}_1, \dots, \vec{v}_n\}$ is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A^T A$, arranged so that the corresponding eigenvalues of $A^T A$ satisfy $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, and suppose that $A$ has $r$ nonzero singular values. Then $\{A\vec{v}_1, \dots, A\vec{v}_r\}$ is an orthogonal basis for $\operatorname{im}(A)$, and $\operatorname{rank}(A) = r$.

Proof
Because $\vec{v}_i$ and $\vec{v}_j$ are orthogonal for $i \neq j$,
$$(A\vec{v}_i)^T (A\vec{v}_j) = \vec{v}_i^T A^T A \vec{v}_j = \vec{v}_i^T (\lambda_j \vec{v}_j) = \lambda_j \vec{v}_i^T \vec{v}_j = 0.$$
Thus $\{A\vec{v}_1, \dots, A\vec{v}_n\}$ is an orthogonal set. Furthermore, $\|A\vec{v}_i\| = \sigma_i = 0$, and hence $A\vec{v}_i = \vec{0}$, for $i > r$. For any $\vec{y}$ in $\operatorname{im}(A)$, say $\vec{y} = A\vec{x}$ with $\vec{x} = c_1\vec{v}_1 + \cdots + c_n\vec{v}_n$,
$$\vec{y} = A\vec{x} = A(c_1\vec{v}_1 + \cdots + c_n\vec{v}_n) = c_1 A\vec{v}_1 + \cdots + c_r A\vec{v}_r.$$
Thus $\vec{y}$ is in $\operatorname{Span}\{A\vec{v}_1, \dots, A\vec{v}_r\}$, which shows that $\{A\vec{v}_1, \dots, A\vec{v}_r\}$ is an (orthogonal) basis for $\operatorname{im}(A)$. Hence $\operatorname{rank}(A) = \dim \operatorname{im}(A) = r$.
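A short check of the theorem on the running example, verifying $A\vec{v}_1 \perp A\vec{v}_2$ and $\operatorname{rank}(A) = r = 2$:

```python
import numpy as np

A = np.array([[4.0, 11.0, 14.0],
              [8.0,  7.0, -2.0]])

v1 = np.array([1.0, 2.0, 2.0]) / 3.0   # eigenvectors of A^T A from the example
v2 = np.array([-2.0, -1.0, 2.0]) / 3.0

Av1, Av2 = A @ v1, A @ v2
print(Av1, Av2)                        # [18. 6.] and [ 3. -9.]
print(np.isclose(Av1 @ Av2, 0.0))      # True: the images are orthogonal
print(np.linalg.matrix_rank(A))        # 2 = r, the number of nonzero singular values
```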
