Quadratic Functions, Optimization, and Quadratic Forms


Quadratic Functions, Optimization, and Quadratic Forms

Robert M. Freund

February

Massachusetts Institute of Technology

1  Quadratic Optimization

A quadratic optimization problem is an optimization problem of the form:

$$\mathrm{(QP)}: \quad \text{minimize } f(x) := \tfrac{1}{2} x^T Q x + c^T x \quad \text{s.t. } x \in \mathbb{R}^n.$$

Problems of the form QP are natural models that arise in a variety of settings. For example, consider the problem of approximately solving an over-determined linear system $Ax = b$, where $A$ has more rows than columns. We might want to solve:

$$\mathrm{(P_1)}: \quad \text{minimize } \|Ax - b\| \quad \text{s.t. } x \in \mathbb{R}^n.$$

Now notice that $\|Ax - b\|^2 = x^T A^T A x - 2 b^T A x + b^T b$, and so this problem is equivalent to:

$$\mathrm{(P_1)}: \quad \text{minimize } x^T A^T A x - 2 b^T A x + b^T b \quad \text{s.t. } x \in \mathbb{R}^n,$$

which is in the format of QP.
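As a quick illustration of this reformulation, the following sketch (using NumPy, with made-up data for $A$ and $b$) forms $Q = 2A^TA$ and $c = -2A^Tb$ and checks that the quadratic-programming minimizer matches the ordinary least-squares solution.

```python
import numpy as np

# Illustrative (made-up) over-determined system: 6 equations, 3 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

# ||Ax - b||^2 = x^T (A^T A) x - 2 b^T A x + b^T b, so with
# Q = 2 A^T A and c = -2 A^T b the objective is (1/2) x^T Q x + c^T x (+ constant).
Q = 2.0 * A.T @ A
c = -2.0 * A.T @ b

# Minimizer of the quadratic: solve Q x + c = 0 (the normal equations).
x_qp = np.linalg.solve(Q, -c)

# Compare against NumPy's least-squares routine.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_qp, x_ls))   # expected: True
```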

A symmetric matrix is a square matrix $Q \in \mathbb{R}^{n \times n}$ with the property that $Q_{ij} = Q_{ji}$ for all $i, j = 1, \dots, n$. We can alternatively define a matrix $Q$ to be symmetric if $Q = Q^T$.

We denote the identity matrix (i.e., a matrix with all 1's on the diagonal and 0's everywhere else) by $I$, that is,

$$I = \begin{pmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{pmatrix},$$

and note that $I$ is a symmetric matrix.

The gradient vector of a smooth function $f(x) : \mathbb{R}^n \rightarrow \mathbb{R}$ is the vector of first partial derivatives of $f(x)$:

$$\nabla f(x) := \begin{pmatrix} \dfrac{\partial f(x)}{\partial x_1} \\ \vdots \\ \dfrac{\partial f(x)}{\partial x_n} \end{pmatrix}.$$

The Hessian matrix of a smooth function $f(x) : \mathbb{R}^n \rightarrow \mathbb{R}$ is the matrix of second partial derivatives. Suppose that $f(x) : \mathbb{R}^n \rightarrow \mathbb{R}$ is twice differentiable, and let

$$[H(x)]_{ij} := \frac{\partial^2 f(x)}{\partial x_i \partial x_j}.$$

Then the matrix $H(x)$ is a symmetric matrix, reflecting the fact that

$$\frac{\partial^2 f(x)}{\partial x_i \partial x_j} = \frac{\partial^2 f(x)}{\partial x_j \partial x_i}.$$

A very general optimization problem is:

$$\mathrm{(GP)}: \quad \text{minimize } f(x) \quad \text{s.t. } x \in \mathbb{R}^n,$$

where $f(x) : \mathbb{R}^n \rightarrow \mathbb{R}$ is a function. We often design algorithms for GP by building a local quadratic model of $f(\cdot)$ at a given point $x = \bar{x}$. We form the gradient $\nabla f(\bar{x})$ (the vector of partial derivatives) and the Hessian $H(\bar{x})$ (the matrix of second partial derivatives), and approximate GP by the following problem, which uses the Taylor expansion of $f(x)$ at $x = \bar{x}$ up to the quadratic term:

$$\mathrm{(P_2)}: \quad \text{minimize } f(\bar{x}) + \nabla f(\bar{x})^T (x - \bar{x}) + \tfrac{1}{2} (x - \bar{x})^T H(\bar{x}) (x - \bar{x}) \quad \text{s.t. } x \in \mathbb{R}^n.$$

This problem is also in the format of QP.

Notice in the general model QP that we can always presume that $Q$ is a symmetric matrix, because

$$x^T Q x = \tfrac{1}{2}\, x^T (Q + Q^T) x,$$

and so we could replace $Q$ by the symmetric matrix $\tilde{Q} := \tfrac{1}{2}(Q + Q^T)$.

Now suppose that $f(x) := \tfrac{1}{2} x^T Q x + c^T x$ where $Q$ is symmetric. Then it is easy to see that

$$\nabla f(x) = Qx + c \quad \text{and} \quad H(x) = Q.$$
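Both observations are easy to check numerically. In the following minimal sketch (with an arbitrary made-up $Q$, $c$, and test point), the gradient formula $\nabla f(x) = Qx + c$ is compared against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Q_raw = rng.standard_normal((n, n))      # a possibly non-symmetric Q
Q = 0.5 * (Q_raw + Q_raw.T)              # its symmetrization
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# x^T Q_raw x = x^T Q x, so we may as well assume Q is symmetric.
print(np.isclose(x @ Q_raw @ x, x @ Q @ x))          # True

# Check grad f(x) = Q x + c (Q symmetric) by central finite differences.
f = lambda z: 0.5 * z @ Q @ z + c @ z
eps = 1e-6
grad_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(n)])
print(np.allclose(grad_fd, Q @ x + c, atol=1e-6))    # True
```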

Before we try to solve QP, we first review some very basic properties of symmetric matrices.

2  Convexity, Definiteness of a Symmetric Matrix, and Optimality Conditions

A function $f(x) : \mathbb{R}^n \rightarrow \mathbb{R}$ is a convex function if

$$f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y) \quad \text{for all } x, y \in \mathbb{R}^n, \text{ for all } \lambda \in [0,1].$$

A function $f(x)$ as above is called a strictly convex function if the inequality above is strict for all $x \ne y$ and $\lambda \in (0,1)$.

A function $f(x) : \mathbb{R}^n \rightarrow \mathbb{R}$ is a concave function if

$$f(\lambda x + (1-\lambda) y) \ge \lambda f(x) + (1-\lambda) f(y) \quad \text{for all } x, y \in \mathbb{R}^n, \text{ for all } \lambda \in [0,1].$$

A function $f(x)$ as above is called a strictly concave function if the inequality above is strict for all $x \ne y$ and $\lambda \in (0,1)$.

Here are some more definitions:

$Q$ is symmetric and positive semidefinite (abbreviated SPSD and denoted by $Q \succeq 0$) if $x^T Q x \ge 0$ for all $x \in \mathbb{R}^n$.

$Q$ is symmetric and positive definite (abbreviated SPD and denoted by $Q \succ 0$) if $x^T Q x > 0$ for all $x \in \mathbb{R}^n$, $x \ne 0$.

Theorem 1  The function $f(x) := \tfrac{1}{2} x^T Q x + c^T x$ is a convex function if and only if $Q$ is SPSD.

Proof: First, suppose that $Q$ is not SPSD. Then there exists $r$ such that $r^T Q r < 0$. Let $x = \theta r$. Then $f(x) = f(\theta r) = \tfrac{1}{2}\theta^2 r^T Q r + \theta c^T r$ is strictly concave on the subset $\{x \mid x = \theta r\}$, since $r^T Q r < 0$. Thus $f(\cdot)$ is not a convex function.

Next, suppose that $Q$ is SPSD. For all $\lambda \in [0,1]$, and for all $x, y$,

$$\begin{aligned}
f(\lambda x + (1-\lambda) y) &= f(y + \lambda(x-y)) \\
&= \tfrac{1}{2}(y + \lambda(x-y))^T Q (y + \lambda(x-y)) + c^T (y + \lambda(x-y)) \\
&= \tfrac{1}{2} y^T Q y + \lambda (x-y)^T Q y + \tfrac{1}{2} \lambda^2 (x-y)^T Q (x-y) + \lambda c^T x + (1-\lambda) c^T y \\
&\le \tfrac{1}{2} y^T Q y + \lambda (x-y)^T Q y + \tfrac{1}{2} \lambda (x-y)^T Q (x-y) + \lambda c^T x + (1-\lambda) c^T y \\
&= \tfrac{1}{2} \lambda x^T Q x + \tfrac{1}{2} (1-\lambda) y^T Q y + \lambda c^T x + (1-\lambda) c^T y \\
&= \lambda f(x) + (1-\lambda) f(y),
\end{aligned}$$

thus showing that $f(x)$ is a convex function. (The inequality uses $\lambda^2 \le \lambda$ for $\lambda \in [0,1]$ together with $(x-y)^T Q (x-y) \ge 0$.)

Corollary 2  $f(x)$ is strictly convex if and only if $Q \succ 0$. $f(x)$ is concave if and only if $Q \preceq 0$. $f(x)$ is strictly concave if and only if $Q \prec 0$. $f(x)$ is neither convex nor concave if and only if $Q$ is indefinite.
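As a hedged numerical illustration of Theorem 1 and Corollary 2 (the matrices and the number of random trials below are arbitrary choices, and this is of course not a proof), one can test the convexity inequality at midpoints and compare with the sign of the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
c = rng.standard_normal(n)

def violates_convexity(Q, trials=2000):
    """Look for x, y with f((x+y)/2) > (f(x)+f(y))/2, which cannot happen if Q is SPSD."""
    f = lambda z: 0.5 * z @ Q @ z + c @ z
    for _ in range(trials):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        if f(0.5 * (x + y)) > 0.5 * (f(x) + f(y)) + 1e-10:
            return True
    return False

A = rng.standard_normal((n, n))
Q_psd = A.T @ A                           # SPSD by construction
Q_indef = np.diag([1.0, -5.0, 1.0])       # indefinite

print(violates_convexity(Q_psd), np.all(np.linalg.eigvalsh(Q_psd) >= 0))      # False True
print(violates_convexity(Q_indef), np.all(np.linalg.eigvalsh(Q_indef) >= 0))  # True False
```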

Theorem 3  Suppose that $Q$ is SPSD. The function $f(x) := \tfrac{1}{2} x^T Q x + c^T x$ attains its minimum at $x^*$ if and only if $x^*$ solves the equation system:

$$\nabla f(x) = Qx + c = 0.$$

Proof: Suppose that $x^*$ satisfies $Qx^* + c = 0$. Then for any $x$, we have:

$$\begin{aligned}
f(x) &= f(x^* + (x - x^*)) \\
&= \tfrac{1}{2}(x^* + (x-x^*))^T Q (x^* + (x-x^*)) + c^T (x^* + (x-x^*)) \\
&= \tfrac{1}{2}(x^*)^T Q x^* + (x-x^*)^T Q x^* + \tfrac{1}{2}(x-x^*)^T Q (x-x^*) + c^T x^* + c^T (x-x^*) \\
&= \tfrac{1}{2}(x^*)^T Q x^* + (x-x^*)^T (Q x^* + c) + \tfrac{1}{2}(x-x^*)^T Q (x-x^*) + c^T x^* \\
&= \tfrac{1}{2}(x^*)^T Q x^* + c^T x^* + \tfrac{1}{2}(x-x^*)^T Q (x-x^*) \\
&= f(x^*) + \tfrac{1}{2}(x-x^*)^T Q (x-x^*) \\
&\ge f(x^*),
\end{aligned}$$

thus showing that $x^*$ is a minimizer of $f(x)$.

Next, suppose that $x^*$ is a minimizer of $f(x)$, but that $d := Qx^* + c \ne 0$. Then:

$$\begin{aligned}
f(x^* + \alpha d) &= \tfrac{1}{2}(x^* + \alpha d)^T Q (x^* + \alpha d) + c^T (x^* + \alpha d) \\
&= \tfrac{1}{2}(x^*)^T Q x^* + \alpha d^T Q x^* + \tfrac{1}{2}\alpha^2 d^T Q d + c^T x^* + \alpha c^T d \\
&= f(x^*) + \alpha d^T (Q x^* + c) + \tfrac{1}{2}\alpha^2 d^T Q d \\
&= f(x^*) + \alpha d^T d + \tfrac{1}{2}\alpha^2 d^T Q d.
\end{aligned}$$

But notice that for $\alpha < 0$ and sufficiently small in magnitude, the last expression will be strictly less than $f(x^*)$, and so $f(x^* + \alpha d) < f(x^*)$. This contradicts the supposition that $x^*$ is a minimizer of $f(x)$, and so it must be true that $d = Qx^* + c = 0$.
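A minimal sketch of Theorem 3 in action (the SPD matrix $Q$ and the vector $c$ below are made-up data): the global minimizer is obtained by solving the linear system $Qx = -c$, and random perturbations never improve the objective.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
Q = A.T @ A + np.eye(n)                  # symmetric positive definite by construction
c = rng.standard_normal(n)

f = lambda z: 0.5 * z @ Q @ z + c @ z

# Theorem 3: x* minimizes f if and only if Q x* + c = 0.
x_star = np.linalg.solve(Q, -c)
print(np.allclose(Q @ x_star + c, 0))    # optimality condition holds

# No randomly perturbed point should have a smaller objective value.
trial_values = [f(x_star + rng.standard_normal(n)) for _ in range(1000)]
print(f(x_star) <= min(trial_values))    # True
```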

Here are some examples of convex quadratic forms:

$f(x) = x^T x$

$f(x) = (x-a)^T (x-a)$

$f(x) = (x-a)^T D (x-a)$, where $D = \mathrm{diag}(d_1, \dots, d_n)$ is a diagonal matrix with $d_j > 0$, $j = 1, \dots, n$.

$f(x) = (x-a)^T M^T D M (x-a)$, where $M$ is a non-singular matrix and $D$ is as above.

3  Characteristics of Symmetric Matrices

A matrix $M$ is an orthonormal matrix if $M^T = M^{-1}$. Note that if $M$ is orthonormal and $y = Mx$, then

$$\|y\|^2 = y^T y = x^T M^T M x = x^T M^{-1} M x = x^T x = \|x\|^2,$$

and so $\|y\| = \|x\|$.

A number $\gamma \in \mathbb{R}$ is an eigenvalue of $M$ if there exists a vector $\bar{x} \ne 0$ such that $M\bar{x} = \gamma \bar{x}$; $\bar{x}$ is called an eigenvector of $M$ (and is called an eigenvector corresponding to $\gamma$). Note that $\gamma$ is an eigenvalue of $M$ if and only if $(M - \gamma I)\bar{x} = 0$, $\bar{x} \ne 0$, or, equivalently, if and only if $\det(M - \gamma I) = 0$.

Let $g(\gamma) = \det(M - \gamma I)$. Then $g(\gamma)$ is a polynomial of degree $n$, and so will have $n$ roots that solve the equation $g(\gamma) = \det(M - \gamma I) = 0$, including multiplicities. These roots are the eigenvalues of $M$.
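The following short sketch (with an arbitrary illustrative symmetric matrix) checks the defining property $M\bar{x} = \gamma\bar{x}$ and the characteristic-polynomial condition $\det(M - \gamma I) = 0$ numerically:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
M = 0.5 * (A + A.T)                      # an arbitrary symmetric test matrix

gammas, X = np.linalg.eigh(M)            # eigenvalues and orthonormal eigenvectors

for gamma, xbar in zip(gammas, X.T):
    assert np.allclose(M @ xbar, gamma * xbar)                 # M xbar = gamma xbar
    assert abs(np.linalg.det(M - gamma * np.eye(4))) < 1e-8    # det(M - gamma I) = 0
print("all eigenpairs verified")
```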

Proposition 4  If $Q$ is a real symmetric matrix, all of its eigenvalues are real numbers.

Proof: If $s = a + bi$ is a complex number, let $\bar{s} = a - bi$ denote its conjugate. Then $\overline{st} = \bar{s}\,\bar{t}$, $s$ is real if and only if $s = \bar{s}$, and $s\bar{s} = a^2 + b^2$. If $\gamma$ is an eigenvalue of $Q$, with $Qx = \gamma x$ for some $x \ne 0$, we have the following chain of equations:

$$Qx = \gamma x \;\Rightarrow\; \bar{Q}\bar{x} = \bar{\gamma}\bar{x} \;\Rightarrow\; Q\bar{x} = \bar{\gamma}\bar{x} \;\Rightarrow\; x^T Q \bar{x} = x^T (\bar{\gamma}\bar{x}) = \bar{\gamma}\, x^T \bar{x},$$

as well as the following chain of equations:

$$Qx = \gamma x \;\Rightarrow\; \bar{x}^T Q x = \bar{x}^T (\gamma x) = \gamma\, \bar{x}^T x \;\Rightarrow\; x^T Q \bar{x} = x^T Q^T \bar{x} = (\bar{x}^T Q x)^T = \gamma\, x^T \bar{x}.$$

Thus $\bar{\gamma}\, x^T \bar{x} = \gamma\, x^T \bar{x}$, and since $x \ne 0$ implies $x^T \bar{x} \ne 0$, $\gamma = \bar{\gamma}$, and so $\gamma$ is real.

Proposition 5  If $Q$ is a real symmetric matrix, its eigenvectors corresponding to different eigenvalues are orthogonal.

Proof: Suppose $Qx_1 = \gamma_1 x_1$ and $Qx_2 = \gamma_2 x_2$, $\gamma_1 \ne \gamma_2$. Then

$$\gamma_1 x_1^T x_2 = (\gamma_1 x_1)^T x_2 = (Qx_1)^T x_2 = x_1^T Q x_2 = x_1^T (\gamma_2 x_2) = \gamma_2 x_1^T x_2.$$

Since $\gamma_1 \ne \gamma_2$, the above equality implies that $x_1^T x_2 = 0$.

Proposition 6  If $Q$ is a symmetric matrix, then $Q$ has $n$ (distinct) eigenvectors that form an orthonormal basis for $\mathbb{R}^n$.

Proof: If all of the eigenvalues of $Q$ are distinct, then we are done, as the previous proposition provides the proof. If not, we construct eigenvectors

iteratively, as follows. Let $u_1$ be a normalized (i.e., re-scaled so that its norm is 1) eigenvector of $Q$ with corresponding eigenvalue $\gamma_1$. Suppose we have $k$ mutually orthogonal normalized eigenvectors $u_1, \dots, u_k$, with corresponding eigenvalues $\gamma_1, \dots, \gamma_k$. We will now show how to construct a new eigenvector $u_{k+1}$ with eigenvalue $\gamma_{k+1}$, such that $u_{k+1}$ is orthogonal to each of the vectors $u_1, \dots, u_k$.

Let $U = [u_1, \dots, u_k] \in \mathbb{R}^{n \times k}$. Then $QU = [\gamma_1 u_1, \dots, \gamma_k u_k]$. Let $V = [v_{k+1}, \dots, v_n] \in \mathbb{R}^{n \times (n-k)}$ be a matrix composed of any $n-k$ mutually orthogonal vectors such that the $n$ vectors $u_1, \dots, u_k, v_{k+1}, \dots, v_n$ constitute an orthonormal basis for $\mathbb{R}^n$. Then note that

$$U^T V = 0 \quad \text{and} \quad V^T Q U = V^T [\gamma_1 u_1, \dots, \gamma_k u_k] = 0.$$

Let $w$ be an eigenvector of $V^T Q V \in \mathbb{R}^{(n-k) \times (n-k)}$ for some eigenvalue $\gamma$, so that $V^T Q V w = \gamma w$, and let $u_{k+1} = Vw$ (assume $w$ is normalized so that $u_{k+1}$ has norm 1). We now claim the following two statements are true: (a) $U^T u_{k+1} = 0$, so that $u_{k+1}$ is orthogonal to all of the columns of $U$, and (b) $u_{k+1}$ is an eigenvector of $Q$, and $\gamma$ is the corresponding eigenvalue of $Q$. Note that if (a) and (b) are true, we can keep adding orthogonal vectors until $k = n$, completing the proof of the proposition.

To prove (a), simply note that $U^T u_{k+1} = U^T V w = 0 w = 0$. To prove (b), let $d = Q u_{k+1} - \gamma u_{k+1}$. We need to show that $d = 0$. Note that $d = QVw - \gamma Vw$, and so

$$V^T d = V^T Q V w - \gamma V^T V w = V^T Q V w - \gamma w = 0.$$

Therefore $d = Ur$ for some $r \in \mathbb{R}^k$, and so

$$r = U^T U r = U^T d = U^T Q V w - \gamma U^T V w = 0 - 0 = 0.$$

Therefore $d = 0$, which completes the proof.

Proposition 7  If $Q$ is SPSD, the eigenvalues of $Q$ are nonnegative.

Proof: If $\gamma$ is an eigenvalue of $Q$, then $Qx = \gamma x$ for some $x \ne 0$. Then

$$0 \le x^T Q x = x^T (\gamma x) = \gamma\, x^T x,$$

whereby $\gamma \ge 0$.

Proposition 8  If $Q$ is symmetric, then $Q = RDR^T$, where $R$ is an orthonormal matrix, the columns of $R$ are an orthonormal basis of eigenvectors of $Q$, and $D$ is a diagonal matrix of the corresponding eigenvalues of $Q$.

Proof: Let $R = [u_1, \dots, u_n]$, where $u_1, \dots, u_n$ are the $n$ orthonormal eigenvectors of $Q$, and let

$$D = \begin{pmatrix} \gamma_1 & & 0 \\ & \ddots & \\ 0 & & \gamma_n \end{pmatrix},$$

where $\gamma_1, \dots, \gamma_n$ are the corresponding eigenvalues. Then

$$(R^T R)_{ij} = u_i^T u_j = \begin{cases} 0, & \text{if } i \ne j \\ 1, & \text{if } i = j, \end{cases}$$

so $R^T R = I$, i.e., $R^T = R^{-1}$. Note that $\gamma_i R^T u_i = \gamma_i e_i$, $i = 1, \dots, n$ (here, $e_i$ is the $i$th unit vector). Therefore,

$$R^T Q R = R^T Q [u_1, \dots, u_n] = R^T [\gamma_1 u_1, \dots, \gamma_n u_n] = [\gamma_1 e_1, \dots, \gamma_n e_n] = \begin{pmatrix} \gamma_1 & & 0 \\ & \ddots & \\ 0 & & \gamma_n \end{pmatrix} = D.$$

Thus $Q = (R^T)^{-1} D R^{-1} = RDR^T$.

Proposition 9  If $Q$ is SPSD, then $Q = M^T M$ for some matrix $M$.

Proof: $Q = RDR^T = R D^{\frac{1}{2}} D^{\frac{1}{2}} R^T = M^T M$, where $M = D^{\frac{1}{2}} R^T$.
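A quick numerical check of Propositions 8 and 9 using NumPy's symmetric eigendecomposition routine; the matrices below are arbitrary made-up examples (a symmetric one, and an SPSD one built as $A^TA$):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
Q_sym = 0.5 * (A + A.T)                    # an arbitrary symmetric matrix

# Proposition 8: Q = R D R^T with R orthonormal (columns = eigenvectors).
gammas, R = np.linalg.eigh(Q_sym)
D = np.diag(gammas)
print(np.allclose(R.T @ R, np.eye(4)))     # R^T R = I
print(np.allclose(Q_sym, R @ D @ R.T))     # Q = R D R^T

# Proposition 9: for an SPSD matrix, Q = M^T M with M = D^{1/2} R^T.
Q_psd = A.T @ A                            # SPSD by construction
g, R = np.linalg.eigh(Q_psd)
M = np.diag(np.sqrt(np.clip(g, 0.0, None))) @ R.T
print(np.allclose(Q_psd, M.T @ M))         # True
```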

Proposition 10  If $Q$ is SPSD, then $x^T Q x = 0$ implies $Qx = 0$.

Proof:

$$0 = x^T Q x = x^T M^T M x = (Mx)^T (Mx) = \|Mx\|^2 \;\Rightarrow\; Mx = 0 \;\Rightarrow\; Qx = M^T M x = 0.$$

Proposition 11  Suppose $Q$ is symmetric. Then $Q \succeq 0$ and nonsingular if and only if $Q \succ 0$.

Proof: ($\Rightarrow$) Suppose $x \ne 0$. Then $x^T Q x \ge 0$. If $x^T Q x = 0$, then $Qx = 0$, which is a contradiction since $Q$ is nonsingular. Thus $x^T Q x > 0$, and so $Q$ is positive definite.

($\Leftarrow$) Clearly, if $Q \succ 0$, then $Q \succeq 0$. If $Q$ is singular, then $Qx = 0$, $x \ne 0$ has a solution, whereby $x^T Q x = 0$, $x \ne 0$, and so $Q$ is not positive definite, which is a contradiction.

4  Additional Properties of SPD Matrices

Proposition 12  If $Q \succ 0$ (respectively, $Q \succeq 0$), then any principal submatrix of $Q$ is positive definite (respectively, positive semidefinite).

Proof: Follows directly.
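A small numerical illustration of Propositions 10 and 12 (the SPSD matrix below is made up, and is deliberately rank-deficient so that vectors with $x^TQx = 0$ exist):

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((3, 5))
Q = B.T @ B                          # 5x5 SPSD, rank at most 3, hence singular

# Proposition 10: any x with x^T Q x = 0 also satisfies Q x = 0.
# A vector in the null space of B gives such an x.
x = np.linalg.svd(B)[2][-1]          # last right singular vector: B x ~ 0
print(abs(x @ Q @ x) < 1e-12, np.allclose(Q @ x, 0))    # True True

# Proposition 12: every principal submatrix of an SPSD matrix is SPSD.
idx = [0, 2, 4]
sub = Q[np.ix_(idx, idx)]
print(np.all(np.linalg.eigvalsh(sub) >= -1e-12))        # True (up to round-off)
```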

Proposition 13  Suppose $Q$ is symmetric. If $Q \succ 0$ and

$$M = \begin{pmatrix} Q & c \\ c^T & b \end{pmatrix},$$

then $M \succ 0$ if and only if $b > c^T Q^{-1} c$.

Proof: Suppose $b \le c^T Q^{-1} c$. Let $x = (-c^T Q^{-1}, 1)^T$. Then

$$x^T M x = c^T Q^{-1} c - 2 c^T Q^{-1} c + b \le 0.$$

Thus $M$ is not positive definite.

Conversely, suppose $b > c^T Q^{-1} c$. Let $x = (y, z)$. Then $x^T M x = y^T Q y + 2 z c^T y + b z^2$. If $x \ne 0$ and $z = 0$, then $x^T M x = y^T Q y > 0$, since $Q \succ 0$. If $z \ne 0$, we can assume without loss of generality that $z = 1$, and so $x^T M x = y^T Q y + 2 c^T y + b$. The value of $y$ that minimizes this form is $y = -Q^{-1} c$, and at this point,

$$y^T Q y + 2 c^T y + b = -c^T Q^{-1} c + b > 0,$$

and so $M$ is positive definite.

The $k$th leading principal minor of a matrix $M$ is the determinant of the submatrix of $M$ corresponding to the first $k$ indices of columns and rows.

Proposition 14  Suppose $Q$ is a symmetric matrix. Then $Q$ is positive definite if and only if all leading principal minors of $Q$ are positive.

Proof: If $Q \succ 0$, then any leading principal submatrix of $Q$ is a matrix $M$, where

$$Q = \begin{pmatrix} M & N \\ N^T & P \end{pmatrix},$$

and $M$ must be SPD. Therefore $M = RDR^T = RDR^{-1}$ (where $R$ is orthonormal and $D$ is diagonal), and $\det(M) = \det(D) > 0$.

Conversely, suppose all leading principal minors are positive. If $n = 1$, then $Q \succ 0$. If $n > 1$, by induction, suppose that the statement is true for $k = n-1$. Then for $k = n$,

$$Q = \begin{pmatrix} M & c \\ c^T & b \end{pmatrix},$$

where $M \in \mathbb{R}^{(n-1) \times (n-1)}$ and $M$ has all of its leading principal minors positive, so $M \succ 0$. Therefore $M = T^T T$ for some nonsingular $T$. Thus

$$Q = \begin{pmatrix} T^T T & c \\ c^T & b \end{pmatrix}.$$

Let

$$F = \begin{pmatrix} (T^T)^{-1} & 0 \\ -c^T (T^T T)^{-1} & 1 \end{pmatrix}.$$

Then

$$F Q F^T = \begin{pmatrix} (T^T)^{-1} & 0 \\ -c^T (T^T T)^{-1} & 1 \end{pmatrix} \begin{pmatrix} T^T T & c \\ c^T & b \end{pmatrix} \begin{pmatrix} T^{-1} & -(T^T T)^{-1} c \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & b - c^T (T^T T)^{-1} c \end{pmatrix}.$$

Then $\det Q \cdot \det(F)^2 = b - c^T (T^T T)^{-1} c$, and since $\det Q > 0$ and $\det(F)^2 > 0$, this implies $b - c^T (T^T T)^{-1} c > 0$, and so $Q \succ 0$ from Proposition 13.
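A hedged sketch of Proposition 14 used as a computational test (the example matrices are arbitrary; in numerical practice one would typically attempt a Cholesky factorization rather than compute determinants):

```python
import numpy as np

def pd_by_leading_minors(Q):
    """Proposition 14: symmetric Q is positive definite iff every leading
    principal minor det(Q[:k, :k]), k = 1..n, is positive."""
    n = Q.shape[0]
    return all(np.linalg.det(Q[:k, :k]) > 0 for k in range(1, n + 1))

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
Q_pd = A.T @ A + np.eye(4)                 # positive definite by construction
Q_indef = np.diag([1.0, -2.0, 3.0, 4.0])   # symmetric but indefinite

print(pd_by_leading_minors(Q_pd), np.all(np.linalg.eigvalsh(Q_pd) > 0))        # True True
print(pd_by_leading_minors(Q_indef), np.all(np.linalg.eigvalsh(Q_indef) > 0))  # False False
```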

5  Quadratic Forms Exercises

1. Suppose that $M \succ 0$. Show that $M^{-1}$ exists and that $M^{-1} \succ 0$.

2. Suppose that $M \succeq 0$. Show that there exists a matrix $N$ satisfying $N \succeq 0$ and $N^2 := NN = M$. Such a matrix $N$ is called a square root of $M$ and is written as $M^{\frac{1}{2}}$.

3. Let $\|v\|$ denote the usual Euclidean norm of a vector, namely $\|v\| := \sqrt{v^T v}$. The operator norm of a matrix $M$ is defined as follows:

$$\|M\| := \max_x \{\|Mx\| \mid \|x\| = 1\}.$$

Prove the following two propositions:

Proposition 1: If $M$ is $n \times n$ and symmetric, then $\|M\| = \max_\lambda \{|\lambda| \mid \lambda \text{ is an eigenvalue of } M\}$.

Proposition 2: If $M$ is $m \times n$ with $m < n$ and $M$ has rank $m$, then $\|M\| = \sqrt{\lambda_{\max}(M M^T)}$, where $\lambda_{\max}(A)$ denotes the largest eigenvalue of a matrix $A$.

4. Let $\|v\|$ denote the usual Euclidean norm of a vector, namely $\|v\| := \sqrt{v^T v}$. The operator norm of a matrix $M$ is defined as follows:

$$\|M\| := \max_x \{\|Mx\| \mid \|x\| = 1\}.$$

Prove the following proposition:

Proposition: Suppose that $M$ is a symmetric matrix. Then the following are equivalent:

(a) $h > 0$ satisfies $\|M^{-1}\| \le \frac{1}{h}$;

(b) $h > 0$ satisfies $\|Mv\| \ge h \|v\|$ for any vector $v$;

(c) $h > 0$ satisfies $|\lambda_i(M)| \ge h$ for every eigenvalue $\lambda_i(M)$ of $M$, $i = 1, \dots, n$.

5. Let $Q \succ 0$ and let $S := \{x \mid x^T Q x \le 1\}$. Prove that $S$ is a closed convex set.

6. Let $Q \succ 0$ and let $S := \{x \mid x^T Q x \le 1\}$. Let $\gamma_i$ be a nonzero eigenvalue of $Q$ and let $u_i$ be a corresponding eigenvector normalized so that $\|u_i\|^2 = 1$. Let $a_i := \frac{u_i}{\sqrt{\gamma_i}}$. Prove that $a_i \in S$ and $-a_i \in S$.

7. Let $Q \succ 0$ and consider the problem:

$$\mathrm{(P)}: \quad z^* = \text{maximum}_x \ c^T x \quad \text{s.t. } x^T Q x \le 1.$$

Prove that the unique optimal solution of (P) is:

$$x^* = \frac{Q^{-1} c}{\sqrt{c^T Q^{-1} c}}$$

with optimal objective function value $z^* = \sqrt{c^T Q^{-1} c}$.

8. Let $Q \succ 0$ and consider the problem:

$$\mathrm{(P)}: \quad z^* = \text{maximum}_x \ c^T x \quad \text{s.t. } x^T Q x \le 1.$$

For what values of $c$ will it be true that the optimal solution of (P) will be equal to $c$? (Hint: think eigenvectors.)

9. Let $Q \succ 0$ and let $S := \{x \mid x^T Q x \le 1\}$. Let the eigendecomposition of $Q$ be $Q = RDR^T$, where $R$ is orthonormal and $D$ is diagonal with diagonal entries $\gamma_1, \dots, \gamma_n$. Prove that $x \in S$ if and only if $x = Rv$ for some vector $v$ satisfying

$$\sum_{j=1}^{n} \gamma_j v_j^2 \le 1.$$

10. Prove the following:

Diagonal Dominance Theorem: Suppose that $M$ is symmetric and that for each $i = 1, \dots, n$, we have:

$$M_{ii} \ge \sum_{j \ne i} |M_{ij}|.$$

Then $M$ is positive semidefinite. Furthermore, if the inequalities above are all strict, then $M$ is positive definite.

11. A function $f(\cdot) : \mathbb{R}^n \rightarrow \mathbb{R}$ is a norm if:

(i) $f(x) \ge 0$ for any $x$, and $f(x) = 0$ if and only if $x = 0$;

(ii) $f(\alpha x) = |\alpha|\, f(x)$ for any $x$ and any $\alpha \in \mathbb{R}$; and

(iii) $f(x + y) \le f(x) + f(y)$.

Define $f_Q(x) = \sqrt{x^T Q x}$. Prove that $f_Q(x)$ is a norm if and only if $Q$ is positive definite.

12. If $Q$ is positive semidefinite, under what conditions (on $Q$ and $c$) will $f(x) = \frac{1}{2} x^T Q x + c^T x$ attain its minimum over all $x \in \mathbb{R}^n$? Under what conditions will it be unbounded over all $x \in \mathbb{R}^n$?

13. Consider the problem of minimizing $f(x) = \frac{1}{2} x^T Q x + c^T x$ subject to $Ax = b$. When will this program have an optimal solution, and when will it not?

14. Prove that if $Q$ is symmetric and all its eigenvalues are nonnegative, then $Q$ is positive semidefinite.

15. Let $Q$ be a $2 \times 2$ matrix (not symmetric) whose eigenvalues are $\gamma_1 = 1$ and $\gamma_2 = 2$, but for which $x^T Q x < 0$ for $x = (2, 3)^T$. Why does this not contradict the result of the previous exercise?

16. A quadratic form of the type

$$g(y) = \sum_{j=1}^{p} y_j^2 + \sum_{j=p+1}^{n} d_j y_j + d_{n+1}$$

is a separable hybrid of a quadratic and a linear form, as $g(y)$ is quadratic in the first $p$ components of $y$ and linear (and separable) in the remaining $n - p$ components. Show that if $f(x) = \frac{1}{2} x^T Q x + c^T x$ where $Q$ is positive semidefinite, then there is an invertible linear transformation $y = T(x) = Fx + g$ such that $f(x) = g(y)$ and $g(y)$ is a separable hybrid, i.e., there is an index $p$, a nonsingular matrix $F$, a vector $g$, and constants $d_{p+1}, \dots, d_{n+1}$ such that

$$g(y) = \sum_{j=1}^{p} (Fx + g)_j^2 + \sum_{j=p+1}^{n} d_j (Fx + g)_j + d_{n+1} = f(x).$$

17. An $n \times n$ matrix $P$ is called a projection matrix if $P^T = P$ and $PP = P$. Prove that if $P$ is a projection matrix, then:

a. $I - P$ is a projection matrix.

b. $P$ is positive semidefinite.

c. $\|Px\| \le \|x\|$ for any $x$, where $\|\cdot\|$ is the Euclidean norm.

18. Let us denote the largest eigenvalue of a symmetric matrix $M$ by $\lambda_{\max}(M)$. Consider the program

$$\mathrm{(Q)}: \quad z^* = \text{maximum}_x \ x^T M x \quad \text{s.t. } \|x\| = 1,$$

where $M$ is a symmetric matrix. Prove that $z^* = \lambda_{\max}(M)$. (A numerical sketch illustrating this fact and the one in Exercise 19 appears after the exercises.)

19. Let us denote the smallest eigenvalue of a symmetric matrix $M$ by $\lambda_{\min}(M)$. Consider the program

$$\mathrm{(P)}: \quad z^* = \text{minimum}_x \ x^T M x \quad \text{s.t. } \|x\| = 1,$$

where $M$ is a symmetric matrix. Prove that $z^* = \lambda_{\min}(M)$.

20. Consider the matrix

$$M = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix},$$

where $A$ and $C$ are symmetric matrices and $A$ is nonsingular. Prove that $M$ is positive semidefinite if and only if $C - B^T A^{-1} B$ is positive semidefinite.
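The following sketch numerically illustrates the facts asserted in Exercises 18 and 19 (it is not a proof); the symmetric matrix and the number of random samples are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((6, 6))
M = 0.5 * (A + A.T)                        # an arbitrary symmetric matrix

eigs, V = np.linalg.eigh(M)
lam_min, lam_max = eigs[0], eigs[-1]
v_min, v_max = V[:, 0], V[:, -1]           # corresponding unit eigenvectors

# The unit eigenvectors attain the extreme values of x^T M x on ||x|| = 1 ...
print(np.isclose(v_max @ M @ v_max, lam_max), np.isclose(v_min @ M @ v_min, lam_min))

# ... and random unit vectors never do better in either direction.
xs = rng.standard_normal((20000, 6))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
vals = np.einsum('ij,jk,ik->i', xs, M, xs)
print(vals.max() <= lam_max + 1e-9, vals.min() >= lam_min - 1e-9)    # True True
```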
