# Lecture 5: Principal Minors and the Hessian


Eivind Eriksen, BI Norwegian School of Management, Department of Economics. October 01, 2010.

## Principal minors

Let $A$ be a symmetric $n \times n$ matrix. We know that we can determine the definiteness of $A$ by computing its eigenvalues. Another method is to use the principal minors.

**Definition.** A minor of $A$ of order $k$ is *principal* if it is obtained by deleting $n-k$ rows and the $n-k$ columns with the same numbers. The *leading* principal minor of $A$ of order $k$ is the minor of order $k$ obtained by deleting the last $n-k$ rows and columns.

For instance, in a principal minor where you have deleted rows 1 and 3, you should also delete columns 1 and 3.

**Notation.** We write $D_k$ for the leading principal minor of order $k$. There are $\binom{n}{k}$ principal minors of order $k$, and we write $\Delta_k$ for any of the principal minors of order $k$.
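The definitions above can be made concrete in code. Below is a small numpy sketch (the matrix `A` is just an illustrative test matrix, not one from the lecture): it computes the leading principal minors $D_1, \ldots, D_n$ and enumerates all $\binom{n}{k}$ principal minors of a given order by keeping rows and columns with the same indices.

```python
from itertools import combinations
import numpy as np

def leading_principal_minors(A):
    """D_k = det of the upper-left k x k submatrix, for k = 1..n."""
    n = A.shape[0]
    return [float(np.linalg.det(A[:k, :k])) for k in range(1, n + 1)]

def principal_minors(A, k):
    """All order-k principal minors: keep k rows and the k columns with the SAME indices."""
    n = A.shape[0]
    return [float(np.linalg.det(A[np.ix_(idx, idx)]))
            for idx in combinations(range(n), k)]

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
print(leading_principal_minors(A))    # approximately [2, 3, 4]
print(len(principal_minors(A, 2)))    # 3 = C(3, 2) minors of order 2
```

Note that `np.ix_(idx, idx)` selects the same index set for rows and columns, which is exactly the "delete rows and columns with the same numbers" rule.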

## Two by two symmetric matrices

Let $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be a symmetric $2 \times 2$ matrix. Then the leading principal minors are $D_1 = a$ and $D_2 = ac - b^2$. If we want to find all the principal minors, these are given by $\Delta_1 = a$ and $\Delta_1 = c$ (of order one) and $\Delta_2 = ac - b^2$ (of order two).

Let us compute what it means that the leading principal minors are positive for $2 \times 2$ matrices:

**Example.** Let $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be a symmetric $2 \times 2$ matrix. Show that if $D_1 = a > 0$ and $D_2 = ac - b^2 > 0$, then $A$ is positive definite.

**Solution.** If $D_1 = a > 0$ and $D_2 = ac - b^2 > 0$, then $c > 0$ also, since $ac > b^2 \geq 0$. The characteristic equation of $A$ is

$$\lambda^2 - (a + c)\lambda + (ac - b^2) = 0$$

and it has two real solutions (since $A$ is symmetric) given by

$$\lambda = \frac{a + c}{2} \pm \frac{\sqrt{(a + c)^2 - 4(ac - b^2)}}{2}$$

Both solutions are positive, since $a + c > \sqrt{(a + c)^2 - 4(ac - b^2)}$. This means that $A$ is positive definite.
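A quick numerical sanity check of this equivalence in the $2 \times 2$ case (a sketch only; the helper names and the random sampling are my own choices, and `eigvalsh` is numpy's eigenvalue routine for symmetric matrices):

```python
import numpy as np

def pos_def_by_minors(a, b, c):
    """Leading-principal-minor test for A = [[a, b], [b, c]]."""
    return a > 0 and a * c - b**2 > 0

def pos_def_by_eigenvalues(a, b, c):
    """Direct test: both eigenvalues of the symmetric matrix are positive."""
    return bool((np.linalg.eigvalsh(np.array([[a, b], [b, c]])) > 0).all())

# The two criteria agree on randomly sampled symmetric 2x2 matrices.
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c = rng.uniform(-3, 3, size=3)
    assert pos_def_by_minors(a, b, c) == pos_def_by_eigenvalues(a, b, c)
print("minor test and eigenvalue test agree on all samples")
```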

## Definiteness and principal minors

**Theorem.** Let $A$ be a symmetric $n \times n$ matrix. Then we have:

- $A$ is positive definite $\iff$ $D_k > 0$ for all leading principal minors
- $A$ is negative definite $\iff$ $(-1)^k D_k > 0$ for all leading principal minors
- $A$ is positive semidefinite $\iff$ $\Delta_k \geq 0$ for all principal minors
- $A$ is negative semidefinite $\iff$ $(-1)^k \Delta_k \geq 0$ for all principal minors

In the first two cases, it is enough to check the inequality for all the leading principal minors (i.e. for $1 \leq k \leq n$). In the last two cases, we must check all principal minors (i.e. for each $k$ with $1 \leq k \leq n$, and for each of the $\binom{n}{k}$ principal minors of order $k$).

## Definiteness: An example

**Example.** Determine the definiteness of the symmetric $3 \times 3$ matrix

$$A = \begin{pmatrix} 1 & 4 & 6 \\ 4 & 2 & 1 \\ 6 & 1 & 6 \end{pmatrix}$$

One may try to compute the eigenvalues of $A$. However, the characteristic equation is

$$\det(A - \lambda I) = (1 - \lambda)(\lambda^2 - 8\lambda + 11) - 4(18 - 4\lambda) + 6(6\lambda - 8) = 0$$

This equation (of order three with no obvious factorization) seems difficult to solve!

## Definiteness: An example (continued)

Let us instead try to use the leading principal minors. They are:

$$D_1 = 1, \quad D_2 = \begin{vmatrix} 1 & 4 \\ 4 & 2 \end{vmatrix} = -14, \quad D_3 = \det A = -109$$

Let us compare with the criteria in the theorem:

- Positive definite: $D_1 > 0$, $D_2 > 0$, $D_3 > 0$
- Negative definite: $D_1 < 0$, $D_2 > 0$, $D_3 < 0$
- Positive semidefinite: $\Delta_1 \geq 0$, $\Delta_2 \geq 0$, $\Delta_3 \geq 0$ for all principal minors
- Negative semidefinite: $\Delta_1 \leq 0$, $\Delta_2 \geq 0$, $\Delta_3 \leq 0$ for all principal minors

The leading principal minors we have computed do not fit with any of these criteria. We can therefore conclude that $A$ is indefinite.

## Definiteness: Another example

**Example.** Determine the definiteness of the quadratic form

$$Q(x_1, x_2, x_3) = 3x_1^2 + 6x_1x_3 + x_2^2 - 4x_2x_3 + 8x_3^2$$

**Solution.** The symmetric matrix associated with the quadratic form $Q$ is

$$A = \begin{pmatrix} 3 & 0 & 3 \\ 0 & 1 & -2 \\ 3 & -2 & 8 \end{pmatrix}$$

Since the leading principal minors are positive, $Q$ is positive definite:

$$D_1 = 3, \quad D_2 = \begin{vmatrix} 3 & 0 \\ 0 & 1 \end{vmatrix} = 3, \quad D_3 = \det A = 3$$
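The comparison with the theorem's criteria can be automated. Below is a teaching sketch (the function name `classify`, the tolerance handling, and the brute-force enumeration of minors are my own choices; determinant-based tests like this are not numerically robust for large matrices), applied to the two symmetric matrices from these examples:

```python
from itertools import combinations
import numpy as np

def classify(M, tol=1e-9):
    """Definiteness of a symmetric matrix via the principal-minor criteria."""
    n = M.shape[0]
    D = [np.linalg.det(M[:k, :k]) for k in range(1, n + 1)]   # leading minors
    if all(d > tol for d in D):
        return "positive definite"
    if all((-1) ** k * d > tol for k, d in enumerate(D, start=1)):
        return "negative definite"
    # semidefiniteness needs ALL principal minors, not just the leading ones
    minors = [(k, np.linalg.det(M[np.ix_(i, i)]))
              for k in range(1, n + 1) for i in combinations(range(n), k)]
    if all(m >= -tol for _, m in minors):
        return "positive semidefinite"
    if all((-1) ** k * m >= -tol for k, m in minors):
        return "negative semidefinite"
    return "indefinite"

A = np.array([[1.0, 4.0, 6.0],    # matrix of the first example
              [4.0, 2.0, 1.0],
              [6.0, 1.0, 6.0]])
B = np.array([[3.0, 0.0, 3.0],    # matrix of the quadratic form Q
              [0.0, 1.0, -2.0],
              [3.0, -2.0, 8.0]])
print(classify(A))   # indefinite
print(classify(B))   # positive definite
```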

## Optimization of functions of several variables

The first part of this course was centered around matrices and linear algebra. The next part will be centered around optimization problems for functions $f(x_1, x_2, \ldots, x_n)$ of several variables.

## C² functions

Let $f(x_1, x_2, \ldots, x_n)$ be a function in $n$ variables. We say that $f$ is $C^2$ if all second order partial derivatives of $f$ exist and are continuous. All functions that we meet in this course are $C^2$ functions, so we shall not check this in each case.

**Notation.** We use the notation $\mathbf{x} = (x_1, x_2, \ldots, x_n)$. Hence we write $f(\mathbf{x})$ in place of $f(x_1, x_2, \ldots, x_n)$.

## Stationary points of functions in several variables

The partial derivatives of $f(\mathbf{x})$ are written $f'_1, f'_2, \ldots, f'_n$. The stationary points of $f$ are the solutions of the first order conditions:

**Definition.** Let $f(\mathbf{x})$ be a function in $n$ variables. We say that $\mathbf{x}$ is a stationary point of $f$ if

$$f'_1(\mathbf{x}) = f'_2(\mathbf{x}) = \cdots = f'_n(\mathbf{x}) = 0$$

Let us look at an example:

**Example.** Let $f(x_1, x_2) = x_1^2 - x_2^2 + x_1x_2$. Find the stationary points of $f$.

## Stationary points: An example

We compute the partial derivatives of $f(x_1, x_2) = x_1^2 - x_2^2 + x_1x_2$ to be

$$f'_1(x_1, x_2) = 2x_1 + x_2, \quad f'_2(x_1, x_2) = -2x_2 + x_1$$

Hence the stationary points are the solutions of the following linear system:

$$2x_1 + x_2 = 0$$
$$x_1 - 2x_2 = 0$$

Since $\det \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix} = -5 \neq 0$, the only stationary point is $\mathbf{x} = (0, 0)$.

Given a stationary point of $f(\mathbf{x})$, how do we determine its type? Is it a local minimum, a local maximum or perhaps a saddle point?

## The Hessian matrix

Let $f(\mathbf{x})$ be a function in $n$ variables. The Hessian matrix of $f$ is the matrix consisting of all the second order partial derivatives of $f$:

**Definition.** The Hessian matrix of $f$ at the point $\mathbf{x}$ is the $n \times n$ matrix

$$f''(\mathbf{x}) = \begin{pmatrix} f''_{11}(\mathbf{x}) & f''_{12}(\mathbf{x}) & \cdots & f''_{1n}(\mathbf{x}) \\ f''_{21}(\mathbf{x}) & f''_{22}(\mathbf{x}) & \cdots & f''_{2n}(\mathbf{x}) \\ \vdots & \vdots & \ddots & \vdots \\ f''_{n1}(\mathbf{x}) & f''_{n2}(\mathbf{x}) & \cdots & f''_{nn}(\mathbf{x}) \end{pmatrix}$$

Notice that each entry in the Hessian matrix is a second order partial derivative, and therefore a function of $\mathbf{x}$.
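Hand-computed Hessians can be sanity checked numerically: each entry $f''_{ij}$ is approximated by a central finite difference. This is a sketch only (the helper `hessian_fd` and the step size `h` are illustrative choices), applied to $f(x_1, x_2) = x_1^2 - x_2^2 + x_1x_2$ from the example above; since $f$ is quadratic, the Hessian is the same at every point.

```python
import numpy as np

def f(x):
    x1, x2 = x
    return x1**2 - x2**2 + x1 * x2

def hessian_fd(f, x, h=1e-4):
    """Central-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h**2)
    return H

print(hessian_fd(f, np.array([0.3, -0.7])))   # approximately [[2, 1], [1, -2]]
```

The computed matrix is (up to rounding) symmetric, which previews the proposition below about Hessians of $C^2$ functions.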

## The Hessian matrix: An example

**Example.** Compute the Hessian matrix of the quadratic form $f(x_1, x_2) = x_1^2 - x_2^2 + x_1x_2$.

**Solution.** We computed the first order partial derivatives above, and found that

$$f'_1(x_1, x_2) = 2x_1 + x_2, \quad f'_2(x_1, x_2) = -2x_2 + x_1$$

Hence we get

$$f''_{11}(x_1, x_2) = 2, \quad f''_{12}(x_1, x_2) = 1, \quad f''_{21}(x_1, x_2) = 1, \quad f''_{22}(x_1, x_2) = -2$$

The Hessian matrix is therefore given by

$$f''(\mathbf{x}) = \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix}$$

The following fact is useful to notice, as it will simplify our computations in the future:

**Proposition.** If $f(\mathbf{x})$ is a $C^2$ function, then the Hessian matrix is symmetric.

The proof of this fact is quite technical, and we will skip it in the lecture.

## Convexity and the Hessian

**Definition.** A subset $S \subseteq \mathbb{R}^n$ is open if it is without boundary. Subsets defined by inequalities with $<$ or $>$ are usually open. Also, $\mathbb{R}^n$ is open by definition.

**Example.** The set $S = \{(x_1, x_2) : x_1 > 0, x_2 > 0\} \subseteq \mathbb{R}^2$ is an open set. Its boundary consists of the positive $x_1$- and $x_2$-axes, and these are not part of $S$.

**Theorem.** Let $f(\mathbf{x})$ be a $C^2$ function in $n$ variables defined on an open convex set $S$. Then we have:

1. $f''(\mathbf{x})$ is positive semidefinite for all $\mathbf{x} \in S$ $\iff$ $f$ is convex in $S$
2. $f''(\mathbf{x})$ is negative semidefinite for all $\mathbf{x} \in S$ $\iff$ $f$ is concave in $S$
3. $f''(\mathbf{x})$ is positive definite for all $\mathbf{x} \in S$ $\Rightarrow$ $f$ is strictly convex in $S$
4. $f''(\mathbf{x})$ is negative definite for all $\mathbf{x} \in S$ $\Rightarrow$ $f$ is strictly concave in $S$

## Convexity: An example

**Example.** Is $f(x_1, x_2) = x_1^2 - x_2^2 + x_1x_2$ convex or concave?

**Solution.** We computed the Hessian of this function earlier. It is given by

$$f''(\mathbf{x}) = \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix}$$

Since the leading principal minors are $D_1 = 2$ and $D_2 = -5$, the Hessian is neither positive semidefinite nor negative semidefinite. This means that $f$ is neither convex nor concave.

Notice that since $f$ is a quadratic form, we could also have used the symmetric matrix of the quadratic form to conclude this.
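One way to apply this theorem in practice is to test the Hessian's definiteness at many sample points. Note that a finite sample only gives evidence, not a proof, since the theorem quantifies over all of $S$. A sketch (the sample function $f(x_1, x_2) = e^{x_1} + x_2^2$ is my own illustration; its Hessian $\operatorname{diag}(e^{x_1}, 2)$ is positive definite at every point, so $f$ is strictly convex on $\mathbb{R}^2$):

```python
import numpy as np

def hessian(x1, x2):
    # Hessian of f(x1, x2) = exp(x1) + x2^2, computed by hand
    return np.array([[np.exp(x1), 0.0],
                     [0.0, 2.0]])

# Check D1 > 0 and D2 > 0 at many sampled points (evidence of strict convexity).
rng = np.random.default_rng(2)
for x1, x2 in rng.uniform(-5.0, 5.0, size=(500, 2)):
    H = hessian(x1, x2)
    D1, D2 = H[0, 0], np.linalg.det(H)
    assert D1 > 0 and D2 > 0
print("Hessian positive definite at all sampled points")
```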

## Convexity: Another example

**Example.** Is $f(x_1, x_2) = 2x_1 - x_2 - x_1^2 + x_1x_2 - x_2^2$ convex or concave?

**Solution.** We compute the Hessian of $f$. We have that $f'_1 = 2 - 2x_1 + x_2$ and $f'_2 = -1 + x_1 - 2x_2$. This gives Hessian matrix

$$f''(\mathbf{x}) = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$$

Since the leading principal minors are $D_1 = -2$ and $D_2 = 3$, the Hessian is negative definite. This means that $f$ is strictly concave.

Notice that since $f$ is a sum of a quadratic and a linear form, we could also have used earlier results about quadratic forms and properties of convex functions to conclude this.

## Convexity: An example of degree three

**Example.** Is $f(x_1, x_2, x_3) = x_1 + x_2^2 + x_3^3 - x_1x_2 - 3x_3$ convex or concave?

**Solution.** We compute the Hessian of $f$. We have that $f'_1 = 1 - x_2$, $f'_2 = 2x_2 - x_1$ and $f'_3 = 3x_3^2 - 3$. This gives Hessian matrix

$$f''(\mathbf{x}) = \begin{pmatrix} 0 & -1 & 0 \\ -1 & 2 & 0 \\ 0 & 0 & 6x_3 \end{pmatrix}$$

The leading principal minors are $D_1 = 0$, $D_2 = -1$ and $D_3 = -6x_3$. Since the principal minor $\Delta_2 = D_2 = -1$ is negative, and both semidefiniteness criteria require $\Delta_2 \geq 0$, the Hessian is indefinite for all $\mathbf{x}$. This means that $f$ is neither convex nor concave.
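Both conclusions can be double-checked numerically. The sketch below evaluates the relevant minors of each Hessian; since the second Hessian depends on $x_3$, it is sampled at several values (the choice of sample points is mine):

```python
import numpy as np

# Hessian of f(x1, x2) = 2x1 - x2 - x1^2 + x1x2 - x2^2 (constant in x)
H1 = np.array([[-2.0, 1.0],
               [1.0, -2.0]])
# D1 < 0 and D2 > 0: negative definite, so f is strictly concave
assert H1[0, 0] < 0 and np.linalg.det(H1) > 0

# Hessian of f(x1, x2, x3) = x1 + x2^2 + x3^3 - x1x2 - 3x3, at various x3
for x3 in (-2.0, -0.5, 0.5, 2.0):
    H2 = np.array([[0.0, -1.0, 0.0],
                   [-1.0, 2.0, 0.0],
                   [0.0, 0.0, 6.0 * x3]])
    delta2 = np.linalg.det(H2[:2, :2])   # principal minor [[0, -1], [-1, 2]]
    # delta2 = -1 < 0 rules out both kinds of semidefiniteness
    assert abs(delta2 + 1.0) < 1e-12
print("first Hessian negative definite; second indefinite at every sampled x3")
```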

## Extremal points

**Definition.** Let $f(\mathbf{x})$ be a function in $n$ variables defined on a set $S \subseteq \mathbb{R}^n$. A point $\mathbf{x}^* \in S$ is called a global maximum point for $f$ if

$$f(\mathbf{x}^*) \geq f(\mathbf{x}) \text{ for all } \mathbf{x} \in S$$

and a global minimum point for $f$ if

$$f(\mathbf{x}^*) \leq f(\mathbf{x}) \text{ for all } \mathbf{x} \in S$$

If $\mathbf{x}^*$ is a maximum or minimum point for $f$, then we call $f(\mathbf{x}^*)$ the maximum or minimum value.

How do we find global maximum and global minimum points, or global extremal points, for a given function $f(\mathbf{x})$?

## First order conditions

**Proposition.** If $\mathbf{x}^*$ is a global maximum or minimum point of $f$ that is not on the boundary of $S$, then $\mathbf{x}^*$ is a stationary point.

This means that the stationary points are candidates for being global extremal points. On the other hand, we have the following important result:

**Theorem.** Suppose that $f(\mathbf{x})$ is a function of $n$ variables defined on a convex set $S \subseteq \mathbb{R}^n$. Then we have:

1. If $f$ is convex, then all stationary points are global minimum points.
2. If $f$ is concave, then all stationary points are global maximum points.

## Global extremal points: An example

**Example.** Show that the function

$$f(x_1, x_2) = 2x_1 - x_2 - x_1^2 + x_1x_2 - x_2^2$$

has a global maximum.

**Solution.** Earlier, we found that this function is (strictly) concave, so all stationary points are global maximum points. We must find the stationary points. We computed the first order partial derivatives of $f$ earlier, and use them to write down the first order conditions:

$$f'_1 = 2 - 2x_1 + x_2 = 0, \quad f'_2 = -1 + x_1 - 2x_2 = 0$$

This leads to the linear system

$$2x_1 - x_2 = 2$$
$$x_1 - 2x_2 = 1$$

We solve this linear system, and find that $(x_1, x_2) = (1, 0)$ is the unique stationary point of $f$. This means that $(x_1, x_2) = (1, 0)$ is a global maximum point for $f$.

**Example.** Find all extremal points of the function $f$ given by

$$f(x, y, z) = x^2 + 2y^2 + 3z^2 + 2xy + 2xz + 3$$

## Global extremal points: Another example

**Solution.** We first find the partial derivatives of $f$:

$$f'_x = 2x + 2y + 2z, \quad f'_y = 4y + 2x, \quad f'_z = 6z + 2x$$

This gives Hessian matrix

$$f''(\mathbf{x}) = \begin{pmatrix} 2 & 2 & 2 \\ 2 & 4 & 0 \\ 2 & 0 & 6 \end{pmatrix}$$

We see that the leading principal minors are $D_1 = 2$, $D_2 = 4$ and $D_3 = 8$, so $f$ is a (strictly) convex function and all stationary points are global minimum points. We therefore compute the stationary points by solving the first order conditions $f'_x = f'_y = f'_z = 0$.

The first order conditions give a linear system

$$2x + 2y + 2z = 0$$
$$2x + 4y = 0$$
$$2x + 6z = 0$$

We must solve this linear system. Since the $3 \times 3$ determinant of the coefficient matrix is $8 \neq 0$, the system has only one solution, $\mathbf{x} = (0, 0, 0)$. This means that $(x, y, z) = (0, 0, 0)$ is a global minimum point for $f$.

**Example.** Find all extremal points of the function $f(x_1, x_2, x_3) = \ldots$
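The computation above can be replayed with a standard linear solver (a minimal sketch; the random spot-check at the end is my own addition and is evidence rather than proof):

```python
import numpy as np

# Coefficient matrix of the first order conditions; this is also the Hessian.
H = np.array([[2.0, 2.0, 2.0],
              [2.0, 4.0, 0.0],
              [2.0, 0.0, 6.0]])
print(np.linalg.det(H))                    # 8 != 0, so a unique stationary point
x_star = np.linalg.solve(H, np.zeros(3))   # solve the first order conditions
print(x_star)                              # the origin (0, 0, 0)

def f(x, y, z):
    return x**2 + 2*y**2 + 3*z**2 + 2*x*y + 2*x*z + 3

# Spot-check: every sampled value is at least the value at the stationary point.
rng = np.random.default_rng(1)
samples = rng.uniform(-5.0, 5.0, size=(1000, 3))
assert all(f(x, y, z) >= f(0, 0, 0) for x, y, z in samples)
```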

## Global extremal points: More examples

The function $f$ has one global minimum point, $(0, 0, 0)$. See the lecture notes for details.

So far, all examples were functions defined on all of $\mathbb{R}^n$. Let us look at one example where the function is defined on a smaller subset:

**Example.** Show that $S = \{(x_1, x_2) : x_1 > 0, x_2 > 0\}$ is a convex set, and that the function $f(x_1, x_2) = x_1^{1/3} x_2^{1/2}$ defined on $S$ is a concave function. See the lecture notes for details.


### 1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

### Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices

MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two

### WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course

### 3.1 Solving Systems Using Tables and Graphs

Algebra 2 Chapter 3 3.1 Solve Systems Using Tables & Graphs 3.1 Solving Systems Using Tables and Graphs A solution to a system of linear equations is an that makes all of the equations. To solve a system

### Generating Valid 4 4 Correlation Matrices

Applied Mathematics E-Notes, 7(2007), 53-59 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Generating Valid 4 4 Correlation Matrices Mark Budden, Paul Hadavas, Lorrie

### Understanding Basic Calculus

Understanding Basic Calculus S.K. Chung Dedicated to all the people who have helped me in my life. i Preface This book is a revised and expanded version of the lecture notes for Basic Calculus and other

### 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)

Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

### SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2014 Timo Koski () Mathematisk statistik 24.09.2014 1 / 75 Learning outcomes Random vectors, mean vector, covariance

### DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

### Matrices in Statics and Mechanics

Matrices in Statics and Mechanics Casey Pearson 3/19/2012 Abstract The goal of this project is to show how linear algebra can be used to solve complex, multi-variable statics problems as well as illustrate

### Scalar Valued Functions of Several Variables; the Gradient Vector

Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions vector valued function of n variables: Let us consider a scalar (i.e., numerical, rather than y = φ(x = φ(x 1,

### Linear Algebra Notes

Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

MA 134 Lecture Notes August 20, 2012 Introduction The purpose of this lecture is to... Introduction The purpose of this lecture is to... Learn about different types of equations Introduction The purpose

### 1 Lecture: Integration of rational functions by decomposition

Lecture: Integration of rational functions by decomposition into partial fractions Recognize and integrate basic rational functions, except when the denominator is a power of an irreducible quadratic.

### Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### MAT Solving Linear Systems Using Matrices and Row Operations

MAT 171 8.5 Solving Linear Systems Using Matrices and Row Operations A. Introduction to Matrices Identifying the Size and Entries of a Matrix B. The Augmented Matrix of a System of Equations Forming Augmented

### Math 4707: Introduction to Combinatorics and Graph Theory

Math 4707: Introduction to Combinatorics and Graph Theory Lecture Addendum, November 3rd and 8th, 200 Counting Closed Walks and Spanning Trees in Graphs via Linear Algebra and Matrices Adjacency Matrices

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### 3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field

3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field 77 3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field Overview: The antiderivative in one variable calculus is an important

### Linear Algebra Review. Vectors

Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

### Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

### A characterization of trace zero symmetric nonnegative 5x5 matrices

A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the