October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix




Estimation of $\bar r$ and $C$. Let $r_n^1, r_n^2, \dots, r_n^T$ be the historical return rates on the $n$-th asset,
$$\mathbf{r}_n = \begin{pmatrix} r_n^1 \\ r_n^2 \\ \vdots \\ r_n^T \end{pmatrix}, \qquad n = 1, 2, \dots, N.$$

Estimation of $\bar r$ and $C$. The expected return is approximated by
$$\hat{\bar r}_n = \frac{1}{T}\sum_{t=1}^{T} r_n^t,$$
and the covariance is approximated by
$$\hat c_{mn} = \frac{1}{T-1}\sum_{t=1}^{T} (r_n^t - \hat{\bar r}_n)(r_m^t - \hat{\bar r}_m),$$
or in matrix/vector notation
$$\hat C = \frac{1}{T-1}\sum_{t=1}^{T} (r^t - \hat{\bar r})(r^t - \hat{\bar r})',$$
an outer product.
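As a quick illustration (not part of the original slides), the two estimators above can be computed directly with numpy; the simulated returns array and the names r_hat and C_hat are assumptions made for this sketch.

```python
import numpy as np

# Hypothetical T x N array of historical returns: T observations of N assets.
rng = np.random.default_rng(0)
T, N = 250, 5
returns = rng.normal(loc=0.001, scale=0.02, size=(T, N))

# Sample mean return of each asset: r_hat_n = (1/T) * sum_t r_n^t
r_hat = returns.mean(axis=0)

# Sample covariance: C_hat = 1/(T-1) * sum_t (r^t - r_hat)(r^t - r_hat)'
centered = returns - r_hat
C_hat = centered.T @ centered / (T - 1)

# Cross-check against numpy's built-in estimator (rowvar=False: columns are assets).
assert np.allclose(C_hat, np.cov(returns, rowvar=False))
```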

Importance of $C$. The expected return $\bar r$ is often estimated exogenously (i.e., not statistically estimated). But statistical estimation of $C$ is an important topic, is used in practice, and is the backbone of many finance strategies. A major assumption of Markowitz theory is that $C$ is a genuine covariance matrix: it must be symmetric positive definite (SPD).

Real Matrices/Vectors. Let's focus on real matrices, and only use complex numbers if necessary. A vector $x \in \mathbb{R}^N$ and a matrix $A \in \mathbb{R}^{M \times N}$ are
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix} \quad\text{and}\quad A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MN} \end{pmatrix}.$$

Matrix/Vector Multiplication. The matrix $A$ times a vector $x$ is a new vector
$$Ax = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MN} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1N}x_N \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2N}x_N \\ \vdots \\ a_{M1}x_1 + a_{M2}x_2 + \cdots + a_{MN}x_N \end{pmatrix} \in \mathbb{R}^M.$$
Matrix/matrix multiplication is an extension of this.
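A minimal numpy sketch (added here, not from the slides) of the product above, using arbitrary illustrative values: each entry of $Ax$ is the dot product of a row of $A$ with $x$.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # A in R^{2x3}
x = np.array([1.0, 0.0, -1.0])    # x in R^3

# (Ax)_m = a_m1 x_1 + ... + a_mN x_N, one dot product per row of A.
Ax = A @ x
assert np.allclose(Ax, [A[m] @ x for m in range(A.shape[0])])
print(Ax)  # [-2. -2.]
```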

Matrix/Vector Transpose. Their transposes are
$$x' = (x_1, x_2, \dots, x_N) \quad\text{and}\quad A' = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{M1} \\ a_{12} & a_{22} & \cdots & a_{M2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1N} & a_{2N} & \cdots & a_{MN} \end{pmatrix}.$$
Note that $(Ax)' = x'A'$.

Vector Inner/Outer Product. The inner product of a vector $x \in \mathbb{R}^N$ with another vector $y \in \mathbb{R}^N$ is
$$x'y = y'x = x_1 y_1 + x_2 y_2 + \cdots + x_N y_N.$$
There is also the outer product,
$$xy' = \begin{pmatrix} x_1 y_1 & x_1 y_2 & \cdots & x_1 y_N \\ x_2 y_1 & x_2 y_2 & \cdots & x_2 y_N \\ \vdots & \vdots & \ddots & \vdots \\ x_N y_1 & x_N y_2 & \cdots & x_N y_N \end{pmatrix} = (yx')'.$$
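A small numpy sketch (added for illustration, with arbitrary vectors) contrasting the scalar inner product with the $N \times N$ outer product.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y              # scalar: x'y = 1*4 + 2*5 + 3*6 = 32
outer = np.outer(x, y)     # N x N matrix with entries x_i * y_j

assert np.isclose(inner, y @ x)                # inner product is symmetric
assert np.allclose(outer, np.outer(y, x).T)    # xy' is the transpose of yx'
```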

Matrix/Vector Norm. The norm on the vector space $\mathbb{R}^N$ is $\|\cdot\|$, and for any $x \in \mathbb{R}^N$ we have $\|x\| = \sqrt{x'x}$. Properties of the norm:
- $\|x\| \ge 0$ for any $x$, with $\|x\| = 0$ iff $x = 0$,
- $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality),
- $|x'y| \le \|x\|\,\|y\|$, with equality iff $x = ay$ for some $a \in \mathbb{R}$ (Cauchy-Schwarz),
- $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ if $x'y = 0$ (Pythagorean theorem).
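A quick numerical sanity check of these properties (an added sketch, not from the slides), using arbitrary random vectors and small tolerances for floating-point round-off.

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=5), rng.normal(size=5)
norm = np.linalg.norm

assert norm(x + y) <= norm(x) + norm(y) + 1e-12      # triangle inequality
assert abs(x @ y) <= norm(x) * norm(y) + 1e-12       # Cauchy-Schwarz

# Pythagorean theorem: first make y orthogonal to x by removing its projection.
y_perp = y - (x @ y) / (x @ x) * x
assert np.isclose(norm(x + y_perp)**2, norm(x)**2 + norm(y_perp)**2)
```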

Linear Independence & Orthogonality. Two vectors $x, y$ are linearly independent if there is no scalar constant $a \in \mathbb{R}$ s.t. $y = ax$. These two vectors are orthogonal if $x'y = 0$. Clearly, for non-zero vectors, orthogonality implies linear independence.

Span. A set of vectors $x_1, x_2, \dots, x_n$ is said to span a subspace $V \subset \mathbb{R}^N$ if for any $y \in V$ there are constants $a_1, a_2, \dots, a_n$ such that $y = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$. We write $\mathrm{span}(x_1, \dots, x_n) = V$. In particular, a set of $N$ vectors $x_1, x_2, \dots, x_N$ will span $\mathbb{R}^N$ iff they are linearly independent:
$$\mathrm{span}(x_1, x_2, \dots, x_N) = \mathbb{R}^N \iff x_1, \dots, x_N \text{ are linearly independent}.$$
If so, then $x_1, x_2, \dots, x_N$ are a basis for $\mathbb{R}^N$.

Orthogonal Basis. Definition: A basis of $\mathbb{R}^N$ is orthogonal if all its elements are mutually orthogonal, i.e.
$$\mathrm{span}(x_1, x_2, \dots, x_N) = \mathbb{R}^N \quad\text{and}\quad x_i'x_j = 0 \ \ \forall i \ne j.$$

Invertibility. Definition: A square matrix $A \in \mathbb{R}^{N \times N}$ is invertible if for any $b \in \mathbb{R}^N$ there exists $x$ s.t. $Ax = b$. If so, then $x = A^{-1}b$. $A$ is invertible if any of the following hold:
- $A$ has rows that are linearly independent,
- $A$ has columns that are linearly independent,
- $\det(A) \ne 0$,
- there is no non-zero vector $x$ s.t. $Ax = 0$.
In fact, these four statements are equivalent.

Eigenvalues. Let $A$ be an $N \times N$ matrix. A scalar $\lambda \in \mathbb{C}$ is an eigenvalue of $A$ if $Ax = \lambda x$ for some non-zero vector $x = \mathrm{re}(x) + \sqrt{-1}\,\mathrm{im}(x) \in \mathbb{C}^N$. If so, $x$ is an eigenvector. By the 4th statement on the previous slide, $A^{-1}$ exists $\iff$ zero is not an eigenvalue.

Eigenvalue Diagonalization. Let $A$ be an $N \times N$ matrix with $N$ linearly independent eigenvectors. Then there is an $N \times N$ matrix $X$ and an $N \times N$ diagonal matrix $\Lambda$ such that
$$A = X \Lambda X^{-1},$$
where $\Lambda_{ii}$ is an eigenvalue of $A$, and the columns of $X = [x_1, x_2, \dots, x_N]$ are eigenvectors, $Ax_i = \Lambda_{ii} x_i$.
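The sketch below (added for illustration, with an arbitrary test matrix) diagonalizes a matrix with numpy.linalg.eig and reconstructs $A = X\Lambda X^{-1}$.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])          # arbitrary matrix with distinct eigenvalues

eigvals, X = np.linalg.eig(A)       # columns of X are eigenvectors
Lam = np.diag(eigvals)

# A = X Lambda X^{-1}
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))
# and each column satisfies A x_i = Lambda_ii x_i
for i in range(len(eigvals)):
    assert np.allclose(A @ X[:, i], eigvals[i] * X[:, i])
```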

Real Symmetric Matrices. A matrix is symmetric if $A = A'$. Proposition: A symmetric matrix has real eigenvalues. Proof: If $Ax = \lambda x$ then
$$\lambda^2 \|x\|^2 = \lambda^2\, x^*x = x^*A^2x = x^*A'Ax = \|Ax\|^2,$$
which is real and nonnegative (here $x^* = \mathrm{re}(x)' - \sqrt{-1}\,\mathrm{im}(x)'$, i.e. the conjugate transpose). Hence $\lambda^2 \ge 0$, so $\lambda$ cannot be imaginary or complex.

Real Symmetric Matrices. Proposition: Let $A = A'$ be a real matrix. Then $A$'s eigenvectors form an orthogonal basis of $\mathbb{R}^N$, and the diagonalization is simplified:
$$A = X \Lambda X', \qquad\text{i.e. } X^{-1} = X'.$$
Basic idea of proof: Suppose that $Ax = \lambda x$ and $Ay = \alpha y$ with $\alpha \ne \lambda$. Then
$$\lambda x'y = (Ax)'y = x'A'y = x'Ay = \alpha x'y,$$
and so it must be that $x'y = 0$.
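An added numpy sketch (the symmetric test matrix is arbitrary) showing that np.linalg.eigh returns real eigenvalues and an orthogonal eigenvector matrix, so that $A = X\Lambda X'$.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
A = (B + B.T) / 2                    # symmetrize an arbitrary random matrix

eigvals, X = np.linalg.eigh(A)       # eigh: for symmetric/Hermitian matrices
assert np.all(np.isreal(eigvals))                    # real eigenvalues
assert np.allclose(X.T @ X, np.eye(4))               # orthonormal eigenvectors: X'X = I
assert np.allclose(A, X @ np.diag(eigvals) @ X.T)    # A = X Lambda X'
```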

Skew-Symmetric Matrices. A matrix is skew-symmetric if $A' = -A$. Proposition: A skew-symmetric matrix has purely imaginary eigenvalues. Proof: If $Ax = \lambda x$ then
$$\lambda^2 \|x\|^2 = \lambda^2\, x^*x = x^*A^2x = -x^*A'Ax = -\|Ax\|^2,$$
which is nonpositive. Hence $\lambda^2 \le 0$, so $\lambda$ must be purely imaginary.

Positive Definiteness. Definition: A matrix $A \in \mathbb{R}^{N \times N}$ is positive definite iff $x'Ax > 0$ for all $x \in \mathbb{R}^N \setminus \{0\}$. It is only positive semi-definite iff $x'Ax \ge 0$ for all $x \in \mathbb{R}^N$.

Positive Definiteness $\Rightarrow$ Symmetric. Proposition: If $A \in \mathbb{R}^{N \times N}$ is positive definite, then $A = A'$, i.e. it is symmetric positive definite (SPD). Proof sketch: For any $x$ we have
$$0 < x'Ax = \tfrac{1}{2}\, x'\underbrace{(A + A')}_{\text{sym.}}x + \tfrac{1}{2}\, x'\underbrace{(A - A')}_{\text{skew-sym.}}x,$$
but if $A - A' \ne 0$ then there exists an eigenvector $x$ s.t. $(A - A')x = \alpha x$ with $\alpha \in \mathbb{C} \setminus \mathbb{R}$, which contradicts the inequality.

Eigenvalues of an SPD Matrix. Proposition: If $A$ is SPD, then it has all positive eigenvalues. Proof: By definition, for any $y \in \mathbb{R}^N \setminus \{0\}$ it holds that $y'Ay > 0$. In particular, for any $\lambda$ and $x \ne 0$ s.t. $Ax = \lambda x$, we have $\lambda\|x\|^2 = x'Ax > 0$ and so $\lambda > 0$. Consequence: for $A$ SPD there is no non-zero $x$ s.t. $Ax = 0$, and so $A^{-1}$ exists.
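As an added sketch (not in the slides), one practical way to test whether a symmetric matrix is SPD is to check that all its eigenvalues are positive, or equivalently that a Cholesky factorization succeeds; the test matrices below are arbitrary.

```python
import numpy as np

def is_spd(A, tol=1e-12):
    """Check symmetry and strictly positive eigenvalues."""
    if not np.allclose(A, A.T):
        return False
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
print(is_spd(A))                        # True: both eigenvalues are positive
print(is_spd(np.array([[1.0, 2.0],
                       [2.0, 1.0]])))   # False: eigenvalues are 3 and -1

# Equivalent check: Cholesky factorization succeeds only for SPD matrices.
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)
```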

So Can/How Do We Invert $C$? The invertibility of the covariance matrix is very important:
- we've seen a need for it in the efficient frontier and the Markowitz problem,
- if it's not invertible then there are redundant assets,
- we'll need it to be invertible when we discuss the multivariate Gaussian distribution.
If there is no risk-free asset, our covariance matrix is practically invertible by definition:
$$0 < \mathrm{var}\Big(\sum_i w_i r_i\Big) = \sum_{i,j} w_i w_j C_{ij} = w'Cw,$$
where $w \in \mathbb{R}^N \setminus \{0\}$ is any allocation vector ($w$ need not sum to 1). Hence $C$ is SPD and $C^{-1}$ exists.

So How Do We Know $\hat C$ Is a Covariance Matrix? Recall,
$$\hat C = \frac{1}{T-1}\sum_{t=1}^T (r^t - \hat{\bar r})(r^t - \hat{\bar r})'.$$
Each summand $(r^t - \hat{\bar r})(r^t - \hat{\bar r})'$ is symmetric, and a sum of symmetric matrices is symmetric. But each summand is not invertible, because there exists a vector $y$ s.t. $y'(r^t - \hat{\bar r}) = 0$, which means that zero is an eigenvalue:
$$(r^t - \hat{\bar r})\underbrace{(r^t - \hat{\bar r})'y}_{=0} = 0.$$

So How Do We Know $\hat C$ Is a Covariance Matrix? If $\mathrm{span}(r^1 - \hat{\bar r},\, r^2 - \hat{\bar r},\, \dots,\, r^T - \hat{\bar r}) = \mathbb{R}^N$ then $\hat C$ is invertible, so we need $T \ge N$. In order for $\hat C$ to be close to $C$ we will need $T \gg N$. $\hat C$ is then also SPD: for $y \ne 0$,
$$y'\hat C y = y'\Big(\frac{1}{T-1}\sum_{t=1}^T (r^t - \hat{\bar r})(r^t - \hat{\bar r})'\Big)y = \frac{1}{T-1}\sum_{t=1}^T y'(r^t - \hat{\bar r})(r^t - \hat{\bar r})'y = \frac{1}{T-1}\sum_{t=1}^T \big(y'(r^t - \hat{\bar r})\big)^2 > 0,$$
because there is at least one $t$ s.t. $y'(r^t - \hat{\bar r}) \ne 0$.
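An added numpy sketch (with arbitrary simulated returns) illustrating the rank argument: with too few observations the sample covariance is singular, while with $T \gg N$ its smallest eigenvalue stays strictly positive.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10

for T in (5, 11, 1000):
    returns = rng.normal(size=(T, N))
    C_hat = np.cov(returns, rowvar=False)
    rank = np.linalg.matrix_rank(C_hat)
    min_eig = np.linalg.eigvalsh(C_hat).min()
    print(f"T={T:5d}  rank={rank:2d}  min eigenvalue={min_eig:.2e}")
# With T=5 the rank is at most T-1=4 < N, so C_hat is singular;
# with T large, the minimum eigenvalue is bounded away from zero.
```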

A Simple Covariance Matrix. Consider $C = (1 - \rho)I + \rho U$, where $I$ is the identity and $U_{ij} = 1$ for all $i, j$. Note that $U\mathbf{1} = N\mathbf{1}$ and $Ux = 0$ if $\mathbf{1}'x = 0$, where $\mathbf{1} = (1, 1, \dots, 1)'$. Hence, for any $x$ we have
$$Cx = (1 - \rho)Ix + \rho Ux = (1 - \rho)x + \rho(\mathbf{1}'x)\mathbf{1},$$
which means $x$ is an eigenvector iff $\mathbf{1}'x = 0$ or $x = a\mathbf{1}$. The eigenvalues of $C$ are $1 - \rho + N\rho$ and $1 - \rho$, and the eigenvectors are $\mathbf{1}$ and the $N - 1$ vectors orthogonal to $\mathbf{1}$, respectively.
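An added numpy check (the values of N and rho are chosen arbitrarily) that this equicorrelation matrix has eigenvalue $1 - \rho + N\rho$ once and $1 - \rho$ with multiplicity $N - 1$.

```python
import numpy as np

N, rho = 6, 0.3
U = np.ones((N, N))
C = (1 - rho) * np.eye(N) + rho * U

eigvals = np.sort(np.linalg.eigvalsh(C))
expected = np.sort([1 - rho] * (N - 1) + [1 - rho + N * rho])
assert np.allclose(eigvals, expected)

# The all-ones vector is the eigenvector for the large eigenvalue 1 - rho + N*rho.
ones = np.ones(N)
assert np.allclose(C @ ones, (1 - rho + N * rho) * ones)
```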

Positive Definite (Complex Hermitian Matrices). Let $C$ be a square matrix, i.e. $C \in \mathbb{C}^{N \times N}$. $C$ is positive definite if
$$z^*Cz > 0 \quad \forall z \in \mathbb{C}^N \setminus \{0\},$$
where $\mathbb{C}^N$ is the space of $N$-dimensional complex vectors and $z^*$ is the conjugate transpose, $z^* = (z_r + iz_i)^* = z_r' - iz_i'$. $C$ is positive semi-definite if
$$z^*Cz \ge 0 \quad \forall z \in \mathbb{C}^N \setminus \{0\}.$$
If $C$ is positive definite and real, then it must also be symmetric, i.e. $C_{nm} = C_{mn}$ for all $m, n$.

Gershgorin Circles. Proposition: Every eigenvalue of the matrix $A$ lies in one of the Gershgorin circles,
$$|\lambda - A_{ii}| \le \sum_{j \ne i} |A_{ij}| \quad \text{for at least one } i \in \{1, \dots, N\},$$
where $\lambda$ is an eigenvalue of $A$. Proof: Let $\lambda$ be an eigenvalue and let $x$ be an eigenvector. Choose $i$ so that $|x_i| = \max_j |x_j|$. We have $\sum_j A_{ij}x_j = \lambda x_i$. W.l.o.g. we can take $x_i = 1$, so that $\sum_{j \ne i} A_{ij}x_j = \lambda - A_{ii}$, and taking absolute values yields the result:
$$|\lambda - A_{ii}| = \Big|\sum_{j \ne i} A_{ij}x_j\Big| \le \sum_{j \ne i} |A_{ij}|\,|x_j| \le \sum_{j \ne i} |A_{ij}|.$$
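A final added sketch (arbitrary test matrix) verifying Gershgorin's theorem numerically: every eigenvalue sits inside at least one disc centered at a diagonal entry, with radius equal to the off-diagonal absolute row sum.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.5],
              [0.2, -2.0, 0.3],
              [0.1, 0.4, 1.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # sum_{j != i} |A_ij|

for lam in np.linalg.eigvals(A):
    # Each eigenvalue lies in at least one Gershgorin disc.
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)
```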