Linear Algebra Review

1. Linear Algebra Review

Yang Feng (Columbia University)

2. Definition of Matrix

A matrix is a rectangular array of elements arranged in rows and columns, for example

\[ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} \]

The dimension of a matrix is its number of rows and columns, expressed as 3 × 2 in this case.
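A minimal base R sketch of these definitions (the numeric entries are my own illustration, not from the slides):

    # build a 3 x 2 matrix; R fills column by column
    A <- matrix(c(1, 2, 3, 4, 5, 6), nrow = 3, ncol = 2)
    dim(A)     # c(3, 2): the dimension, rows then columns
    A[2, 1]    # the element in row 2, column 1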

3. Indexing a Matrix

A rectangular array of elements arranged in rows and columns:

\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \]

A matrix can also be written compactly as A = [a_{ij}], i = 1, 2; j = 1, 2, 3.

4. Square Matrix and Column Vector

A square matrix has an equal number of rows and columns:

\[ A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \]

A column vector is a matrix with a single column:

\[ c = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{bmatrix} \]

All vectors (row or column) are matrices, and all scalars are 1 × 1 matrices.

5. Transpose

The transpose A' of a matrix A is the matrix obtained by interchanging its rows and columns, so that the (i, j) element of A' is a_{ji}. For example,

\[ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}, \qquad A' = \begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \end{bmatrix} \]
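In base R the transpose is t(); a quick illustrative check (the entries are made up):

    A <- matrix(c(2, 7, 3, 5, 10, 4), nrow = 3, ncol = 2)   # a 3 x 2 matrix
    t(A)                   # its 2 x 3 transpose: rows and columns interchanged
    all(t(t(A)) == A)      # TRUE, since (A')' = A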

6. Equality of Matrices

Two matrices are equal if they have the same dimension and all corresponding elements are equal:

\[ A = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}, \qquad B = \begin{bmatrix} 4 \\ 7 \\ 3 \end{bmatrix} \]

A = B implies a_1 = 4, a_2 = 7, a_3 = 3.

7. Matrix Addition and Subtraction

Matrices of the same dimension are added and subtracted element by element:

\[ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}, \qquad A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{bmatrix} \]

8. Multiplication of a Matrix by a Scalar

\[ A = \begin{bmatrix} 2 & 7 \\ 9 & 3 \end{bmatrix}, \qquad kA = k \begin{bmatrix} 2 & 7 \\ 9 & 3 \end{bmatrix} = \begin{bmatrix} 2k & 7k \\ 9k & 3k \end{bmatrix} \]

9. Multiplication of Two Matrices

\[ A_{l \times m} B_{m \times n} = (AB)_{l \times n}, \qquad (AB)_{ij} = \sum_{k=1}^{m} A_{ik} B_{kj} \]

for i = 1, ..., l and j = 1, ..., n.
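In R, %*% is matrix multiplication (plain * is elementwise); a small sketch with illustrative matrices:

    A <- matrix(1:6, nrow = 2)    # 2 x 3
    B <- matrix(1:12, nrow = 3)   # 3 x 4
    C <- A %*% B                  # (2 x 3)(3 x 4) = 2 x 4
    # element (i, j) is the sum over k of A[i, k] * B[k, j]
    C[1, 2] == sum(A[1, ] * B[, 2])   # TRUE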

10. Another Matrix Multiplication Example

\[ A = \begin{bmatrix} 1 & 3 \end{bmatrix}, \qquad B = \begin{bmatrix} 5 \\ 2 \end{bmatrix}, \qquad AB = \begin{bmatrix} 1 \cdot 5 + 3 \cdot 2 \end{bmatrix} = \begin{bmatrix} 11 \end{bmatrix} \]

11. Special Matrices

If A' = A, then A is a symmetric matrix. If the off-diagonal elements of a matrix are all zero, it is called a diagonal matrix:

\[ A = \begin{bmatrix} a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3 \end{bmatrix}, \qquad B = \begin{bmatrix} b_1 & 0 & 0 & 0 \\ 0 & b_2 & 0 & 0 \\ 0 & 0 & b_3 & 0 \\ 0 & 0 & 0 & b_4 \end{bmatrix} \]

12. Identity Matrix

A diagonal matrix whose diagonal entries are all ones is an identity matrix:

\[ I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]

Multiplication by an identity matrix leaves the pre- or post-multiplied matrix unchanged:

\[ AI = IA = A \]

13. Vector and Matrix with All Elements Equal to One

\[ \mathbf{1}_n = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}, \qquad J_n = \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{bmatrix} \]

so that

\[ \mathbf{1}_n' \mathbf{1}_n = n, \qquad \mathbf{1}_n \mathbf{1}_n' = J_n \]

14. Linear Dependence and Rank of Matrix

Consider a matrix A, viewed as a collection of column vectors, in which the third column vector is a multiple of the first column vector.

15. Linear Dependence

When m scalars k_1, ..., k_m, not all zero, can be found such that

\[ k_1 A_1 + \cdots + k_m A_m = 0, \]

where 0 denotes the zero column vector and A_i is the i-th column of matrix A, the m column vectors are called linearly dependent. If the only set of scalars for which the equality holds is k_1 = 0, ..., k_m = 0, the set of m column vectors is linearly independent. The columns of the previous example matrix are linearly dependent: if the third column is c times the first, then taking k_1 = c, k_3 = -1, and all other k_i = 0 gives the zero vector.

16. Rank of Matrix

The rank of a matrix is defined to be the maximum number of linearly independent columns in the matrix. Rank properties include:

- The rank of a matrix is unique.
- The rank of a matrix can equivalently be defined as the maximum number of linearly independent rows.
- The rank of an r × c matrix cannot exceed min(r, c).
- The row and column rank of a matrix are equal.
- The rank of a matrix is preserved under nonsingular transformations; i.e., let A (n × n) and C (k × k) be nonsingular matrices. Then for any n × k matrix B we have

\[ \operatorname{rank}(B) = \operatorname{rank}(AB) = \operatorname{rank}(BC) \]
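In R, one convenient way to obtain the rank is via the QR decomposition; a sketch reproducing the dependent-columns situation above (the entries are my own):

    # third column is 5 times the first, so only two columns are independent
    A <- cbind(c(1, 2, 3), c(2, 2, 4), 5 * c(1, 2, 3))
    qr(A)$rank    # 2, which is less than min(3, 3) = 3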

17. Inverse of Matrix

The inverse is like a reciprocal:

\[ 6 \cdot \tfrac{1}{6} = \tfrac{1}{6} \cdot 6 = 1, \qquad x x^{-1} = x^{-1} x = 1 \]

But for matrices:

\[ A A^{-1} = A^{-1} A = I \]

18. Example

\[ A = \begin{bmatrix} 2 & 4 \\ 3 & 1 \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} -0.1 & 0.4 \\ 0.3 & -0.2 \end{bmatrix}, \qquad A^{-1} A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]

More generally, for

\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad A^{-1} = \frac{1}{D} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}, \qquad \text{where } D = ad - bc. \]

19. Inverses of Diagonal Matrices Are Easy

\[ A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 2 \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} 1/3 & 0 & 0 \\ 0 & 1/4 & 0 \\ 0 & 0 & 1/2 \end{bmatrix} \]

20. Finding the Inverse

Finding an inverse takes (for general matrices with no special structure) O(n^3) operations, where n is the number of rows in the matrix. We will assume that numerical packages can do this for us; in R, solve(A) gives the inverse of the matrix A.
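A quick check of solve(), using the 2 x 2 matrix from slide 18:

    A <- matrix(c(2, 3, 4, 1), nrow = 2)   # columns (2, 3) and (4, 1), i.e. [2 4; 3 1]
    Ainv <- solve(A)                        # the inverse: [-0.1 0.4; 0.3 -0.2]
    round(Ainv %*% A, 10)                   # the 2 x 2 identity, up to rounding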

21. Example

Solving a system of simultaneous equations:

\[ 2y_1 + 4y_2 = 20 \]
\[ 3y_1 + y_2 = 10 \]

\[ \begin{bmatrix} 2 & 4 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 20 \\ 10 \end{bmatrix} \qquad \Longrightarrow \qquad \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 3 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 20 \\ 10 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \end{bmatrix} \]
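In practice one does not form the inverse explicitly: solve(A, b) solves the system directly, and on this example it recovers y_1 = 2, y_2 = 4:

    A <- matrix(c(2, 3, 4, 1), nrow = 2)   # coefficient matrix [2 4; 3 1]
    b <- c(20, 10)
    solve(A, b)                             # c(2, 4)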

22. List of Useful Matrix Properties

A + B = B + A
(A + B) + C = A + (B + C)
(AB)C = A(BC)
C(A + B) = CA + CB
k(A + B) = kA + kB
(A')' = A
(A + B)' = A' + B'
(AB)' = B'A'
(ABC)' = C'B'A'
(AB)^{-1} = B^{-1}A^{-1}
(ABC)^{-1} = C^{-1}B^{-1}A^{-1}
(A^{-1})^{-1} = A
(A')^{-1} = (A^{-1})'

23. Random Vectors and Matrices

Let's say we have a vector consisting of three random variables:

\[ Y = \begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \end{bmatrix} \]

The expectation of a random vector is defined as

\[ E(Y) = \begin{bmatrix} E(Y_1) \\ E(Y_2) \\ E(Y_3) \end{bmatrix} \]

24. Expectation of a Random Matrix

The expectation of a random matrix is defined similarly:

\[ E(Y) = [E(Y_{ij})], \qquad i = 1, \ldots, n; \; j = 1, \ldots, p \]

25. Variance-Covariance Matrix of a Random Vector

The variances of the three random variables, σ²(Y_i), and the covariances between any two of them, σ(Y_i, Y_j), are assembled in the variance-covariance matrix of Y:

\[ \operatorname{cov}(Y) = \sigma^2\{Y\} = \begin{bmatrix} \sigma^2(Y_1) & \sigma(Y_1, Y_2) & \sigma(Y_1, Y_3) \\ \sigma(Y_2, Y_1) & \sigma^2(Y_2) & \sigma(Y_2, Y_3) \\ \sigma(Y_3, Y_1) & \sigma(Y_3, Y_2) & \sigma^2(Y_3) \end{bmatrix} \]

Remember that σ(Y_2, Y_1) = σ(Y_1, Y_2), so the covariance matrix is symmetric.
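A small simulation sketch (simulated data, illustrative only): the base R function cov() estimates this matrix from a sample, and the estimate is symmetric as the slide notes:

    set.seed(1)
    Y <- matrix(rnorm(300), nrow = 100, ncol = 3)   # 100 draws of a random 3-vector
    S <- cov(Y)                                      # 3 x 3 sample covariance matrix
    all.equal(S, t(S))                               # TRUE: symmetric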

26. Derivation of Covariance Matrix

In vector terms the variance-covariance matrix is defined by

\[ \sigma^2\{Y\} = E[(Y - E(Y))(Y - E(Y))'] \]

because

\[ \sigma^2\{Y\} = E\left( \begin{bmatrix} Y_1 - E(Y_1) \\ Y_2 - E(Y_2) \\ Y_3 - E(Y_3) \end{bmatrix} \begin{bmatrix} Y_1 - E(Y_1) & Y_2 - E(Y_2) & Y_3 - E(Y_3) \end{bmatrix} \right) \]

27. Regression Example

Take a regression example with n = 3, where the error terms have constant variance σ²(ε_i) = σ² and are uncorrelated, so that σ(ε_i, ε_j) = 0 for all i ≠ j. The variance-covariance matrix for the random vector ε is

\[ \sigma^2\{\epsilon\} = \begin{bmatrix} \sigma^2 & 0 & 0 \\ 0 & \sigma^2 & 0 \\ 0 & 0 & \sigma^2 \end{bmatrix} \]

which can be written as σ²{ε} = σ²I.

28. Basic Results

If A is a constant matrix and Y is a random matrix, then W = AY is a random matrix, and

\[ E(A) = A \]
\[ E(W) = E(AY) = A\,E(Y) \]
\[ \sigma^2\{W\} = \sigma^2\{AY\} = A\,\sigma^2\{Y\}\,A' \]

29. Multivariate Normal Density

Let Y be a vector of p observations:

\[ Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_p \end{bmatrix} \]

Let µ be the vector of the means of each of the p observations:

\[ \mu = \begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_p \end{bmatrix} \]

30. Multivariate Normal Density

Let Σ be the variance-covariance matrix of Y:

\[ \Sigma = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \ldots & \sigma_{1p} \\ \sigma_{21} & \sigma_2^2 & \ldots & \sigma_{2p} \\ \vdots & \vdots & & \vdots \\ \sigma_{p1} & \sigma_{p2} & \ldots & \sigma_p^2 \end{bmatrix} \]

Then the multivariate normal density is given by

\[ P(Y \mid \mu, \Sigma) = \frac{1}{(2\pi)^{p/2} |\Sigma|^{1/2}} \exp\left[ -\frac{1}{2} (Y - \mu)' \Sigma^{-1} (Y - \mu) \right] \]
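The density translates directly into base R; a hedged sketch, where dmvn, mu, and Sigma are my own illustrative names and values (packages such as mvtnorm provide a ready-made dmvnorm):

    # density of the p-variate normal at point y
    dmvn <- function(y, mu, Sigma) {
      p <- length(mu)
      q <- t(y - mu) %*% solve(Sigma) %*% (y - mu)   # the quadratic form
      as.numeric(exp(-q / 2) / ((2 * pi)^(p / 2) * sqrt(det(Sigma))))
    }
    mu <- c(0, 0)
    Sigma <- matrix(c(1, 0.5, 0.5, 2), nrow = 2)
    dmvn(c(0.3, -0.1), mu, Sigma)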

31. Matrix Simple Linear Regression

Nothing new here, only matrix formalism for previous results. Remember the normal error regression model:

\[ Y_i = \beta_0 + \beta_1 X_i + \epsilon_i, \qquad \epsilon_i \sim N(0, \sigma^2), \qquad i = 1, \ldots, n \]

Expanded out this looks like

\[ Y_1 = \beta_0 + \beta_1 X_1 + \epsilon_1 \]
\[ Y_2 = \beta_0 + \beta_1 X_2 + \epsilon_2 \]
\[ \vdots \]
\[ Y_n = \beta_0 + \beta_1 X_n + \epsilon_n \]

which points towards an obvious matrix formulation.

32. Regression Matrices

If we identify the following matrices

\[ Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}, \qquad X = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix}, \qquad \beta = \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix}, \qquad \epsilon = \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix} \]

we can write the linear regression equations in a compact form:

\[ Y = X\beta + \epsilon \]

33. Regression Matrices

Of course, in the normal regression model the expected value of each of the ε's is zero, so we can write

\[ E(Y) = X\beta \]

This is because E(ε) = 0:

\[ \begin{bmatrix} E(\epsilon_1) \\ E(\epsilon_2) \\ \vdots \\ E(\epsilon_n) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \]

34. Error Covariance

Because the error terms are independent and have constant variance σ²,

\[ \sigma^2\{\epsilon\} = \begin{bmatrix} \sigma^2 & 0 & \ldots & 0 \\ 0 & \sigma^2 & \ldots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \ldots & \sigma^2 \end{bmatrix} = \sigma^2 I \]

35. Matrix Normal Regression Model

In matrix terms the normal regression model can be written as

\[ Y = X\beta + \epsilon, \qquad \text{where } \epsilon \sim N(0, \sigma^2 I) \]

36. Least Squares Estimation

Remember the normal equations that we derived:

\[ n b_0 + b_1 \sum X_i = \sum Y_i \]
\[ b_0 \sum X_i + b_1 \sum X_i^2 = \sum X_i Y_i \]

and the facts that

\[ X'X = \begin{bmatrix} 1 & 1 & \ldots & 1 \\ X_1 & X_2 & \ldots & X_n \end{bmatrix} \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} = \begin{bmatrix} n & \sum X_i \\ \sum X_i & \sum X_i^2 \end{bmatrix} \]

\[ X'Y = \begin{bmatrix} 1 & 1 & \ldots & 1 \\ X_1 & X_2 & \ldots & X_n \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} \sum Y_i \\ \sum X_i Y_i \end{bmatrix} \]

37. Least Squares Estimation

Then we can see that the normal equations are equivalent to the matrix equation

\[ X'X b = X'Y, \qquad b = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} \]

with the solution given by

\[ b = (X'X)^{-1} X'Y \]

when (X'X)^{-1} exists.
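A sketch of this formula on simulated data (the data and all variable names here are my own, for illustration); the matrix solution agrees with lm():

    set.seed(42)
    n <- 50
    x <- runif(n)
    Y <- 1 + 2 * x + rnorm(n, sd = 0.3)   # true beta_0 = 1, beta_1 = 2
    X <- cbind(1, x)                       # design matrix with intercept column
    b <- solve(t(X) %*% X, t(X) %*% Y)     # b = (X'X)^{-1} X'Y, solved directly
    cbind(b, coef(lm(Y ~ x)))              # the two columns match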

38. Fitted Values

\[ \hat{Y} = Xb \]

because

\[ \hat{Y} = \begin{bmatrix} \hat{Y}_1 \\ \hat{Y}_2 \\ \vdots \\ \hat{Y}_n \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} = \begin{bmatrix} b_0 + b_1 X_1 \\ b_0 + b_1 X_2 \\ \vdots \\ b_0 + b_1 X_n \end{bmatrix} \]

39. Fitted Values, Hat Matrix

Plugging in b = (X'X)^{-1} X'Y, we have

\[ \hat{Y} = Xb = X(X'X)^{-1}X'Y \]

or

\[ \hat{Y} = HY, \qquad \text{where } H = X(X'X)^{-1}X' \]

is called the hat matrix. Properties of the hat matrix H:
1. symmetric
2. idempotent: HH = H
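Continuing the simulated sketch from the previous slide's code, both properties are easy to verify numerically:

    H <- X %*% solve(t(X) %*% X) %*% t(X)   # the n x n hat matrix
    all.equal(H, t(H))                       # TRUE: symmetric
    all.equal(H %*% H, H)                    # TRUE: idempotent
    Yhat <- H %*% Y                          # fitted values, Y-hat = HY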

40. Residuals

\[ e = Y - \hat{Y} = Y - HY = (I - H)Y \]

The matrix I − H is also symmetric and idempotent. The variance-covariance matrix of e is

\[ \sigma^2\{e\} = \sigma^2 (I - H) \]

and we can estimate it by

\[ s^2\{e\} = MSE\,(I - H) \]

41. Analysis of Variance Results

We know

\[ SSTO = \sum (Y_i - \bar{Y})^2 = \sum Y_i^2 - \frac{(\sum Y_i)^2}{n} \]

With Y'Y = \sum Y_i^2 and J the matrix with entries all equal to 1, we have

\[ \frac{(\sum Y_i)^2}{n} = \frac{1}{n} Y'JY \]

As a result:

\[ SSTO = Y'Y - \frac{1}{n} Y'JY \]

42. Analysis of Variance Results

Also,

\[ SSE = \sum e_i^2 = \sum (Y_i - \hat{Y}_i)^2 \]

can be represented as

\[ SSE = e'e = Y'(I - H)'(I - H)Y = Y'(I - H)Y \]

Notice that H1 = 1, so (I − H)J = 0. Finally, by similar reasoning,

\[ SSR = \left( \left[ H - \tfrac{1}{n} J \right] Y \right)' \left( \left[ H - \tfrac{1}{n} J \right] Y \right) = Y' \left[ H - \tfrac{1}{n} J \right] Y \]

It is easy to check that SSTO = SSE + SSR.
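Still continuing the simulated sketch, the decomposition SSTO = SSE + SSR checks out numerically:

    J <- matrix(1, n, n)                     # the all-ones matrix J
    SSTO <- t(Y) %*% (diag(n) - J / n) %*% Y
    SSE  <- t(Y) %*% (diag(n) - H) %*% Y
    SSR  <- t(Y) %*% (H - J / n) %*% Y
    all.equal(as.numeric(SSTO), as.numeric(SSE + SSR))   # TRUE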

43. Sums of Squares as Quadratic Forms

When n = 2, an example of a quadratic form is

\[ 5Y_1^2 + 6Y_1Y_2 + 4Y_2^2 \]

which can be expressed in matrix terms as

\[ \begin{bmatrix} Y_1 & Y_2 \end{bmatrix} \begin{bmatrix} 5 & 3 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} = Y'AY \]

In general, a quadratic form is defined as

\[ Y'AY = \sum_{i=1}^{n} \sum_{j=1}^{n} A_{ij} Y_i Y_j, \qquad \text{where } A_{ij} = A_{ji} \]

Here A is a symmetric n × n matrix, the matrix of the quadratic form.

44. Quadratic Forms for ANOVA

\[ SSTO = Y' \left[ I - \tfrac{1}{n} J \right] Y \]
\[ SSE = Y' \left[ I - H \right] Y \]
\[ SSR = Y' \left[ H - \tfrac{1}{n} J \right] Y \]

45. Inference in Regression Analysis

Regression coefficients: the variance-covariance matrix of b is

\[ \sigma^2\{b\} = \sigma^2 (X'X)^{-1} \]

Mean response: to estimate the mean response at X_h, define

\[ X_h = \begin{bmatrix} 1 \\ X_h \end{bmatrix}, \qquad \hat{Y}_h = X_h' b \]

The variance-covariance matrix of Ŷ_h is

\[ \sigma^2\{\hat{Y}_h\} = X_h'\,\sigma^2\{b\}\,X_h = \sigma^2 X_h'(X'X)^{-1} X_h \]

Prediction of a new observation:

\[ s^2\{pred\} = MSE \left( 1 + X_h'(X'X)^{-1} X_h \right) \]
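A final sketch, continuing the simulated example, that computes each of these inference quantities (the point X_h = 0.5 is an arbitrary illustrative choice):

    e    <- Y - X %*% b
    MSE  <- sum(e^2) / (n - 2)                # estimates sigma^2
    s2_b <- MSE * solve(t(X) %*% X)           # estimated var-cov matrix of b
    Xh   <- c(1, 0.5)                         # X_h vector for the point X_h = 0.5
    s2_fit  <- MSE * t(Xh) %*% solve(t(X) %*% X) %*% Xh        # var of Y-hat_h
    s2_pred <- MSE * (1 + t(Xh) %*% solve(t(X) %*% X) %*% Xh)  # prediction variance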
