The Matrix Cookbook. Kaare Brandt Petersen, Michael Syskind Pedersen. Version: January 5, 2005
What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list.

Errors: Very likely there are errors, typos, and mistakes, for which we apologize and would be grateful to receive corrections at kbp@imm.dtu.dk.

It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing, and the version will be apparent from the date in the header.

Suggestions: Your suggestions for additional content or elaboration of some topics are most welcome at kbp@imm.dtu.dk.

Acknowledgements: We would like to thank the following for discussions, proofreading, extensive corrections and suggestions: Esben Hoegh-Rasmussen and Vasile Sima.

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.
Contents

1 Basics
  1.1 Trace and Determinants
  1.2 The Special Case 2x2
2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Matrices, Vectors and Scalar Forms
  2.4 Derivatives of Traces
  2.5 Derivatives of Structured Matrices
3 Inverses
  3.1 Exact Relations
  3.2 Implication on Inverses
  3.3 Approximations
  3.4 Generalized Inverse
  3.5 Pseudo Inverse
4 Complex Matrices
  4.1 Complex Derivatives
5 Decompositions
  5.1 Eigenvalues and Eigenvectors
  5.2 Singular Value Decomposition
  5.3 Triangular Decomposition
6 General Statistics and Probability
  6.1 Moments of any distribution
  6.2 Expectations
7 Gaussians
  7.1 One Dimensional
  7.2 Basics
  7.3 Moments
  7.4 Miscellaneous
  7.5 One Dimensional Mixture of Gaussians
  7.6 Mixture of Gaussians
8 Miscellaneous
  8.1 Functions and Series
  8.2 Indices, Entries and Vectors
  8.3 Solutions to Systems of Equations
  8.4 Block matrices
  8.5 Matrix Norms
  8.6 Positive Definite and Semi-definite Matrices
  8.7 Integral Involving Dirac Delta Functions
  8.8 Miscellaneous
A Proofs and Details

Petersen & Pedersen, The Matrix Cookbook (Version: January 5, 2005), Page 2
Notation and Nomenclature

A            Matrix
A_ij         Matrix indexed for some purpose
A_i          Matrix indexed for some purpose
A^ij         Matrix indexed for some purpose
A^n          The n.th power of a square matrix
A^{-1}       The inverse matrix of the matrix A
A^+          The pseudo inverse matrix of the matrix A
A^{1/2}      The square root of a matrix (if unique), not elementwise
(A)_ij       The (i, j).th entry of the matrix A
A_ij         The (i, j).th entry of the matrix A
a            Vector
a_i          Vector indexed for some purpose
a_i          The i.th element of the vector a
a            Scalar

Re z         Real part of a scalar
Re z         Real part of a vector
Re Z         Real part of a matrix
Im z         Imaginary part of a scalar
Im z         Imaginary part of a vector
Im Z         Imaginary part of a matrix

det(A)       Determinant of A
||A||        Matrix norm (subscript if any denotes what norm)
A^T          Transposed matrix
A^*          Complex conjugated matrix
A^H          Transposed and complex conjugated matrix

A o B        Hadamard (elementwise) product
A (x) B      Kronecker product

0            The null matrix. Zero in all entries.
I            The identity matrix
J^ij         The single-entry matrix, 1 at (i, j) and zero elsewhere
Sigma        A positive definite matrix
Lambda       A diagonal matrix
1 Basics

    (AB)^{-1} = B^{-1} A^{-1}
    (ABC...)^{-1} = ... C^{-1} B^{-1} A^{-1}
    (A^T)^{-1} = (A^{-1})^T
    (A + B)^T = A^T + B^T
    (AB)^T = B^T A^T
    (ABC...)^T = ... C^T B^T A^T
    (A^H)^{-1} = (A^{-1})^H
    (A + B)^H = A^H + B^H
    (AB)^H = B^H A^H
    (ABC...)^H = ... C^H B^H A^H

1.1 Trace and Determinants

    Tr(A) = sum_i A_ii = sum_i lambda_i,   lambda_i = eig(A)
    Tr(A) = Tr(A^T)
    Tr(AB) = Tr(BA)
    Tr(A + B) = Tr(A) + Tr(B)
    Tr(ABC) = Tr(BCA) = Tr(CAB)

    det(A) = prod_i lambda_i,   lambda_i = eig(A)
    det(AB) = det(A) det(B),   if A and B are invertible
    det(A^{-1}) = 1/det(A)
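The basic trace and determinant relations above can be spot-checked numerically. The following sketch uses NumPy (not part of the original collection) on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
# Tr(AB) = Tr(BA) (cyclic property)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# trace = sum of eigenvalues, determinant = product of eigenvalues
lam = np.linalg.eigvals(A)
assert np.isclose(np.trace(A), lam.sum().real)
assert np.isclose(np.linalg.det(A), lam.prod().real)
# det(AB) = det(A)det(B) and det(A^{-1}) = 1/det(A)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
```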
1.2 The Special Case 2x2

Consider the matrix A

    A = [ A_11  A_12 ]
        [ A_21  A_22 ]

Determinant and trace

    det(A) = A_11 A_22 - A_12 A_21
    Tr(A) = A_11 + A_22

Eigenvalues

    lambda^2 - lambda Tr(A) + det(A) = 0

    lambda_1 = ( Tr(A) + sqrt( Tr(A)^2 - 4 det(A) ) ) / 2
    lambda_2 = ( Tr(A) - sqrt( Tr(A)^2 - 4 det(A) ) ) / 2

    lambda_1 + lambda_2 = Tr(A)
    lambda_1 lambda_2 = det(A)

Eigenvectors

    v_1 prop. to [ A_12 ; lambda_1 - A_11 ]
    v_2 prop. to [ A_12 ; lambda_2 - A_11 ]

Inverse

    A^{-1} = (1/det(A)) [  A_22  -A_12 ]
                        [ -A_21   A_11 ]
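The closed-form 2x2 expressions can be verified against a general eigenvalue solver; a small NumPy sketch (illustrative, with an arbitrarily chosen matrix):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 0.0]])
tr, det = np.trace(A), np.linalg.det(A)

# closed-form eigenvalues from the characteristic polynomial
disc = np.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert np.allclose(sorted([lam1, lam2]), sorted(np.linalg.eigvals(A).real))
# lambda_1 + lambda_2 = Tr(A), lambda_1 * lambda_2 = det(A)
assert np.isclose(lam1 + lam2, tr) and np.isclose(lam1 * lam2, det)

# closed-form inverse of a 2x2 matrix
Ainv = np.array([[A[1, 1], -A[0, 1]],
                 [-A[1, 0], A[0, 0]]]) / det
assert np.allclose(Ainv, np.linalg.inv(A))
```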
2 Derivatives

This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.5 for differentiation of structured matrices. The basic assumption can be written as a formula

    dX_kl / dX_ij = delta_ik delta_lj

For vector forms,

    (dx/dy)_i = dx_i/dy    (dx/dy)_i = dx/dy_i    (dx/dy)_ij = dx_i/dy_j

The following rules are general and very useful when deriving the differential of an expression:

    dA = 0                          (A is a constant)    (1)
    d(alpha X) = alpha dX                                (2)
    d(X + Y) = dX + dY                                   (3)
    d(Tr(X)) = Tr(dX)                                    (4)
    d(XY) = (dX)Y + X(dY)                                (5)
    d(X o Y) = (dX) o Y + X o (dY)                       (6)
    d(X (x) Y) = (dX) (x) Y + X (x) (dY)                 (7)
    d(X^{-1}) = -X^{-1}(dX)X^{-1}                        (8)
    d(det(X)) = det(X) Tr(X^{-1} dX)                     (9)
    d(ln(det(X))) = Tr(X^{-1} dX)                        (10)
    dX^T = (dX)^T                                        (11)
    dX^H = (dX)^H                                        (12)

2.1 Derivatives of a Determinant

2.1.1 General form

    d det(Y)/dx = det(Y) Tr( Y^{-1} dY/dx )

2.1.2 Linear forms

    d det(X)/dX = det(X) (X^{-1})^T

    d det(AXB)/dX = det(AXB) (X^{-1})^T = det(AXB) (X^T)^{-1}
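The determinant derivatives above can be checked against finite differences. A minimal NumPy sketch (not part of the original text; the matrix is shifted by a multiple of the identity only to keep it well conditioned):

```python
import numpy as np

def num_grad(f, X, eps=1e-6):
    """Central finite-difference gradient of a scalar function f w.r.t. matrix X."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # well-conditioned test matrix

# d det(X)/dX = det(X) (X^{-1})^T
analytic = np.linalg.det(X) * np.linalg.inv(X).T
assert np.allclose(num_grad(np.linalg.det, X), analytic, atol=1e-4)

# d ln det(X)/dX = (X^{-1})^T
assert np.allclose(num_grad(lambda Y: np.log(np.linalg.det(Y)), X),
                   np.linalg.inv(X).T, atol=1e-4)
```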
2.1.3 Square forms

If X is square and invertible, then

    d det(X^T A X)/dX = 2 det(X^T A X) X^{-T}

If X is not square but A is symmetric, then

    d det(X^T A X)/dX = 2 det(X^T A X) A X (X^T A X)^{-1}

If X is not square and A is not symmetric, then

    d det(X^T A X)/dX = det(X^T A X) ( A X (X^T A X)^{-1} + A^T X (X^T A^T X)^{-1} )    (13)

2.1.4 Other nonlinear forms

Some special cases are (see 8, 7)

    d ln det(X^T X)/dX = 2 (X^+)^T
    d ln det(X^T X)/dX^+ = -2 X^T
    d ln |det(X)|/dX = (X^{-1})^T = (X^T)^{-1}
    d det(X^k)/dX = k det(X^k) X^{-T}

2.2 Derivatives of an Inverse

From 5 we have the basic identity

    dY^{-1}/dx = -Y^{-1} (dY/dx) Y^{-1}

from which it follows

    d(X^{-1})_kl / dX_ij = -(X^{-1})_ki (X^{-1})_jl

    d a^T X^{-1} b / dX = -X^{-T} a b^T X^{-T}

    d det(X^{-1}) / dX = -det(X^{-1}) (X^{-1})^T

    d Tr(A X^{-1} B) / dX = -(X^{-1} B A X^{-1})^T
2.3 Derivatives of Matrices, Vectors and Scalar Forms

2.3.1 First Order

    d x^T b/dx = d b^T x/dx = b
    d a^T X b/dX = a b^T
    d a^T X^T b/dX = b a^T
    d a^T X a/dX = d a^T X^T a/dX = a a^T
    d X/dX_ij = J^ij
    d (XA)_ij/dX_mn = delta_im (A)_nj = (J^mn A)_ij
    d (X^T A)_ij/dX_mn = delta_in (A)_mj = (J^nm A)_ij

2.3.2 Second Order

    d/dX_ij sum_klmn X_kl X_mn = 2 sum_kl X_kl
    d b^T X^T X c/dX = X (b c^T + c b^T)
    d (Bx + b)^T C (Dx + d)/dx = B^T C (Dx + d) + D^T C^T (Bx + b)
    d (X^T B X)_kl/dX_ij = delta_lj (X^T B)_ki + delta_kj (BX)_il
    d (X^T B X)/dX_ij = X^T B J^ij + J^ji B X,    (J^ij)_kl = delta_ik delta_jl

See Sec 8.2 for useful properties of the single-entry matrix J^ij.

    d x^T B x/dx = (B + B^T) x
    d b^T X^T D X c/dX = D^T X b c^T + D X c b^T
    d/dX (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T
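Two of the scalar-form derivatives above can be sketched numerically with NumPy (illustrative finite-difference check, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)
eps = 1e-6

# d x^T B x / dx = (B + B^T) x, checked by central differences
f = lambda v: v @ B @ v
num = np.array([(f(x + eps * np.eye(n)[i]) - f(x - eps * np.eye(n)[i])) / (2 * eps)
                for i in range(n)])
assert np.allclose(num, (B + B.T) @ x, atol=1e-5)

# d a^T X b / dX = a b^T, checked entrywise
a, b = rng.standard_normal(n), rng.standard_normal(n)
X = rng.standard_normal((n, n))
g = lambda Y: a @ Y @ b
num2 = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        num2[i, j] = (g(X + E) - g(X - E)) / (2 * eps)
assert np.allclose(num2, np.outer(a, b), atol=1e-5)
```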
Assume W is symmetric, then

    d/ds (x - As)^T W (x - As) = -2 A^T W (x - As)
    d/dx (x - As)^T W (x - As) = 2 W (x - As)
    d/dA (x - As)^T W (x - As) = -2 W (x - As) s^T

2.3.3 Higher order and non-linear

    d/dX a^T X^n b = sum_{r=0}^{n-1} (X^r)^T a b^T (X^{n-1-r})^T    (14)

    d/dX a^T (X^n)^T X^n b = sum_{r=0}^{n-1} [ X^{n-1-r} a b^T (X^n)^T X^r
                             + (X^r)^T X^n a b^T (X^{n-1-r})^T ]    (15)

See A.0.1 for a proof.

Assume s and r are functions of x, i.e. s = s(x), r = r(x), and that A is a constant, then

    d/dx s^T A s = (ds/dx)^T (A + A^T) s
    d/dx s^T A r = (ds/dx)^T A r + (dr/dx)^T A^T s

2.3.4 Gradient and Hessian

Using the above we have for the gradient and the Hessian of

    f = x^T A x + b^T x

    grad f = df/dx = (A + A^T) x + b

    d^2 f / dx dx^T = A + A^T
2.4 Derivatives of Traces

2.4.1 First Order

    d/dX Tr(X) = I
    d/dX Tr(XB) = B^T    (16)
    d/dX Tr(BXC) = B^T C^T
    d/dX Tr(B X^T C) = C B
    d/dX Tr(X^T C) = C
    d/dX Tr(B X^T) = B

2.4.2 Second Order

See 7.

    d/dX Tr(X^2) = 2 X^T
    d/dX Tr(X^2 B) = (XB + BX)^T
    d/dX Tr(X^T B X) = B X + B^T X
    d/dX Tr(X B X^T) = X B^T + X B
    d/dX Tr(X^T X) = 2 X
    d/dX Tr(B X X^T) = (B + B^T) X
    d/dX Tr(B^T X^T C X B) = C^T X B B^T + C X B B^T
    d/dX Tr(X^T B X C) = B X C + B^T X C^T
    d/dX Tr(A X B X^T C) = A^T C^T X B^T + C A X B
    d/dX Tr( (A X b + c)(A X b + c)^T ) = 2 A^T (A X b + c) b^T
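A few of the trace derivatives can be verified by finite differences; a NumPy sketch (illustrative, not part of the original collection):

```python
import numpy as np

def num_grad(f, X, eps=1e-6):
    """Central finite-difference gradient of a scalar function f w.r.t. matrix X."""
    G = np.zeros_like(X)
    for idx in np.ndindex(*X.shape):
        E = np.zeros_like(X)
        E[idx] = eps
        G[idx] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

rng = np.random.default_rng(9)
X = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# d Tr(XB)/dX = B^T
assert np.allclose(num_grad(lambda Y: np.trace(Y @ B), X), B.T, atol=1e-5)
# d Tr(X^T B X)/dX = B X + B^T X
assert np.allclose(num_grad(lambda Y: np.trace(Y.T @ B @ Y), X),
                   B @ X + B.T @ X, atol=1e-5)
# d Tr(X^2)/dX = 2 X^T
assert np.allclose(num_grad(lambda Y: np.trace(Y @ Y), X), 2 * X.T, atol=1e-5)
```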
2.4.3 Higher Order

    d/dX Tr(X^k) = k (X^{k-1})^T

    d/dX Tr(A X^k) = sum_{r=0}^{k-1} (X^r A X^{k-r-1})^T

    d/dX Tr( B^T X^T C X X^T C X B ) = C X X^T C X B B^T + C^T X B B^T X^T C^T X
                                       + C X B B^T X^T C X + C^T X X^T C^T X B B^T

2.4.4 Other

    d/dX Tr(A X^{-1} B) = -(X^{-1} B A X^{-1})^T = -X^{-T} A^T B^T X^{-T}

Assume B and C to be symmetric, then

    d/dX Tr( (X^T C X)^{-1} A ) = -(C X (X^T C X)^{-1}) (A + A^T) (X^T C X)^{-1}

    d/dX Tr( (X^T C X)^{-1} (X^T B X) ) = -2 C X (X^T C X)^{-1} X^T B X (X^T C X)^{-1}
                                          + 2 B X (X^T C X)^{-1}

See 7.

2.5 Derivatives of Structured Matrices

Assume that the matrix A has some structure, i.e. is symmetric, Toeplitz, etc. In that case the derivatives of the previous section do not apply in general. Instead, consider the following general rule for differentiating a scalar function f(A)

    df/dA_ij = sum_kl (df/dA_kl) (dA_kl/dA_ij) = Tr( (df/dA)^T dA/dA_ij )

The matrix A differentiated with respect to one of its own entries is in this document referred to as the structure matrix of A and is defined simply by

    dA/dA_ij = S^ij

If A has no special structure we have simply S^ij = J^ij, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices; see Sec 8.2 for more examples of structure matrices.
2.5.1 Symmetric

If A is symmetric, then S^ij = J^ij + J^ji - J^ij J^ij and therefore

    df/dA = [ df/dA ] + [ df/dA ]^T - diag[ df/dA ]

That is, e.g., (5, 6):

    d Tr(AX)/dX = A + A^T - (A o I),   see (20)    (17)

    d det(X)/dX = det(X)( 2 X^{-1} - (X^{-1} o I) )    (18)

    d ln det(X)/dX = 2 X^{-1} - (X^{-1} o I)    (19)

2.5.2 Diagonal

If X is diagonal, then (10):

    d Tr(AX)/dX = A o I    (20)
3 Inverses

3.1 Exact Relations

3.1.1 The Woodbury identity

    (A + C B C^T)^{-1} = A^{-1} - A^{-1} C (B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}

If P, R are positive definite, then (see 7)

    (P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1}

3.1.2 The Kailath Variant

    (A + BC)^{-1} = A^{-1} - A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}

See 4.

3.1.3 The Searle Set of Identities

The following set of identities can be found in 3, page 5:

    (I + A^{-1})^{-1} = A (A + I)^{-1}
    (A + B B^T)^{-1} B = A^{-1} B (I + B^T A^{-1} B)^{-1}
    (A^{-1} + B^{-1})^{-1} = A (A + B)^{-1} B = B (A + B)^{-1} A
    A - A (A + B)^{-1} A = B - B (A + B)^{-1} B
    A^{-1} + B^{-1} = A^{-1} (A + B) B^{-1}
    (I + AB)^{-1} = I - A (I + BA)^{-1} B
    (I + AB)^{-1} A = A (I + BA)^{-1}

3.2 Implication on Inverses

See 3.

    (A + B)^{-1} = A^{-1} + B^{-1}   implies   A B^{-1} A = B A^{-1} B

3.2.1 A PosDef identity

Assume P, R to be positive definite and invertible, then

    (P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1}

See ?.
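The Woodbury identity and the Kailath variant can be checked numerically; a NumPy sketch (illustrative, with diagonally shifted random matrices to keep everything invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned, invertible
C = rng.standard_normal((n, k))
B = rng.standard_normal((k, k)) + k * np.eye(k)
inv = np.linalg.inv

# Woodbury: (A + C B C^T)^{-1} = A^{-1} - A^{-1} C (B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}
lhs = inv(A + C @ B @ C.T)
rhs = inv(A) - inv(A) @ C @ inv(inv(B) + C.T @ inv(A) @ C) @ C.T @ inv(A)
assert np.allclose(lhs, rhs)

# Kailath variant: (A + BC)^{-1} = A^{-1} - A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}
Bk = rng.standard_normal((n, k))
Ck = rng.standard_normal((k, n))
lhs2 = inv(A + Bk @ Ck)
rhs2 = inv(A) - inv(A) @ Bk @ inv(np.eye(k) + Ck @ inv(A) @ Bk) @ Ck @ inv(A)
assert np.allclose(lhs2, rhs2)
```

Note the practical point behind Woodbury: when k is much smaller than n, the right-hand side only inverts k x k matrices (plus a one-time A^{-1}).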
3.3 Approximations

    (I + A)^{-1} = I - A + A^2 - A^3 + ...

    A - A (I + A)^{-1} A ~ I - A^{-1}    if A large and symmetric

If sigma^2 is small then

    (Q + sigma^2 M)^{-1} ~ Q^{-1} - sigma^2 Q^{-1} M Q^{-1}

3.4 Generalized Inverse

3.4.1 Definition

A generalized inverse matrix of the matrix A is any matrix A^- such that

    A A^- A = A

The matrix A^- is not unique.

3.5 Pseudo Inverse

3.5.1 Definition

The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A^+ that fulfils

    I    A A^+ A = A
    II   A^+ A A^+ = A^+
    III  A A^+ symmetric
    IV   A^+ A symmetric

The matrix A^+ is unique and does always exist.

3.5.2 Properties

Assume A^+ to be the pseudo-inverse of A, then (see 3)

    (A^+)^+ = A
    (A^T)^+ = (A^+)^T
    (cA)^+ = (1/c) A^+
    (A^T A)^+ = A^+ (A^T)^+
    (A A^T)^+ = (A^T)^+ A^+

Assume A to have full rank, then

    (A A^+)(A A^+) = A A^+
    (A^+ A)(A^+ A) = A^+ A
    Tr(A A^+) = rank(A A^+)    (see 4)
    Tr(A^+ A) = rank(A^+ A)    (see 4)
3.5.3 Construction

Assume that A has full rank, then

    A  n x n  square  rank(A) = n   =>   A^+ = A^{-1}
    A  n x m  broad   rank(A) = n   =>   A^+ = A^T (A A^T)^{-1}
    A  n x m  tall    rank(A) = m   =>   A^+ = (A^T A)^{-1} A^T

Assume A does not have full rank, i.e. A is n x m and rank(A) = r < min(n, m). The pseudo inverse A^+ can be constructed from the singular value decomposition A = U D V^T, by

    A^+ = V D^+ U^T

A different way is this: There do always exist two matrices C (n x r) and D (r x m) of rank r, such that A = CD. Using these matrices it holds that

    A^+ = D^T (D D^T)^{-1} (C^T C)^{-1} C^T

See 3.
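The constructions above can be compared against a library pseudo-inverse; a NumPy sketch (illustrative, with random full-rank and rank-deficient examples):

```python
import numpy as np

rng = np.random.default_rng(4)

# tall, full column rank: A^+ = (A^T A)^{-1} A^T
A = rng.standard_normal((5, 3))
Ap = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(Ap, np.linalg.pinv(A))

# the four Moore-Penrose conditions
assert np.allclose(A @ Ap @ A, A)            # I
assert np.allclose(Ap @ A @ Ap, Ap)          # II
assert np.allclose((A @ Ap).T, A @ Ap)       # III
assert np.allclose((Ap @ A).T, Ap @ A)       # IV

# rank-deficient case via the factorization M = C D, rank r = 2
C = rng.standard_normal((4, 2))
D = rng.standard_normal((2, 6))
M = C @ D
Mp = D.T @ np.linalg.inv(D @ D.T) @ np.linalg.inv(C.T @ C) @ C.T
assert np.allclose(Mp, np.linalg.pinv(M))
```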
4 Complex Matrices

4.1 Complex Derivatives

In order to differentiate an expression f(z) with respect to a complex z, the Cauchy-Riemann equations have to be satisfied (7):

    df(z)/dz = d Re(f(z))/d Re z + i d Im(f(z))/d Re z    (21)

and

    df(z)/dz = -i d Re(f(z))/d Im z + d Im(f(z))/d Im z    (22)

or in a more compact form:

    d f(z)/d Im z = i d f(z)/d Re z.    (23)

A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used (2, 6):

Generalized Complex Derivative:

    df(z)/dz = (1/2) ( d f(z)/d Re z - i d f(z)/d Im z )    (24)

Conjugate Complex Derivative:

    df(z)/dz* = (1/2) ( d f(z)/d Re z + i d f(z)/d Im z )    (25)

The Generalized Complex Derivative equals the normal derivative when f is an analytic function. For a non-analytic function such as f(z) = z*, the derivative equals zero. The Conjugate Complex Derivative equals zero when f is an analytic function. The Conjugate Complex Derivative has e.g. been used when deriving a complex gradient. Notice:

    df(z)/dz != d f(z)/d Re z + i d f(z)/d Im z    (26)

Complex Gradient Vector: If f is a real function of a complex vector z, then the complex gradient vector is given by (9, p. 798)

    grad f(z) = 2 df(z)/dz*    (27)
              = d f(z)/d Re z + i d f(z)/d Im z
Complex Gradient Matrix: If f is a real function of a complex matrix Z, then the complex gradient matrix is given by (2)

    grad f(Z) = 2 df(Z)/dZ*    (28)
              = d f(Z)/d Re Z + i d f(Z)/d Im Z

These expressions can be used for gradient descent algorithms.

4.1.1 The Chain Rule for complex numbers

The chain rule is a little more complicated when the function of a complex u = f(x) is non-analytic. For a non-analytic function, the following chain rule can be applied (?)

    d g(u)/dx = (dg/du)(du/dx) + (dg/du*)(du*/dx)
              = (dg/du)(du/dx) + (dg*/du)*(du*/dx)    (29)

Notice, if the function is analytic, the second term reduces to zero, and the expression reduces to the normal well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way:

    d g(U)/dX = Tr( (dg(U)/dU)^T dU/dX ) + Tr( (dg(U)/dU*)^T dU*/dX )    (30)

4.1.2 Complex Derivatives of Traces

If the derivatives involve complex numbers, the conjugate transpose is often involved. The most useful way to show a complex derivative is to show the derivative with respect to the real and the imaginary part separately. An easy example is:

    d Tr(X*)/d Re X = d Tr(X^H)/d Re X = I    (31)
    i d Tr(X*)/d Im X = i d Tr(X^H)/d Im X = I    (32)

Since the two results have the same sign, the conjugate complex derivative (25) should be used.

    d Tr(X)/d Re X = d Tr(X^T)/d Re X = I    (33)
    i d Tr(X)/d Im X = i d Tr(X^T)/d Im X = -I    (34)

Here, the two results have different signs, and the generalized complex derivative (24) should be used. Hereby, it can be seen that (??) holds even if X is a complex number.

    d Tr(A X^H)/d Re X = A    (35)
    i d Tr(A X^H)/d Im X = A    (36)
    d Tr(A X*)/d Re X = A^T    (37)
    i d Tr(A X*)/d Im X = A^T    (38)
    d Tr(X X^H)/d Re X = d Tr(X^H X)/d Re X = 2 Re X    (39)
    i d Tr(X X^H)/d Im X = i d Tr(X^H X)/d Im X = i 2 Im X    (40)

By inserting (39) and (40) in (24) and (25), it can be seen that

    d Tr(X X^H)/dX = X*    (41)
    d Tr(X X^H)/dX* = X    (42)

Since the function Tr(X X^H) is a real function of the complex matrix X, the complex gradient matrix (28) is given by

    grad Tr(X X^H) = 2 d Tr(X X^H)/dX* = 2X    (43)

4.1.3 Complex Derivative Involving Determinants

Here, a calculation example is provided. The objective is to find the derivative of det(X^H A X) with respect to X in C^{m x n}. The derivative is found with respect to the real part and the imaginary part of X, by use of (9) and (5). det(X^H A X) can be calculated as (see Sec. A.0.2 for details)

    d det(X^H A X)/dX = (1/2) ( d det(X^H A X)/d Re X - i d det(X^H A X)/d Im X )
                      = det(X^H A X) ( (X^H A X)^{-1} X^H A )^T    (44)

and the complex conjugate derivative yields

    d det(X^H A X)/dX* = (1/2) ( d det(X^H A X)/d Re X + i d det(X^H A X)/d Im X )
                       = det(X^H A X) A X (X^H A X)^{-1}    (45)
5 Decompositions

5.1 Eigenvalues and Eigenvectors

5.1.1 Definition

The eigenvectors v and eigenvalues lambda are the ones satisfying

    A v_i = lambda_i v_i

    A V = V D,    (D)_ij = delta_ij lambda_i

where the columns of V are the vectors v_i.

5.1.2 General Properties

    eig(AB) = eig(BA)
    A is n x m   =>   at most min(n, m) distinct lambda_i
    rank(A) = r  =>   at most r non-zero lambda_i

5.1.3 Symmetric

Assume A is symmetric, then

    V V^T = I                      (i.e. V is orthogonal)
    lambda_i in R                  (i.e. lambda_i is real)
    Tr(A^p) = sum_i lambda_i^p
    eig(I + cA) = 1 + c lambda_i
    eig(A^{-1}) = lambda_i^{-1}

5.2 Singular Value Decomposition

Any n x m matrix A can be written as

    A = U D V^T

where

    U = eigenvectors of A A^T             (n x n)
    D = diag(sqrt(eig(A A^T)))            (n x m)
    V = eigenvectors of A^T A             (m x m)

5.2.1 Symmetric Square decomposed into squares

Assume A to be n x n and symmetric. Then

    A = V D V^T

where D is diagonal with the eigenvalues of A and V is orthogonal and the eigenvectors of A.
5.2.2 Square decomposed into squares

Assume A to be n x n. Then

    A = V D U^T

where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.

5.2.3 Square decomposed into rectangular

Assume V' D' U'^T = 0, then we can expand the SVD of A into

    A = [ V  V' ] [ D  0  ] [ U^T  ]
                  [ 0  D' ] [ U'^T ]

where the SVD of A is A = V D U^T.

5.2.4 Rectangular decomposition I

Assume A is n x m

    A = V D U^T

where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.

5.2.5 Rectangular decomposition II

Assume A is n x m

    A = V D U^T

5.2.6 Rectangular decomposition III

Assume A is n x m

    A = V D U^T

where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.

5.3 Triangular Decomposition

5.3.1 Cholesky-decomposition

Assume A is positive definite, then

    A = B^T B

where B is a unique upper triangular matrix.
6 General Statistics and Probability

6.1 Moments of any distribution

6.1.1 Mean and covariance of linear forms

Assume X and x to be a matrix and a vector of random variables. Then (see 4)

    E[AXB + C] = A E[X] B + C
    Var[Ax] = A Var[x] A^T
    Cov[Ax, By] = A Cov[x, y] B^T

6.1.2 Mean and Variance of Square Forms

Assume A is symmetric, c = E[x] and Sigma = Var[x]. Assume also that all coordinates x_i are independent, have the same central moments mu_1, mu_2, mu_3, mu_4 and denote a = diag(A). Then (see 4)

    E[x^T A x] = Tr(A Sigma) + c^T A c

    Var[x^T A x] = 2 mu_2^2 Tr(A^2) + 4 mu_2 c^T A^2 c + 4 mu_3 c^T A a + (mu_4 - 3 mu_2^2) a^T a

6.2 Expectations

Assume x to be a stochastic vector with mean m, covariance M and central moments v_r = E[(x - m)^r].

6.2.1 Linear Forms

    E[Ax + b] = Am + b
    E[Ax] = Am
    E[x + b] = m + b

6.2.2 Quadratic Forms

    E[(Ax + a)(Bx + b)^T] = A M B^T + (Am + a)(Bm + b)^T
    E[x x^T] = M + m m^T
    E[x a^T x] = (M + m m^T) a
    E[x^T a x^T] = a^T (M + m m^T)
    E[(Ax)(Ax)^T] = A (M + m m^T) A^T
    E[(x + a)(x + a)^T] = M + (m + a)(m + a)^T
    E[(Ax + a)^T (Bx + b)] = Tr(A M B^T) + (Am + a)^T (Bm + b)
    E[x^T x] = Tr(M) + m^T m
    E[x^T A x] = Tr(AM) + m^T A m
    E[(Ax)^T (Ax)] = Tr(A M A^T) + (Am)^T (Am)
    E[(x + a)^T (x + a)] = Tr(M) + (m + a)^T (m + a)

See 7.

6.2.3 Cubic Forms

Assume x to have independent coordinates, then

    E[(Ax + a)(Bx + b)^T (Cx + c)] = A diag(B^T C) v_3
                                     + Tr(B M C^T)(Am + a)
                                     + A M C^T (Bm + b)
                                     + (A M B^T + (Am + a)(Bm + b)^T)(Cm + c)

    E[x x^T x] = v_3 + 2 M m + (Tr(M) + m^T m) m

    E[(Ax + a)(Ax + a)^T (Ax + a)] = A diag(A^T A) v_3
                                     + (2 A M A^T + (Am + a)(Am + a)^T)(Am + a)
                                     + Tr(A M A^T)(Am + a)

    E[(Ax + a) b^T (Cx + c)(Dx + d)^T] = (Am + a) b^T (C M D^T + (Cm + c)(Dm + d)^T)
                                         + (A M C^T + (Am + a)(Cm + c)^T) b (Dm + d)^T
                                         + b^T (Cm + c)(A M D^T - (Am + a)(Dm + d)^T)

See 7.
7 Gaussians

7.1 One Dimensional

7.1.1 Density and Normalization

The density is

    p(s) = (1 / sqrt(2 pi sigma^2)) exp( -(s - mu)^2 / (2 sigma^2) )

Normalization integrals

    int exp( -(s - mu)^2 / (2 sigma^2) ) ds = sqrt(2 pi sigma^2)

    int exp( -(a x^2 + b x + c) ) dx = sqrt(pi/a) exp( (b^2 - 4ac) / (4a) )

    int exp( c_2 x^2 + c_1 x + c_0 ) dx = sqrt(pi/(-c_2)) exp( (c_1^2 - 4 c_2 c_0) / (-4 c_2) )

7.1.2 Completing the Squares

    c_2 x^2 + c_1 x + c_0 = -a(x - b)^2 + w

with

    a = -c_2    b = -c_1/(2 c_2)    w = c_0 - c_1^2/(4 c_2)

or

    c_2 x^2 + c_1 x + c_0 = -(1/(2 sigma^2))(x - mu)^2 + d

with

    mu = -c_1/(2 c_2)    sigma^2 = -1/(2 c_2)    d = c_0 - c_1^2/(4 c_2)

7.1.3 Moments

If the density is expressed by

    p(x) = (1 / sqrt(2 pi sigma^2)) exp( -(x - mu)^2 / (2 sigma^2) )    or    p(x) = C exp( c_2 x^2 + c_1 x )

then the first few basic moments are

    <x>   = mu                            = -c_1/(2 c_2)
    <x^2> = sigma^2 + mu^2                = -1/(2 c_2) + ( c_1/(2 c_2) )^2
    <x^3> = 3 sigma^2 mu + mu^3           = ( c_1/(2 c_2)^2 ) ( 3 - c_1^2/(2 c_2) )
    <x^4> = mu^4 + 6 mu^2 sigma^2 + 3 sigma^4
          = ( c_1/(2 c_2) )^4 + 6 ( c_1/(2 c_2) )^2 ( -1/(2 c_2) ) + 3 ( 1/(2 c_2) )^2

and the central moments are
    <(x - mu)>   = 0          = 0
    <(x - mu)^2> = sigma^2    = -1/(2 c_2)
    <(x - mu)^3> = 0          = 0
    <(x - mu)^4> = 3 sigma^4  = 3 ( 1/(2 c_2) )^2

A kind of pseudo-moments (un-normalized integrals) can easily be derived as

    int exp( c_2 x^2 + c_1 x ) x^n dx = Z <x^n> = sqrt(pi/(-c_2)) exp( -c_1^2/(4 c_2) ) <x^n>

From the un-centralized moments one can derive other entities like

    <x^2> - <x>^2    = sigma^2                    = -1/(2 c_2)
    <x^3> - <x^2><x> = 2 sigma^2 mu               = 2 c_1/(2 c_2)^2
    <x^4> - <x^2>^2  = 2 sigma^4 + 4 mu^2 sigma^2 = ( 2/(2 c_2)^2 )( 1 - c_1^2/c_2 )

7.2 Basics

7.2.1 Density and normalization

The density of x ~ N(m, Sigma) is

    p(x) = (1 / sqrt(det(2 pi Sigma))) exp( -(1/2)(x - m)^T Sigma^{-1} (x - m) )

Note that if x is d-dimensional, then det(2 pi Sigma) = (2 pi)^d det(Sigma).

Integration and normalization

    int exp( -(1/2)(x - m)^T Sigma^{-1} (x - m) ) dx = sqrt(det(2 pi Sigma))

    int exp( -(1/2) x^T A x + b^T x ) dx = sqrt(det(2 pi A^{-1})) exp( (1/2) b^T A^{-1} b )

    int exp( -(1/2) Tr(S^T A S) + Tr(B^T S) ) dS = sqrt(det(2 pi A^{-1})) exp( (1/2) Tr(B^T A^{-1} B) )

The derivatives of the density are

    dp(x)/dx = -p(x) Sigma^{-1} (x - m)

    d^2 p/dx dx^T = p(x) ( Sigma^{-1} (x - m)(x - m)^T Sigma^{-1} - Sigma^{-1} )

7.2.2 Linear combination

Assume x ~ N(m_x, Sigma_x) and y ~ N(m_y, Sigma_y), then

    Ax + By + c ~ N( A m_x + B m_y + c, A Sigma_x A^T + B Sigma_y B^T )
7.2.3 Rearranging Means

    N_x(Ax | m, Sigma) = ( sqrt(det(2 pi (A^T Sigma^{-1} A)^{-1})) / sqrt(det(2 pi Sigma)) )
                         N_x(x | A^{-1} m, (A^T Sigma^{-1} A)^{-1})

7.2.4 Rearranging into squared form

If A is symmetric, then

    -(1/2) x^T A x + b^T x = -(1/2)(x - A^{-1} b)^T A (x - A^{-1} b) + (1/2) b^T A^{-1} b

    -(1/2) Tr(X^T A X) + Tr(B^T X) = -(1/2) Tr( (X - A^{-1} B)^T A (X - A^{-1} B) )
                                     + (1/2) Tr(B^T A^{-1} B)

7.2.5 Sum of two squared forms

In vector formulation (assuming Sigma_1, Sigma_2 are symmetric)

    -(1/2)(x - m_1)^T Sigma_1^{-1} (x - m_1) - (1/2)(x - m_2)^T Sigma_2^{-1} (x - m_2)
        = -(1/2)(x - m_c)^T Sigma_c^{-1} (x - m_c) + C

    Sigma_c^{-1} = Sigma_1^{-1} + Sigma_2^{-1}
    m_c = (Sigma_1^{-1} + Sigma_2^{-1})^{-1} (Sigma_1^{-1} m_1 + Sigma_2^{-1} m_2)
    C = (1/2)(m_1^T Sigma_1^{-1} + m_2^T Sigma_2^{-1})(Sigma_1^{-1} + Sigma_2^{-1})^{-1}
        (Sigma_1^{-1} m_1 + Sigma_2^{-1} m_2)
        - (1/2)( m_1^T Sigma_1^{-1} m_1 + m_2^T Sigma_2^{-1} m_2 )

In a trace formulation (assuming Sigma_1, Sigma_2 are symmetric)

    -(1/2) Tr( (X - M_1)^T Sigma_1^{-1} (X - M_1) ) - (1/2) Tr( (X - M_2)^T Sigma_2^{-1} (X - M_2) )
        = -(1/2) Tr( (X - M_c)^T Sigma_c^{-1} (X - M_c) ) + C

    Sigma_c^{-1} = Sigma_1^{-1} + Sigma_2^{-1}
    M_c = (Sigma_1^{-1} + Sigma_2^{-1})^{-1} (Sigma_1^{-1} M_1 + Sigma_2^{-1} M_2)
    C = (1/2) Tr( (Sigma_1^{-1} M_1 + Sigma_2^{-1} M_2)^T (Sigma_1^{-1} + Sigma_2^{-1})^{-1}
        (Sigma_1^{-1} M_1 + Sigma_2^{-1} M_2) )
        - (1/2) Tr( M_1^T Sigma_1^{-1} M_1 + M_2^T Sigma_2^{-1} M_2 )
7.2.6 Product of gaussian densities

Let N_x(m, Sigma) denote a density of x, then

    N_x(m_1, Sigma_1) N_x(m_2, Sigma_2) = c_c N_x(m_c, Sigma_c)

    c_c = N_{m_1}(m_2, Sigma_1 + Sigma_2)
        = (1 / sqrt(det(2 pi (Sigma_1 + Sigma_2))))
          exp( -(1/2)(m_1 - m_2)^T (Sigma_1 + Sigma_2)^{-1} (m_1 - m_2) )

    m_c = (Sigma_1^{-1} + Sigma_2^{-1})^{-1} (Sigma_1^{-1} m_1 + Sigma_2^{-1} m_2)
    Sigma_c = (Sigma_1^{-1} + Sigma_2^{-1})^{-1}

but note that the product is not normalized as a density of x.

7.3 Moments

7.3.1 Mean and covariance of linear forms

First and second moments. Assume x ~ N(m, Sigma)

    E(x) = m

    Cov(x, x) = Var(x) = Sigma = E(x x^T) - E(x) E(x^T) = E(x x^T) - m m^T

As for any other distribution it holds for gaussians that

    E[Ax] = A E[x]
    Var[Ax] = A Var[x] A^T
    Cov[Ax, By] = A Cov[x, y] B^T

7.3.2 Mean and variance of square forms

Mean and variance of square forms: Assume x ~ N(m, Sigma)

    E(x x^T) = Sigma + m m^T

    E[x^T A x] = Tr(A Sigma) + m^T A m

    Var(x^T A x) = 2 sigma^4 Tr(A^2) + 4 sigma^2 m^T A^2 m    (for Sigma = sigma^2 I)

    E[(x - m')^T A (x - m')] = (m - m')^T A (m - m') + Tr(A Sigma)

Assume x ~ N(0, sigma^2 I) and A and B to be symmetric, then

    Cov(x^T A x, x^T B x) = 2 sigma^4 Tr(AB)
7.3.3 Cubic forms

    E[x b^T x x^T] = m b^T (M + m m^T) + (M + m m^T) b m^T + b^T m (M - m m^T)

7.3.4 Mean of Quartic Forms

    E[x x^T x x^T] = 2 (Sigma + m m^T)^2 + m^T m (Sigma - m m^T) + Tr(Sigma)(Sigma + m m^T)

    E[x x^T A x x^T] = (Sigma + m m^T)(A + A^T)(Sigma + m m^T)
                       + m^T A m (Sigma - m m^T) + Tr(A Sigma)(Sigma + m m^T)

    E[x^T x x^T x] = 2 Tr(Sigma^2) + 4 m^T Sigma m + (Tr(Sigma) + m^T m)^2

    E[x^T A x x^T B x] = Tr( A Sigma (B + B^T) Sigma ) + m^T (A + A^T) Sigma (B + B^T) m
                         + (Tr(A Sigma) + m^T A m)(Tr(B Sigma) + m^T B m)

    E[a^T x b^T x c^T x d^T x] = (a^T (Sigma + m m^T) b)(c^T (Sigma + m m^T) d)
                                 + (a^T (Sigma + m m^T) c)(b^T (Sigma + m m^T) d)
                                 + (a^T (Sigma + m m^T) d)(b^T (Sigma + m m^T) c)
                                 - 2 a^T m b^T m c^T m d^T m

    E[(Ax + a)(Bx + b)^T (Cx + c)(Dx + d)^T]
        = [ A Sigma B^T + (Am + a)(Bm + b)^T ][ C Sigma D^T + (Cm + c)(Dm + d)^T ]
          + [ A Sigma C^T + (Am + a)(Cm + c)^T ][ B Sigma D^T + (Bm + b)(Dm + d)^T ]
          + (Bm + b)^T (Cm + c)[ A Sigma D^T - (Am + a)(Dm + d)^T ]
          + Tr(B Sigma C^T)[ A Sigma D^T + (Am + a)(Dm + d)^T ]

    E[(Ax + a)^T (Bx + b)(Cx + c)^T (Dx + d)]
        = Tr( A Sigma (C^T D + D^T C) Sigma B^T )
          + [ (Am + a)^T B + (Bm + b)^T A ] Sigma [ C^T (Dm + d) + D^T (Cm + c) ]
          + [ Tr(A Sigma B^T) + (Am + a)^T (Bm + b) ][ Tr(C Sigma D^T) + (Cm + c)^T (Dm + d) ]

See 7.

7.3.5 Moments

    E[x] = sum_k rho_k m_k

    Cov(x) = sum_k sum_k' rho_k rho_k' ( Sigma_k + m_k m_k^T - m_k m_k'^T )
7.4 Miscellaneous

7.4.1 Whitening

Assume x ~ N(m, Sigma), then

    z = Sigma^{-1/2} (x - m) ~ N(0, I)

Conversely, having z ~ N(0, I) one can generate data x ~ N(m, Sigma) by setting

    x = Sigma^{1/2} z + m ~ N(m, Sigma)

Note that Sigma^{1/2} means the matrix which fulfils Sigma^{1/2} Sigma^{1/2} = Sigma, and that it exists and is unique since Sigma is positive definite.

7.4.2 The Chi-Square connection

Assume x ~ N(m, Sigma) and x to be n-dimensional, then

    z = (x - m)^T Sigma^{-1} (x - m) ~ chi^2_n

7.4.3 Entropy

Entropy of a D-dimensional gaussian

    H(x) = -int N(m, Sigma) ln N(m, Sigma) dx = ln sqrt(det(2 pi Sigma)) + D/2

7.5 One Dimensional Mixture of Gaussians

7.5.1 Density and Normalization

    p(s) = sum_{k=1}^K ( rho_k / sqrt(2 pi sigma_k^2) ) exp( -(1/2)(s - mu_k)^2 / sigma_k^2 )

7.5.2 Moments

A useful fact of MoG is that

    <x^n> = sum_k rho_k <x^n>_k

where <.>_k denotes average with respect to the k.th component. We can calculate the first four moments from the densities

    p(x) = sum_k rho_k ( 1/sqrt(2 pi sigma_k^2) ) exp( -(1/2)(x - mu_k)^2 / sigma_k^2 )

    p(x) = sum_k rho_k C_k exp( c_k2 x^2 + c_k1 x )

as
    <x>   = sum_k rho_k mu_k                          = sum_k rho_k ( -c_k1/(2 c_k2) )
    <x^2> = sum_k rho_k (sigma_k^2 + mu_k^2)          = sum_k rho_k ( -1/(2 c_k2) + ( c_k1/(2 c_k2) )^2 )
    <x^3> = sum_k rho_k (3 sigma_k^2 mu_k + mu_k^3)   = sum_k rho_k ( c_k1/(2 c_k2)^2 )( 3 - c_k1^2/(2 c_k2) )
    <x^4> = sum_k rho_k (mu_k^4 + 6 mu_k^2 sigma_k^2 + 3 sigma_k^4)
          = sum_k rho_k [ ( c_k1/(2 c_k2) )^4 + 6 ( c_k1/(2 c_k2) )^2 ( -1/(2 c_k2) ) + 3 ( 1/(2 c_k2) )^2 ]

If all the gaussians are centered, i.e. mu_k = 0 for all k, then

    <x>   = 0
    <x^2> = sum_k rho_k sigma_k^2   = sum_k rho_k ( -1/(2 c_k2) )
    <x^3> = 0
    <x^4> = sum_k rho_k 3 sigma_k^4 = sum_k rho_k 3 ( 1/(2 c_k2) )^2

From the un-centralized moments one can derive other entities like

    <x^2> - <x>^2    = sum_{k,k'} rho_k rho_k' ( mu_k^2 + sigma_k^2 - mu_k mu_k' )
    <x^3> - <x^2><x> = sum_{k,k'} rho_k rho_k' ( 3 sigma_k^2 mu_k + mu_k^3 - (sigma_k^2 + mu_k^2) mu_k' )
    <x^4> - <x^2>^2  = sum_{k,k'} rho_k rho_k' ( mu_k^4 + 6 mu_k^2 sigma_k^2 + 3 sigma_k^4
                       - (sigma_k^2 + mu_k^2)(sigma_k'^2 + mu_k'^2) )

7.6 Mixture of Gaussians

7.6.1 Density

The variable x is distributed as a mixture of gaussians if it has the density

    p(x) = sum_{k=1}^K ( rho_k / sqrt(det(2 pi Sigma_k)) ) exp( -(1/2)(x - m_k)^T Sigma_k^{-1} (x - m_k) )

where the rho_k sum to 1 and the Sigma_k all are positive definite.
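The whitening construction of Sec 7.4.1 rests on the unique positive definite square root Sigma^{1/2}; a NumPy sketch of building it via the eigendecomposition (illustrative, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(5)
L = rng.standard_normal((3, 3))
Sigma = L @ L.T + 0.5 * np.eye(3)        # a positive definite covariance

# symmetric square root via the eigendecomposition Sigma = V diag(w) V^T
w, V = np.linalg.eigh(Sigma)
Sqrt = V @ np.diag(np.sqrt(w)) @ V.T
assert np.allclose(Sqrt @ Sqrt, Sigma)   # Sigma^{1/2} Sigma^{1/2} = Sigma

# whitening: z = Sigma^{-1/2}(x - m) has identity covariance,
# since Sigma^{-1/2} Sigma Sigma^{-1/2} = I
W = np.linalg.inv(Sqrt)
assert np.allclose(W @ Sigma @ W.T, np.eye(3))
```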
8 Miscellaneous

8.1 Functions and Series

8.1.1 Finite Series

    (X^n - I)(X - I)^{-1} = I + X + X^2 + ... + X^{n-1}

8.1.2 Taylor Expansion of Scalar Function

Consider some scalar function f(x) which takes the vector x as an argument. This we can Taylor expand around x_0

    f(x) ~ f(x_0) + g(x_0)^T (x - x_0) + (1/2)(x - x_0)^T H(x_0)(x - x_0)

where

    g(x_0) = df(x)/dx |_{x_0}    H(x_0) = d^2 f(x)/dx dx^T |_{x_0}

8.1.3 Taylor Expansion of Vector Functions

8.1.4 Matrix Functions by Infinite Series

As for analytical functions in one dimension, one can define a matrix function for square matrices X by an infinite series

    f(X) = sum_{n=0}^inf c_n X^n

assuming the limit exists and is finite. If the coefficients c_n fulfil sum_n |c_n| |x|^n < inf, then one can prove that the above series exists and is finite, see 1. Thus for any analytical function f(x) there exists a corresponding matrix function f(X) constructed by the Taylor expansion. Using this one can prove the following results:

1) A matrix A is a zero of its own characteristic polynomium:

    p(lambda) = det(I lambda - A) = sum_n c_n lambda^n   =>   p(A) = 0

2) If A is square it holds that

    A = U B U^{-1}   =>   f(A) = U f(B) U^{-1}

3) A useful fact when using power series is that

    A^n -> 0 for n -> inf,    if |A| < 1
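The finite series identity and the decay of matrix powers can be checked numerically; a NumPy sketch (illustrative; the triangular test matrix is chosen so its eigenvalues, the diagonal entries, are known and inside the unit disc):

```python
import numpy as np

rng = np.random.default_rng(6)
# upper triangular with known eigenvalues 0.5, -0.3, 0.2, 0.1 (spectral radius 0.5)
X = np.triu(rng.standard_normal((4, 4)), k=1) + np.diag([0.5, -0.3, 0.2, 0.1])
n = 5

# (X^n - I)(X - I)^{-1} = I + X + ... + X^{n-1}
lhs = (np.linalg.matrix_power(X, n) - np.eye(4)) @ np.linalg.inv(X - np.eye(4))
rhs = sum(np.linalg.matrix_power(X, k) for k in range(n))
assert np.allclose(lhs, rhs)

# powers of X tend to zero when all eigenvalues lie inside the unit disc
assert np.max(np.abs(np.linalg.eigvals(X))) < 1
assert np.allclose(np.linalg.matrix_power(X, 200), 0, atol=1e-12)
```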
8.1.5 Exponential Matrix Function

In analogy to the ordinary scalar exponential function, one can define exponential and logarithmic matrix functions:

    e^A = sum_{n=0}^inf (1/n!) A^n = I + A + (1/2) A^2 + ...

    e^{-A} = sum_{n=0}^inf (1/n!) (-1)^n A^n = I - A + (1/2) A^2 - ...

    e^{tA} = sum_{n=0}^inf (1/n!) (tA)^n = I + tA + (1/2) t^2 A^2 + ...

    ln(I + A) = sum_{n=1}^inf ( (-1)^{n-1} / n ) A^n = A - (1/2) A^2 + (1/3) A^3 - ...

Some of the properties of the exponential function are

    e^A e^B = e^{A+B}    if AB = BA

    (e^A)^{-1} = e^{-A}

    d/dt e^{tA} = A e^{tA},    t in R

8.1.6 Trigonometric Functions

    sin(A) = sum_{n=0}^inf ( (-1)^n A^{2n+1} / (2n+1)! ) = A - (1/3!) A^3 + (1/5!) A^5 - ...

    cos(A) = sum_{n=0}^inf ( (-1)^n A^{2n} / (2n)! ) = I - (1/2!) A^2 + (1/4!) A^4 - ...

8.2 Indices, Entries and Vectors

Let e_i denote the column vector which is 1 on entry i and zero elsewhere, i.e. (e_i)_j = delta_ij, and let J^ij denote the matrix which is 1 on entry (i, j) and zero elsewhere.

8.2.1 Rows and Columns

    i.th row of A = e_i^T A

    j.th column of A = A e_j
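The exponential matrix series of Sec 8.1.5 can be sketched directly from its definition; an illustrative NumPy implementation by truncated power series (not part of the original text, and not how production code should compute e^A):

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via the truncated series I + A + A^2/2! + ..."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n          # term is now A^n / n!
        result = result + term
    return result

rng = np.random.default_rng(6)
A = 0.5 * rng.standard_normal((3, 3))
E = expm_series(A)

# (e^A)^{-1} = e^{-A}
assert np.allclose(np.linalg.inv(E), expm_series(-A))
# e^A e^B = e^{A+B} when AB = BA (here B = 2A trivially commutes with A)
assert np.allclose(E @ expm_series(2 * A), expm_series(3 * A))
```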
8.2.2 Permutations

Let P be some permutation matrix, e.g.

    P = [ 0 1 0 ]  = [ e_2 e_1 e_3 ]  = [ e_2^T ]
        [ 1 0 0 ]                       [ e_1^T ]
        [ 0 0 1 ]                       [ e_3^T ]

then

    A P = [ A e_2  A e_1  A e_3 ]

    P A = [ e_2^T A ]
          [ e_1^T A ]
          [ e_3^T A ]

That is, the first is a matrix which has the columns of A but in permuted sequence, and the second is a matrix which has the rows of A but in permuted sequence.

8.2.3 Swap and Zeros

Assume A to be n x m and J^ij to be m x p

    A J^ij = [ 0  ...  0  A_i  0  ...  0 ]

i.e. an n x p matrix of zeros with the i.th column of A in place of the j.th column. Assume A to be n x m and J^ij to be p x n

    J^ij A = [ 0 ; ... ; 0 ; A_j ; 0 ; ... ; 0 ]

i.e. a p x m matrix of zeros with the j.th row of A in place of the i.th row.

8.2.4 Rewriting product of elements

    A_ki B_jl = (A e_i e_j^T B)_kl = (A J^ij B)_kl
    A_ik B_lj = (A^T e_i e_j^T B^T)_kl = (A^T J^ij B^T)_kl
    A_ik B_jl = (A^T e_i e_j^T B)_kl = (A^T J^ij B)_kl
    A_ki B_lj = (A e_i e_j^T B^T)_kl = (A J^ij B^T)_kl
8.2.5 Properties of the Singleentry Matrix

If i = j

    J^ij J^ij = J^ij              (J^ij)^T (J^ij)^T = J^ij
    J^ij (J^ij)^T = J^ij          (J^ij)^T J^ij = J^ij

If i != j

    J^ij J^ij = 0                 (J^ij)^T (J^ij)^T = 0
    J^ij (J^ij)^T = J^ii          (J^ij)^T J^ij = J^jj

8.2.6 The Singleentry Matrix in Scalar Expressions

Assume A is n x m and J is m x n, then

    Tr(A J^ij) = Tr(J^ij A) = (A^T)_ij

Assume A is n x n, J is n x m and B is m x n, then

    Tr(A J^ij B) = (A^T B^T)_ij
    Tr(A J^ji B) = (B A)_ij
    Tr(A J^ij J^ij B) = diag(A^T B^T)_ij

Assume A is n x n, J^ij is n x m and B is m x n, then

    x^T A J^ij B x = (A^T x x^T B^T)_ij
    x^T A J^ij J^ij B x = diag(A^T x x^T B^T)_ij

8.2.7 Structure Matrices

The structure matrix is defined by

    dA/dA_ij = S^ij

If A has no special structure then

    S^ij = J^ij

If A is symmetric then

    S^ij = J^ij + J^ji - J^ij J^ij
8.3 Solutions to Systems of Equations

8.3.1 Existence in Linear Systems

Assume A is n x m and consider the linear system

    Ax = b

Construct the augmented matrix B = [A b], then

    Condition                       Solution
    rank(A) = rank(B) = m           Unique solution x
    rank(A) = rank(B) < m           Many solutions x
    rank(A) < rank(B)               No solutions x

8.3.2 Standard Square

Assume A is square and invertible, then

    Ax = b   =>   x = A^{-1} b

8.3.3 Degenerated Square

8.3.4 Over-determined Rectangular

Assume A to be n x m, n > m (tall) and rank(A) = m, then

    Ax = b   =>   x = (A^T A)^{-1} A^T b = A^+ b

that is if there exists a solution x at all! If there is no solution, the following can be useful:

    Ax = b   =>   x_min = A^+ b

Now x_min is the vector x which minimizes ||Ax - b||^2, i.e. the vector which is "least wrong". The matrix A^+ is the pseudo-inverse of A. See 3.

8.3.5 Under-determined Rectangular

Assume A is n x m and n < m ("broad").

    Ax = b   =>   x_min = A^T (A A^T)^{-1} b

The equation has many solutions x. But x_min is the solution which minimizes ||Ax - b||^2 and is also the solution with the smallest norm ||x||^2. The same holds for a matrix version: Assume A is n x m, X is m x n and B is n x n, then

    AX = B   =>   X_min = A^+ B

The equation has many solutions X. But X_min is the solution which minimizes ||AX - B||^2 and is also the solution with the smallest norm ||X||^2. See 3.
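The over- and under-determined solutions above can be sketched with NumPy (illustrative; `lstsq` and `pinv` serve as independent references):

```python
import numpy as np

rng = np.random.default_rng(7)

# over-determined (tall): x = (A^T A)^{-1} A^T b minimizes ||Ax - b||^2
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x = np.linalg.inv(A.T @ A) @ A.T @ b
assert np.allclose(x, np.linalg.pinv(A) @ b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])

# under-determined (broad): x = A^T (A A^T)^{-1} b solves Ax = b with minimum norm
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
x = A.T @ np.linalg.inv(A @ A.T) @ b
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.pinv(A) @ b)
```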
Similar but different: Assume A is square n × n and the matrices B_0, B_1 are n × N, where N > n. Then, if B_0 has maximal rank,

    A B_0 = B_1   ⇒   A_min = B_1 B_0^T (B_0 B_0^T)^{-1}

where A_min denotes the matrix which is optimal in a least-squares sense. An interpretation is that A is the linear approximation which maps the column vectors of B_0 into the column vectors of B_1.

8.3.6 Linear form and zeros

    Ax = 0,  ∀x   ⇒   A = 0

8.3.7 Square form and zeros

If A is symmetric, then

    x^T A x = 0,  ∀x   ⇒   A = 0

8.4 Block matrices

Let A_{ij} denote the ij-th block of A.

8.4.1 Multiplication

Assuming the dimensions of the blocks match, we have

    [ A_{11}  A_{12} ] [ B_{11}  B_{12} ]   [ A_{11} B_{11} + A_{12} B_{21}   A_{11} B_{12} + A_{12} B_{22} ]
    [ A_{21}  A_{22} ] [ B_{21}  B_{22} ] = [ A_{21} B_{11} + A_{22} B_{21}   A_{21} B_{12} + A_{22} B_{22} ]

8.4.2 The Determinant

The determinant can be expressed by the use of

    C_1 = A_{11} − A_{12} A_{22}^{-1} A_{21}
    C_2 = A_{22} − A_{21} A_{11}^{-1} A_{12}

as

    det( [ A_{11}  A_{12} ; A_{21}  A_{22} ] ) = det(A_{22}) det(C_1) = det(A_{11}) det(C_2)

8.4.3 The Inverse

The inverse can be expressed by the use of

    C_1 = A_{11} − A_{12} A_{22}^{-1} A_{21}
    C_2 = A_{22} − A_{21} A_{11}^{-1} A_{12}
as

    [ A_{11}  A_{12} ]^{-1}   [ C_1^{-1}                        −A_{11}^{-1} A_{12} C_2^{-1} ]
    [ A_{21}  A_{22} ]      = [ −C_2^{-1} A_{21} A_{11}^{-1}     C_2^{-1}                    ]

                              [ A_{11}^{-1} + A_{11}^{-1} A_{12} C_2^{-1} A_{21} A_{11}^{-1}     −C_1^{-1} A_{12} A_{22}^{-1}                                ]
                            = [ −A_{22}^{-1} A_{21} C_1^{-1}                                     A_{22}^{-1} + A_{22}^{-1} A_{21} C_1^{-1} A_{12} A_{22}^{-1} ]

8.4.4 Block diagonal

For block diagonal matrices we have

    [ A_{11}   0    ]^{-1}   [ (A_{11})^{-1}      0        ]
    [ 0      A_{22} ]      = [ 0              (A_{22})^{-1} ]

    det( [ A_{11}  0 ; 0  A_{22} ] ) = det(A_{11}) det(A_{22})

8.5 Matrix Norms

8.5.1 Definitions

A matrix norm is a mapping ||·|| which fulfils

    ||A|| ≥ 0
    ||A|| = 0  ⇔  A = 0
    ||cA|| = |c| ||A||,  c ∈ R
    ||A + B|| ≤ ||A|| + ||B||

8.5.2 Examples

    ||A||_1   = max_j Σ_i |A_{ij}|
    ||A||_2   = √( max eig(A^T A) )
    ||A||_p   = max_{||x||_p = 1} ||Ax||_p
    ||A||_∞   = max_i Σ_j |A_{ij}|
    ||A||_F   = √( Σ_{ij} |A_{ij}|^2 )        (Frobenius)
    ||A||_max = max_{ij} |A_{ij}|
    ||A||_KF  = ||sing(A)||_1                 (Ky Fan)

where sing(A) is the vector of singular values of the matrix A.
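The block-inverse and block-determinant identities above can be verified numerically by comparing the Schur-complement expressions with a direct inverse. This is an illustrative sketch with an arbitrary partition, not part of the cookbook; a constant is added to the diagonal only to keep the blocks well-conditioned:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 2
M = rng.standard_normal((n1 + n2, n1 + n2)) + 5 * np.eye(n1 + n2)
A11, A12 = M[:n1, :n1], M[:n1, n1:]
A21, A22 = M[n1:, :n1], M[n1:, n1:]

C1 = A11 - A12 @ np.linalg.solve(A22, A21)   # C_1 = A11 - A12 A22^{-1} A21
C2 = A22 - A21 @ np.linalg.solve(A11, A12)   # C_2 = A22 - A21 A11^{-1} A12

# first form of the block inverse
Minv = np.block([
    [np.linalg.inv(C1), -np.linalg.solve(A11, A12) @ np.linalg.inv(C2)],
    [-np.linalg.inv(C2) @ A21 @ np.linalg.inv(A11), np.linalg.inv(C2)],
])
assert np.allclose(Minv, np.linalg.inv(M))

# determinant identity: det(M) = det(A22) det(C_1) = det(A11) det(C_2)
assert np.isclose(np.linalg.det(M), np.linalg.det(A22) * np.linalg.det(C1))
assert np.isclose(np.linalg.det(M), np.linalg.det(A11) * np.linalg.det(C2))
```

Using `np.linalg.solve` instead of forming A22^{-1} explicitly inside the Schur complements is the numerically preferred way to evaluate these expressions.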
8.5.3 Inequalities

E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming A is m × n and d = min{m, n}:

                 ||A||_max   ||A||_1   ||A||_∞   ||A||_2   ||A||_F   ||A||_KF
    ||A||_max        1           1         1         1         1         1
    ||A||_1          m           1         m        √m        √m        √m
    ||A||_∞          n           n         1        √n        √n        √n
    ||A||_2        √(mn)        √n        √m         1         1         1
    ||A||_F        √(mn)        √n        √m        √d         1         1
    ||A||_KF       √(mnd)      √(nd)     √(md)       d        √d         1

which are to be read as, e.g.,

    ||A||_2 ≤ √m ||A||_∞

8.6 Positive Definite and Semi-definite Matrices

8.6.1 Definitions

A matrix A is positive definite if and only if

    x^T A x > 0,  ∀x ≠ 0

A matrix A is positive semi-definite if and only if

    x^T A x ≥ 0,  ∀x

Note that if A is positive definite, then A is also positive semi-definite.

8.6.2 Eigenvalues

The following holds with respect to the eigenvalues:

    A pos. def.       ⇔  eig(A) > 0
    A pos. semi-def.  ⇔  eig(A) ≥ 0

8.6.3 Trace

The following holds with respect to the trace:

    A pos. def.       ⇒  Tr(A) > 0
    A pos. semi-def.  ⇒  Tr(A) ≥ 0

8.6.4 Inverse

If A is positive definite, then A is invertible and A^{-1} is also positive definite.
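The eigenvalue, trace, and inverse properties of positive definite matrices are easy to confirm on a random example. A sketch of our own (not from the cookbook), using the standard construction A = B B^T plus a small diagonal term to guarantee strict definiteness:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = B @ B.T + 1e-3 * np.eye(4)     # B B^T is pos. semi-def.; the jitter makes it pos. def.

eigvals = np.linalg.eigvalsh(A)    # A is symmetric, so eigvalsh is appropriate
assert np.all(eigvals > 0)         # A pos. def.  <=>  eig(A) > 0
assert np.trace(A) > 0             # hence Tr(A) > 0

Ainv = np.linalg.inv(A)
assert np.all(np.linalg.eigvalsh(Ainv) > 0)   # A^{-1} is also pos. def.
```

Note that `eigvalsh` assumes a symmetric (Hermitian) input; for a general matrix it would silently use only one triangle.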
8.6.5 Diagonal

If A is positive definite, then

    A_{ii} > 0,  ∀i

8.6.6 Decomposition I

The matrix A is positive semi-definite of rank r  ⇔  there exists a matrix B of rank r such that A = B B^T.

The matrix A is positive definite  ⇔  there exists an invertible matrix B such that A = B B^T.

8.6.7 Decomposition II

Assume A is n × n positive semi-definite; then there exists an n × r matrix B of rank r such that

    B^T A B = I

8.6.8 Equation with zeros

Assume A is positive semi-definite, then

    X^T A X = 0   ⇒   A X = 0

8.6.9 Rank of product

Assume A is positive definite, then

    rank(B A B^T) = rank(B)

8.6.10 Positive definite property

If A is n × n positive definite and B is r × n of rank r, then B A B^T is positive definite.

8.6.11 Outer Product

If X is n × r of rank r, then X X^T is positive definite.

8.6.12 Small perturbations

If A is positive definite and B is symmetric, then A − tB is positive definite for sufficiently small t.

8.7 Integral Involving Dirac Delta Functions

Assuming A to be square, then

    ∫ p(s) δ(x − As) ds = (1 / det(A)) p(A^{-1} x)

Assuming A to be "underdetermined", i.e. "tall", then

    ∫ p(s) δ(x − As) ds = { (1 / √(det(A^T A))) p(A^+ x)   if x = A A^+ x
                          { 0                              elsewhere
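Decomposition I is essentially what the Cholesky factorization delivers in the positive definite case: an invertible (lower-triangular) B with A = B B^T. A numerical sketch of our own, also checking the outer-product rule:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 5))
A = X @ X.T + np.eye(5)            # positive definite by construction

B = np.linalg.cholesky(A)          # lower-triangular B with A = B B^T
assert np.allclose(B @ B.T, A)
assert abs(np.linalg.det(B)) > 0   # B is invertible, as Decomposition I requires

# outer-product rule: X of size n x r with rank r gives a pos. def. X X^T
Y = rng.standard_normal((3, 3))    # square Gaussian matrix: full rank almost surely
assert np.all(np.linalg.eigvalsh(Y @ Y.T) > 0)
```

For a merely positive semi-definite A the Cholesky factorization can fail; there B must instead come from, e.g., an eigendecomposition with the zero eigenvalues dropped.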
8.8 Miscellaneous

For any A it holds that

    rank(A) = rank(A^T) = rank(A A^T) = rank(A^T A)

Assume A is positive definite. Then

    rank(B^T A B) = rank(B)

A is positive definite  ⇔  there exists an invertible B such that A = B B^T
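The rank identities above can be confirmed directly with `numpy.linalg.matrix_rank` on a random rectangular matrix (an illustrative check, not part of the cookbook):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 6))

r = np.linalg.matrix_rank(A)
assert r == np.linalg.matrix_rank(A.T)
assert r == np.linalg.matrix_rank(A @ A.T)     # 4 x 4
assert r == np.linalg.matrix_rank(A.T @ A)     # 6 x 6, same rank

# rank(B^T C B) = rank(B) when C is positive definite
C = np.diag([1.0, 2.0, 3.0, 4.0])              # clearly pos. def.
assert np.linalg.matrix_rank(A.T @ C @ A) == r
```

`matrix_rank` estimates rank from the singular values with a tolerance, which is the robust way to do this in floating point.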
A Proofs and Details

A.0.1 Proof of Equation 4

Essentially we need to calculate

    ∂(X^n)_{kl} / ∂X_{ij}
      = ∂/∂X_{ij}  Σ_{u_1, ..., u_{n-1}}  X_{k,u_1} X_{u_1,u_2} ... X_{u_{n-1},l}
      =   δ_{k,i} δ_{u_1,j} X_{u_1,u_2} ... X_{u_{n-1},l}
        + X_{k,u_1} δ_{u_1,i} δ_{u_2,j} ... X_{u_{n-1},l}
        + ...
        + X_{k,u_1} X_{u_1,u_2} ... δ_{u_{n-1},i} δ_{l,j}
      = Σ_{r=0}^{n-1} (X^r)_{ki} (X^{n-1-r})_{jl}
      = Σ_{r=0}^{n-1} (X^r J^{ij} X^{n-1-r})_{kl}

Using the properties of the single-entry matrix found in Sec. 8.2, the result follows easily.

A.0.2 Details on Eq. 47

    ∂ det(X^H A X)
      = det(X^H A X) Tr[ (X^H A X)^{-1} ∂(X^H A X) ]
      = det(X^H A X) Tr[ (X^H A X)^{-1} ( ∂(X^H) A X + X^H ∂(A X) ) ]
      = det(X^H A X) ( Tr[ (X^H A X)^{-1} ∂(X^H) A X ] + Tr[ (X^H A X)^{-1} X^H ∂(A X) ] )
      = det(X^H A X) ( Tr[ A X (X^H A X)^{-1} ∂(X^H) ] + Tr[ (X^H A X)^{-1} X^H A ∂(X) ] )

First, the derivative is found with respect to the real part of X:

    ∂ det(X^H A X) / ∂ℜX
      = det(X^H A X) ( ∂ Tr[ A X (X^H A X)^{-1} ∂(X^H) ] / ∂ℜX + ∂ Tr[ (X^H A X)^{-1} X^H A ∂(X) ] / ∂ℜX )
      = det(X^H A X) ( A X (X^H A X)^{-1} + ( (X^H A X)^{-1} X^H A )^T )

Through the calculations, (6) and (35) were used. In addition, by use of (36), the derivative is found with respect to the imaginary part of X:

    i ∂ det(X^H A X) / ∂ℑX
      = i det(X^H A X) ( ∂ Tr[ A X (X^H A X)^{-1} ∂(X^H) ] / ∂ℑX + ∂ Tr[ (X^H A X)^{-1} X^H A ∂(X) ] / ∂ℑX )
      = det(X^H A X) ( A X (X^H A X)^{-1} − ( (X^H A X)^{-1} X^H A )^T )
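The closed form Σ_{r=0}^{n-1} (X^r J^{ij} X^{n-1-r})_{kl} for the derivative of a matrix power can be checked against a central finite difference. This sketch is our own verification, not part of the proof; the indices and the power p are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 3, 4                        # matrix size n x n, power p
X = rng.standard_normal((n, n))
i, j, k, l = 1, 2, 0, 1

# analytic: d(X^p)_{kl} / dX_{ij} = sum_{r=0}^{p-1} (X^r J^{ij} X^{p-1-r})_{kl}
J = np.zeros((n, n))
J[i, j] = 1.0
powers = [np.linalg.matrix_power(X, r) for r in range(p)]
analytic = sum(powers[r] @ J @ powers[p - 1 - r] for r in range(p))[k, l]

# central finite-difference approximation of the same derivative
eps = 1e-6
Xp = X.copy(); Xp[i, j] += eps
Xm = X.copy(); Xm[i, j] -= eps
numeric = (np.linalg.matrix_power(Xp, p)[k, l]
           - np.linalg.matrix_power(Xm, p)[k, l]) / (2 * eps)

assert abs(analytic - numeric) < 1e-4
```

The central difference has O(eps^2) truncation error, so agreement to a few decimal places is expected here.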
Hence, the derivative yields

    ∂ det(X^H A X) / ∂X
      = (1/2) ( ∂ det(X^H A X)/∂ℜX − i ∂ det(X^H A X)/∂ℑX )
      = det(X^H A X) ( (X^H A X)^{-1} X^H A )^T

and the complex conjugate derivative yields

    ∂ det(X^H A X) / ∂X*
      = (1/2) ( ∂ det(X^H A X)/∂ℜX + i ∂ det(X^H A X)/∂ℑX )
      = det(X^H A X) A X (X^H A X)^{-1}

Notice, for real X, A, the sum of (44) and (45) is reduced to (3).

Similar calculations yield

    ∂ det(X A X^H) / ∂X
      = (1/2) ( ∂ det(X A X^H)/∂ℜX − i ∂ det(X A X^H)/∂ℑX )
      = det(X A X^H) ( A X^H (X A X^H)^{-1} )^T                    (46)

and

    ∂ det(X A X^H) / ∂X*
      = (1/2) ( ∂ det(X A X^H)/∂ℜX + i ∂ det(X A X^H)/∂ℑX )
      = det(X A X^H) (X A X^H)^{-1} X A                            (47)
References

[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studenterlitteratur.

[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311-1323, November.

[3] S. Barnet. Matrices. Methods and Applications. Oxford Applied Mathematics and Computing Science Series. Clarendon Press.

[4] Christopher Bishop. Neural Networks for Pattern Recognition. Oxford University Press.

[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April. Notes.

[6] D. H. Brandwood. A complex gradient operator and its application in adaptive array theory. IEE Proceedings, 130(1):11-16, February 1983. PTS. F and H.

[7] M. Brookes. Matrix reference manual. Website, May 20.

[8] Mads Dyrholm. Some matrix results. Website, August 23.

[9] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, NJ, 4th edition.

[10] Thomas P. Minka. Old and new matrix algebra useful for statistics, December. Notes.

[11] L. Parra and C. Spence. Convolutive blind separation of non-stationary sources. In IEEE Transactions Speech and Audio Processing, May.

[12] Laurent Schwartz. Cours d'Analyse, volume II. Hermann, Paris, 1967. As referenced in [9].

[13] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and Sons.

[14] G. Seber and A. Lee. Linear Regression Analysis. John Wiley and Sons.

[15] S. M. Selby. Standard Mathematical Tables. CRC Press.

[16] Inna Stainvas. Matrix algebra in differential calculus. Neural Computing Research Group, Information Engineering, Aston University, UK, August. Notes.

[17] Max Welling. The Kalman filter. Lecture Note.

Petersen & Pedersen, The Matrix Cookbook (Version: January 5, 2005)
More informationTorgerson s Classical MDS derivation: 1: Determining Coordinates from Euclidean Distances
Torgerson s Classical MDS derivation: 1: Determining Coordinates from Euclidean Distances It is possible to construct a matrix X of Cartesian coordinates of points in Euclidean space when we know the Euclidean
More informationLecture 3: Linear methods for classification
Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,
More informationA note on companion matrices
Linear Algebra and its Applications 372 (2003) 325 33 www.elsevier.com/locate/laa A note on companion matrices Miroslav Fiedler Academy of Sciences of the Czech Republic Institute of Computer Science Pod
More informationa 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
More information