The Matrix Cookbook. Kaare Brandt Petersen, Michael Syskind Pedersen. Version: September 5, 2007
What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. They are collected in this form for the convenience of anyone who wants a quick desktop reference. Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list. Errors: Very likely there are errors, typos, and mistakes, for which we apologize and would be grateful to receive corrections at cookbook@2302.dk. It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing, and the version will be apparent from the date in the header. Suggestions: Your suggestions for additional content or elaboration of some topics are most welcome at cookbook@2302.dk. Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix. Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Christian Rishøj, Douglas L. Theobald, Esben Hoegh-Rasmussen, Jan Larsen, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Ole Winther, Stephan Hattinger, and Vasile Sima. We would also like to thank The Oticon Foundation for funding our PhD studies.
Contents

1 Basics
  1.1 Trace and Determinants
  1.2 The Special Case 2x2
2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Eigenvalues
  2.4 Derivatives of Matrices, Vectors and Scalar Forms
  2.5 Derivatives of Traces
  2.6 Derivatives of vector norms
  2.7 Derivatives of matrix norms
  2.8 Derivatives of Structured Matrices
3 Inverses
  3.1 Basic
  3.2 Exact Relations
  3.3 Implication on Inverses
  3.4 Approximations
  3.5 Generalized Inverse
  3.6 Pseudo Inverse
4 Complex Matrices
  4.1 Complex Derivatives
5 Decompositions
  5.1 Eigenvalues and Eigenvectors
  5.2 Singular Value Decomposition
  5.3 Triangular Decomposition
6 Statistics and Probability
  6.1 Definition of Moments
  6.2 Expectation of Linear Combinations
  6.3 Weighted Scalar Variable
7 Multivariate Distributions
  7.1 Student's t
  7.2 Cauchy
  7.3 Gaussian
  7.4 Multinomial
  7.5 Dirichlet
  7.6 Normal-Inverse Gamma
  7.7 Wishart
  7.8 Inverse Wishart
8 Gaussians
  8.1 Basics
  8.2 Moments
  8.3 Miscellaneous
  8.4 Mixture of Gaussians
9 Special Matrices
  9.1 Orthogonal, Ortho-symmetric, and Ortho-skew
  9.2 Units, Permutation and Shift
  9.3 The Single-entry Matrix
  9.4 Symmetric and Antisymmetric
  9.5 Orthogonal matrices
  9.6 Vandermonde Matrices
  9.7 Toeplitz Matrices
  9.8 The DFT Matrix
  9.9 Positive Definite and Semi-definite Matrices
  9.10 Block matrices
10 Functions and Operators
  10.1 Functions and Series
  10.2 Kronecker and Vec Operator
  10.3 Solutions to Systems of Equations
  10.4 Vector Norms
  10.5 Matrix Norms
  10.6 Rank
  10.7 Integral Involving Dirac Delta Functions
  10.8 Miscellaneous
A One-dimensional Results
  A.1 Gaussian
  A.2 One Dimensional Mixture of Gaussians
B Proofs and Details
  B.1 Misc Proofs

Petersen & Pedersen, The Matrix Cookbook, Version: September 5, 2007
Notation and Nomenclature

A          Matrix
A_ij       Matrix indexed for some purpose
A_i        Matrix indexed for some purpose
A^ij       Matrix indexed for some purpose
A^n        Matrix indexed for some purpose or the n-th power of a square matrix
A^(-1)     The inverse matrix of the matrix A
A^+        The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A^(1/2)    The square root of a matrix (if unique), not elementwise
(A)_ij     The (i, j)-th entry of the matrix A
A_ij       The (i, j)-th entry of the matrix A
[A]_ij     The ij-submatrix, i.e. A with the i-th row and j-th column deleted
a          Vector
a_i        Vector indexed for some purpose
a_i        The i-th element of the vector a
a          Scalar
Re z       Real part of a scalar
Re z       Real part of a vector
Re Z       Real part of a matrix
Im z       Imaginary part of a scalar
Im z       Imaginary part of a vector
Im Z       Imaginary part of a matrix
det(A)     Determinant of A
Tr(A)      Trace of the matrix A
diag(A)    Diagonal matrix of the matrix A, i.e. (diag(A))_ij = δ_ij A_ij
vec(A)     The vector-version of the matrix A (see Sec. 10.2.2)
sup        Supremum of a set
||A||      Matrix norm (subscript if any denotes what norm)
A^T        Transposed matrix
A^*        Complex conjugated matrix
A^H        Transposed and complex conjugated matrix (Hermitian)
A ∘ B      Hadamard (elementwise) product
A ⊗ B      Kronecker product
0          The null matrix. Zero in all entries.
I          The identity matrix
J^ij       The single-entry matrix, 1 at (i, j) and zero elsewhere
Σ          A positive definite matrix
Λ          A diagonal matrix
1 Basics

    (AB)^(-1) = B^(-1) A^(-1)   (1)
    (ABC...)^(-1) = ...C^(-1) B^(-1) A^(-1)   (2)
    (A^T)^(-1) = (A^(-1))^T   (3)
    (A + B)^T = A^T + B^T   (4)
    (AB)^T = B^T A^T   (5)
    (ABC...)^T = ...C^T B^T A^T   (6)
    (A^H)^(-1) = (A^(-1))^H   (7)
    (A + B)^H = A^H + B^H   (8)
    (AB)^H = B^H A^H   (9)
    (ABC...)^H = ...C^H B^H A^H   (10)

1.1 Trace and Determinants

    Tr(A) = Σ_i A_ii   (11)
    Tr(A) = Σ_i λ_i,  λ_i = eig(A)   (12)
    Tr(A) = Tr(A^T)   (13)
    Tr(AB) = Tr(BA)   (14)
    Tr(A + B) = Tr(A) + Tr(B)   (15)
    Tr(ABC) = Tr(BCA) = Tr(CAB)   (16)
    det(A) = Π_i λ_i,  λ_i = eig(A)   (17)
    det(cA) = c^n det(A),  if A ∈ R^(n×n)   (18)
    det(AB) = det(A) det(B)   (19)
    det(A^(-1)) = 1/det(A)   (20)
    det(A^n) = det(A)^n   (21)
    det(I + uv^T) = 1 + u^T v   (22)
    det(I + εA) ≈ 1 + ε Tr(A),  ε small   (23)

1.2 The Special Case 2x2

Consider the matrix A
    A = [ A_11  A_12 ; A_21  A_22 ]
Determinant and trace
    det(A) = A_11 A_22 - A_12 A_21   (24)
    Tr(A) = A_11 + A_22   (25)
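As a quick numerical sanity check of two of the identities above, the cyclic property of the trace and the matrix determinant lemma det(I + uv^T) = 1 + u^T v, here is a small numpy sketch (numpy, the seed, and the sizes are additions of this edit, not part of the cookbook):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C = rng.standard_normal((3, n, n))
u, v = rng.standard_normal((2, n))

# Cyclic property of the trace: Tr(ABC) = Tr(BCA) = Tr(CAB)
t1 = np.trace(A @ B @ C)
t2 = np.trace(B @ C @ A)
t3 = np.trace(C @ A @ B)

# Matrix determinant lemma: det(I + u v^T) = 1 + u^T v
lhs = np.linalg.det(np.eye(n) + np.outer(u, v))
rhs = 1 + u @ v
```

Both checks agree to floating-point precision for any sizes and seeds.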
Eigenvalues
    λ^2 - λ Tr(A) + det(A) = 0
    λ_1 = (Tr(A) + √(Tr(A)^2 - 4 det(A))) / 2
    λ_2 = (Tr(A) - √(Tr(A)^2 - 4 det(A))) / 2
    λ_1 + λ_2 = Tr(A)
    λ_1 λ_2 = det(A)
Eigenvectors
    v_1 ∝ [ A_12 ; λ_1 - A_11 ]
    v_2 ∝ [ A_12 ; λ_2 - A_11 ]
Inverse
    A^(-1) = (1/det(A)) [ A_22  -A_12 ; -A_21  A_11 ]   (26)
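The closed-form 2x2 eigenvalues from the trace and determinant can be compared against a numerical eigensolver; a minimal sketch (the example matrix is an arbitrary choice added here, not from the cookbook):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
tr, det = np.trace(A), np.linalg.det(A)

# Roots of the characteristic polynomial lambda^2 - lambda*Tr(A) + det(A) = 0
disc = np.sqrt(tr**2 - 4 * det)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2

ref = sorted(np.linalg.eigvals(A).real)  # ascending: [2.0, 5.0] for this A
```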
2 Derivatives

This section is covering differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.8 for differentiation of structured matrices. The basic assumptions can be written in a formula as
    ∂X_kl / ∂X_ij = δ_ik δ_lj   (27)
that is for e.g. vector forms,
    [∂x/∂y]_i = ∂x_i/∂y    [∂x/∂y]_i = ∂x/∂y_i    [∂x/∂y]_ij = ∂x_i/∂y_j
The following rules are general and very useful when deriving the differential of an expression ([18]):
    ∂A = 0  (A is a constant)   (28)
    ∂(αX) = α ∂X   (29)
    ∂(X + Y) = ∂X + ∂Y   (30)
    ∂(Tr(X)) = Tr(∂X)   (31)
    ∂(XY) = (∂X)Y + X(∂Y)   (32)
    ∂(X ∘ Y) = (∂X) ∘ Y + X ∘ (∂Y)   (33)
    ∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y)   (34)
    ∂(X^(-1)) = -X^(-1) (∂X) X^(-1)   (35)
    ∂(det(X)) = det(X) Tr(X^(-1) ∂X)   (36)
    ∂(ln(det(X))) = Tr(X^(-1) ∂X)   (37)
    ∂X^T = (∂X)^T   (38)
    ∂X^H = (∂X)^H   (39)

2.1 Derivatives of a Determinant

2.1.1 General form
    ∂det(Y)/∂x = det(Y) Tr[Y^(-1) ∂Y/∂x]   (40)

2.1.2 Linear forms
    ∂det(X)/∂X = det(X) (X^(-1))^T   (41)
    ∂det(AXB)/∂X = det(AXB) (X^(-1))^T = det(AXB) (X^T)^(-1)   (42)
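The gradient of the determinant can be verified by central finite differences; a small numpy sketch added here as a sanity check (seed, size, and the diagonal shift that keeps X well-conditioned are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
X = rng.standard_normal((n, n)) + n * np.eye(n)  # well away from singular

# Analytic gradient: d det(X)/dX = det(X) (X^-1)^T
grad = np.linalg.det(X) * np.linalg.inv(X).T

# Central finite differences, one entry at a time
eps = 1e-6
num = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        num[i, j] = (np.linalg.det(X + E) - np.linalg.det(X - E)) / (2 * eps)
```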
2.1.3 Square forms
If X is square and invertible, then
    ∂det(X^T A X)/∂X = 2 det(X^T A X) (X^(-1))^T   (43)
If X is not square but A is symmetric, then
    ∂det(X^T A X)/∂X = 2 det(X^T A X) A X (X^T A X)^(-1)   (44)
If X is not square and A is not symmetric, then
    ∂det(X^T A X)/∂X = det(X^T A X) (A X (X^T A X)^(-1) + A^T X (X^T A^T X)^(-1))   (45)

2.1.4 Other nonlinear forms
Some special cases are (see [9, 7])
    ∂ln det(X^T X)/∂X = 2 (X^+)^T   (46)
    ∂ln det(X^T X)/∂X^+ = -2 X^T   (47)
    ∂ln |det(X)|/∂X = (X^(-1))^T = (X^T)^(-1)   (48)
    ∂det(X^k)/∂X = k det(X^k) (X^(-1))^T   (49)

2.2 Derivatives of an Inverse

From [6] we have the basic identity
    ∂Y^(-1)/∂x = -Y^(-1) (∂Y/∂x) Y^(-1)   (50)
from which it follows
    ∂(X^(-1))_kl / ∂X_ij = -(X^(-1))_ki (X^(-1))_jl   (51)
    ∂a^T X^(-1) b / ∂X = -(X^(-1))^T a b^T (X^(-1))^T   (52)
    ∂det(X^(-1))/∂X = -det(X^(-1)) (X^(-1))^T   (53)
    ∂Tr(A X^(-1) B)/∂X = -(X^(-1) B A X^(-1))^T   (54)
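The gradient of the bilinear form a^T X^(-1) b above admits the same kind of finite-difference check; a numpy sketch added for illustration (seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
X = rng.standard_normal((n, n)) + n * np.eye(n)  # keep X well-conditioned
a, b = rng.standard_normal((2, n))

f = lambda M: a @ np.linalg.inv(M) @ b

# Analytic gradient: d(a^T X^-1 b)/dX = -X^-T a b^T X^-T
Xinv = np.linalg.inv(X)
grad = -Xinv.T @ np.outer(a, b) @ Xinv.T

# Central finite differences
eps = 1e-6
num = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(X); E[i, j] = eps
        num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
```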
2.3 Derivatives of Eigenvalues

    ∂/∂X Σ_i eig_i(X) = ∂/∂X Tr(X) = I   (55)
    ∂/∂X Π_i eig_i(X) = ∂/∂X det(X) = det(X) (X^(-1))^T   (56)

2.4 Derivatives of Matrices, Vectors and Scalar Forms

2.4.1 First Order
    ∂x^T a/∂x = ∂a^T x/∂x = a   (57)
    ∂a^T X b/∂X = a b^T   (58)
    ∂a^T X^T b/∂X = b a^T   (59)
    ∂a^T X a/∂X = ∂a^T X^T a/∂X = a a^T   (60)
    ∂X/∂X_ij = J^ij   (61)
    ∂(XA)_ij/∂X_mn = δ_im (A)_nj = (J^mn A)_ij   (62)
    ∂(X^T A)_ij/∂X_mn = δ_in (A)_mj = (J^nm A)_ij   (63)

2.4.2 Second Order
    ∂/∂X_ij Σ_klmn X_kl X_mn = 2 Σ_kl X_kl   (64)
    ∂b^T X^T X c/∂X = X (b c^T + c b^T)   (65)
    ∂(Bx + b)^T C (Dx + d)/∂x = B^T C (Dx + d) + D^T C^T (Bx + b)   (66)
    ∂(X^T B X)_kl/∂X_ij = δ_lj (X^T B)_ki + δ_kj (BX)_il   (67)
    ∂(X^T B X)/∂X_ij = X^T B J^ij + J^ji B X,  (J^ij)_kl = δ_ik δ_jl   (68)
See Sec 9.3 for useful properties of the single-entry matrix J^ij.
    ∂x^T B x/∂x = (B + B^T) x   (69)
    ∂b^T X^T D X c/∂X = D^T X b c^T + D X c b^T   (70)
    ∂/∂X (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T   (71)
Assume W is symmetric, then
    ∂/∂s (x - As)^T W (x - As) = -2 A^T W (x - As)   (72)
    ∂/∂s (x - s)^T W (x - s) = -2 W (x - s)   (73)
    ∂/∂x (x - As)^T W (x - As) = 2 W (x - As)   (74)
    ∂/∂A (x - As)^T W (x - As) = -2 W (x - As) s^T   (75)

2.4.3 Higher order and non-linear
    ∂(X^n)_kl/∂X_ij = Σ_{r=0}^{n-1} (X^r J^ij X^(n-1-r))_kl   (76)
    ∂/∂X a^T X^n b = Σ_{r=0}^{n-1} (X^r)^T a b^T (X^(n-1-r))^T   (77)
    ∂/∂X a^T (X^n)^T X^n b = Σ_{r=0}^{n-1} [ X^(n-1-r) a b^T (X^n)^T X^r + (X^r)^T X^n a b^T (X^(n-1-r))^T ]   (78)
See B.1.1 for a proof.
Assume s and r are functions of x, i.e. s = s(x), r = r(x), and that A is a constant, then
    ∂/∂x s^T A r = [∂s/∂x]^T A r + [∂r/∂x]^T A^T s   (79)
2.4.4 Gradient and Hessian
Using the above we have for the gradient and the Hessian
    f = x^T A x + b^T x   (80)
    ∇_x f = ∂f/∂x = (A + A^T) x + b   (81)
    ∂^2 f/∂x∂x^T = A + A^T   (82)

2.5 Derivatives of Traces

2.5.1 First Order
    ∂/∂X Tr(X) = I   (83)
    ∂/∂X Tr(XA) = A^T   (84)
    ∂/∂X Tr(AXB) = A^T B^T   (85)
    ∂/∂X Tr(AX^T B) = BA   (86)
    ∂/∂X Tr(X^T A) = A   (87)
    ∂/∂X Tr(AX^T) = A   (88)
    ∂/∂X Tr(A ⊗ X) = Tr(A) I   (89)
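The gradient and Hessian of the quadratic form above can be verified numerically; a numpy sketch added as a sanity check (seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = rng.standard_normal(n)

f = lambda z: z @ A @ z + b @ z  # f = x^T A x + b^T x

# Gradient (A + A^T) x + b and Hessian A + A^T
grad = (A + A.T) @ x + b
hess = A + A.T

# Finite-difference gradient along each coordinate direction
eps = 1e-6
num_grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(n)])
```

Note the Hessian is symmetric by construction, as any Hessian of a smooth scalar function must be.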
2.5.2 Second Order
See [7].
    ∂/∂X Tr(X^2) = 2 X^T   (90)
    ∂/∂X Tr(X^2 B) = (XB + BX)^T   (91)
    ∂/∂X Tr(X^T B X) = BX + B^T X   (92)
    ∂/∂X Tr(X B X^T) = X B^T + X B   (93)
    ∂/∂X Tr(AXBX) = A^T X^T B^T + B^T X^T A^T   (94)
    ∂/∂X Tr(X^T X) = 2 X   (95)
    ∂/∂X Tr(B X X^T) = (B + B^T) X   (96)
    ∂/∂X Tr(B^T X^T C X B) = C^T X B B^T + C X B B^T   (97)
    ∂/∂X Tr[X^T B X C] = B X C + B^T X C^T   (98)
    ∂/∂X Tr(A X B X^T C) = A^T C^T X B^T + C A X B   (99)
    ∂/∂X Tr[(AXb + c)(AXb + c)^T] = 2 A^T (AXb + c) b^T   (100)
    ∂/∂X Tr(X ⊗ X) = ∂/∂X Tr(X)Tr(X) = 2 Tr(X) I   (101)

2.5.3 Higher Order
    ∂/∂X Tr(X^k) = k (X^(k-1))^T   (102)
    ∂/∂X Tr(A X^k) = Σ_{r=0}^{k-1} (X^r A X^(k-r-1))^T   (103)
    ∂/∂X Tr[B^T X^T C X X^T C X B] = C X X^T C X B B^T + C^T X B B^T X^T C^T X + C X B B^T X^T C X + C^T X X^T C^T X B B^T   (104)

2.5.4 Other
    ∂/∂X Tr(A X^(-1) B) = -(X^(-1) B A X^(-1))^T = -(X^(-1))^T A^T B^T (X^(-1))^T   (105)
Assume B and C to be symmetric, then
    ∂/∂X Tr[(X^T C X)^(-1) A] = -(C X (X^T C X)^(-1)) (A + A^T) (X^T C X)^(-1)   (106)
    ∂/∂X Tr[(X^T C X)^(-1) (X^T B X)] = -2 C X (X^T C X)^(-1) X^T B X (X^T C X)^(-1) + 2 B X (X^T C X)^(-1)   (107)
See [7]. See Sec 10.4 and Sec 10.5 for more on vector norms and matrix norms.

2.6 Derivatives of vector norms

2.6.1 Two-norm
    ∂/∂x ||x - a||_2 = (x - a) / ||x - a||_2   (108)
    ∂/∂x [(x - a)/||x - a||_2] = I/||x - a||_2 - (x - a)(x - a)^T / ||x - a||_2^3   (109)
    ∂||x||_2^2 / ∂x = ∂(x^T x)/∂x = 2x   (110)

2.7 Derivatives of matrix norms

For more on matrix norms, see Sec 10.5.

2.7.1 Frobenius norm
    ∂/∂X ||X||_F^2 = ∂/∂X Tr(X X^H) = 2X   (111)
See (201).

2.8 Derivatives of Structured Matrices

Assume that the matrix A has some structure, i.e. symmetric, Toeplitz, etc. In that case the derivatives of the previous section do not apply in general. Instead, consider the following general rule for differentiating a scalar function f(A)
    df/dA_ij = Σ_kl (∂f/∂A_kl)(∂A_kl/∂A_ij) = Tr[ (∂f/∂A)^T ∂A/∂A_ij ]   (112)
The matrix differentiated with respect to itself is in this document referred to as the structure matrix of A and is defined simply by
    ∂A/∂A_ij = S^ij   (113)
If A has no special structure we have simply S^ij = J^ij, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices, see Sec. 9.3 for more examples of structure matrices.
2.8.1 The Chain Rule
Sometimes the objective is to find the derivative of a matrix which is a function of another matrix. Let U = f(X); the goal is to find the derivative of the function g(U) with respect to X:
    ∂g(U)/∂X = ∂g(f(X))/∂X   (114)
Then the Chain Rule can be written the following way:
    ∂g(U)/∂x_ij = Σ_{k=1}^{M} Σ_{l=1}^{N} (∂g(U)/∂u_kl)(∂u_kl/∂x_ij)   (115)
Using matrix notation, this can be written as:
    ∂g(U)/∂X_ij = Tr[ (∂g(U)/∂U)^T ∂U/∂X_ij ].   (116)

2.8.2 Symmetric
If A is symmetric, then S^ij = J^ij + J^ji - J^ij J^ij and therefore
    df/dA = [∂f/∂A] + [∂f/∂A]^T - diag[∂f/∂A]   (117)
That is, e.g., ([5]):
    ∂Tr(AX)/∂X = A + A^T - (A ∘ I), see (121)   (118)
    ∂det(X)/∂X = det(X) (2 X^(-1) - (X^(-1) ∘ I))   (119)
    ∂ln det(X)/∂X = 2 X^(-1) - (X^(-1) ∘ I)   (120)

2.8.3 Diagonal
If X is diagonal, then ([18]):
    ∂Tr(AX)/∂X = A ∘ I   (121)

2.8.4 Toeplitz
Like symmetric matrices and diagonal matrices, Toeplitz matrices also have a special structure which should be taken into account when taking the derivative with respect to a matrix with Toeplitz structure.
    ∂Tr(AT)/∂T = ∂Tr(TA)/∂T =
      [ Tr(A)                     Tr([A^T]_n1)    ...             A_n1         ]
      [ Tr([A^T]_1n)              Tr(A)           Tr([A^T]_n1)    ...          ]
      [ Tr([[A^T]_1n]_(1,n-1))    Tr([A^T]_1n)    ...             Tr([A^T]_n1) ]
      [ A_1n                      ...             Tr([A^T]_1n)    Tr(A)        ]
      ≡ α(A)   (122)
As can be seen, the derivative α(A) also has a Toeplitz structure. Each value on the main diagonal is the sum of all the diagonal values in A; the values on the diagonals next to the main diagonal equal the sum of the values on the diagonal next to the main diagonal in A^T, and so on. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix also is symmetric, the same derivative yields
    ∂Tr(AT)/∂T = ∂Tr(TA)/∂T = α(A) + α(A)^T - α(A) ∘ I   (123)
3 Inverses

3.1 Basic

3.1.1 Definition
The inverse A^(-1) of a matrix A ∈ C^(n×n) is defined such that
    A A^(-1) = A^(-1) A = I,   (124)
where I is the n×n identity matrix. If A^(-1) exists, A is said to be nonsingular. Otherwise, A is said to be singular (see e.g. [1]).

3.1.2 Cofactors and Adjoint
The submatrix of a matrix A, denoted by [A]_ij, is the (n-1)×(n-1) matrix obtained by deleting the i-th row and the j-th column of A. The (i, j) cofactor of a matrix is defined as
    cof(A, i, j) = (-1)^(i+j) det([A]_ij).   (125)
The matrix of cofactors can be created from the cofactors
    cof(A) = [ cof(A,1,1) ... cof(A,1,n) ; ... cof(A,i,j) ... ; cof(A,n,1) ... cof(A,n,n) ]   (126)
The adjoint matrix is the transpose of the cofactor matrix
    adj(A) = (cof(A))^T.   (127)

3.1.3 Determinant
The determinant of a matrix A ∈ C^(n×n) is defined as (see [1])
    det(A) = Σ_{j=1}^{n} (-1)^(j+1) A_1j det([A]_1j)   (128)
           = Σ_{j=1}^{n} A_1j cof(A, 1, j).   (129)

3.1.4 Construction
The inverse matrix can be constructed, using the adjoint matrix, by
    A^(-1) = adj(A) / det(A)   (130)
For the case of 2x2 matrices, see Section 1.2.
3.1.5 Condition number
The condition number of a matrix c(A) is the ratio between the largest and the smallest singular value of a matrix (see Section 5.2 on singular values),
    c(A) = d_+ / d_-   (131)
The condition number can be used to measure how singular a matrix is. If the condition number is large, it indicates that the matrix is nearly singular. The condition number can also be estimated from the matrix norms. Here
    c(A) = ||A|| · ||A^(-1)||,   (132)
where || · || is a norm such as e.g. the 1-norm, the 2-norm, the ∞-norm or the Frobenius norm (see Sec 10.5 for more on matrix norms).
The 2-norm of A equals √(max(eig(A^H A))) [1, p. 57]. For a symmetric matrix, this reduces to ||A||_2 = max(|eig(A)|) [1, p. 394]. If the matrix is symmetric and positive definite, ||A||_2 = max(eig(A)). The condition number based on the 2-norm thus reduces to
    ||A||_2 ||A^(-1)||_2 = max(eig(A)) max(eig(A^(-1))) = max(eig(A)) / min(eig(A)).   (133)

3.2 Exact Relations

3.2.1 Basic
    (AB)^(-1) = B^(-1) A^(-1)   (134)

3.2.2 The Woodbury identity
The Woodbury identity comes in many variants. The latter of the two can be found in [1]
    (A + C B C^T)^(-1) = A^(-1) - A^(-1) C (B^(-1) + C^T A^(-1) C)^(-1) C^T A^(-1)   (135)
    (A + U B V)^(-1) = A^(-1) - A^(-1) U (B^(-1) + V A^(-1) U)^(-1) V A^(-1)   (136)
If P, R are positive definite, then (see [9])
    (P^(-1) + B^T R^(-1) B)^(-1) B^T R^(-1) = P B^T (B P B^T + R)^(-1)   (137)

3.2.3 The Kailath Variant
See [4, page 153].
    (A + BC)^(-1) = A^(-1) - A^(-1) B (I + C A^(-1) B)^(-1) C A^(-1)   (138)
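The Woodbury identity can be verified directly with numpy; a sketch added as a sanity check (seed, sizes, and the diagonal shifts that keep the matrices invertible are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((k, k)) + k * np.eye(k)
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))

# Woodbury: (A + UBV)^-1 = A^-1 - A^-1 U (B^-1 + V A^-1 U)^-1 V A^-1
Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
woodbury = Ainv - Ainv @ U @ np.linalg.inv(Binv + V @ Ainv @ U) @ V @ Ainv
direct = np.linalg.inv(A + U @ B @ V)
```

The identity is mainly useful when A^(-1) is cheap (e.g. diagonal) and k << n, since the only new inverse needed is k×k.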
3.2.4 The Searle Set of Identities
The following set of identities can be found in [4, page 151],
    (I + A^(-1))^(-1) = A (A + I)^(-1)   (139)
    (A + B B^T)^(-1) B = A^(-1) B (I + B^T A^(-1) B)^(-1)   (140)
    (A^(-1) + B^(-1))^(-1) = A (A + B)^(-1) B = B (A + B)^(-1) A   (141)
    A - A (A + B)^(-1) A = B - B (A + B)^(-1) B   (142)
    A^(-1) + B^(-1) = A^(-1) (A + B) B^(-1)   (143)
    (I + AB)^(-1) = I - A (I + BA)^(-1) B   (144)
    (I + AB)^(-1) A = A (I + BA)^(-1)   (145)

3.2.5 Rank-1 update of Moore-Penrose Inverse
The following is a rank-1 update for the Moore-Penrose pseudo-inverse; a proof can be found in [17]. The matrix G is defined below:
    (A + c d^T)^+ = A^+ + G   (146)
Using the notation
    β = 1 + d^T A^+ c   (147)
    v = A^+ c   (148)
    n = (A^+)^T d   (149)
    w = (I - A A^+) c   (150)
    m = (I - A^+ A)^T d   (151)
the solution is given as six different cases, depending on the entities w, m, and β. Please note that for any (column) vector v it holds that v^+ = v^T (v^T v)^(-1) = v^T / ||v||^2. The solution is:
Case 1 of 6: If w ≠ 0 and m ≠ 0. Then
    G = -v w^+ - (m^+)^T n^T + β (m^+)^T w^+   (152)
      = -(1/||w||^2) v w^T - (1/||m||^2) m n^T + (β/(||m||^2 ||w||^2)) m w^T   (153)
Case 2 of 6: If w = 0 and m ≠ 0 and β = 0. Then
    G = -v v^+ A^+ - (m^+)^T n^T   (154)
      = -(1/||v||^2) v v^T A^+ - (1/||m||^2) m n^T   (155)
Case 3 of 6: If w = 0 and β ≠ 0. Then
    G = (1/β) m v^T A^+ - (β/(||v||^2 ||m||^2 + |β|^2)) ((||v||^2/β) m + v) ((||m||^2/β) (A^+)^T v + n)^T   (156)
Case 4 of 6: If w ≠ 0 and m = 0 and β = 0. Then
    G = -A^+ n n^+ - v w^+   (157)
      = -(1/||n||^2) A^+ n n^T - (1/||w||^2) v w^T   (158)
Case 5 of 6: If m = 0 and β ≠ 0. Then
    G = (1/β) A^+ n w^T - (β/(||n||^2 ||w||^2 + |β|^2)) ((||w||^2/β) A^+ n + v) ((||n||^2/β) w + n)^T   (159)
Case 6 of 6: If w = 0 and m = 0 and β = 0. Then
    G = -v v^+ A^+ - A^+ n n^+ + v^+ A^+ n v n^+   (160)
      = -(1/||v||^2) v v^T A^+ - (1/||n||^2) A^+ n n^T + (v^T A^+ n / (||v||^2 ||n||^2)) v n^T   (161)

3.3 Implication on Inverses

See [4]. If
    (A + B)^(-1) = A^(-1) + B^(-1)  then  A B^(-1) A = B A^(-1) B   (162)

3.3.1 A PosDef identity
Assume P, R to be positive definite and invertible, then
    (P^(-1) + B^T R^(-1) B)^(-1) B^T R^(-1) = P B^T (B P B^T + R)^(-1)   (163)
See [9].

3.4 Approximations

The following is a Taylor expansion
    (I + A)^(-1) = I - A + A^2 - A^3 + ...   (164)
The following approximation is from [1] and holds when A is large and symmetric
    A - A (I + A)^(-1) A ≈ I - A^(-1)   (165)
If σ^2 is small compared to Q and M, then
    (Q + σ^2 M)^(-1) ≈ Q^(-1) - σ^2 Q^(-1) M Q^(-1)   (166)
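The Taylor (Neumann) series for (I + A)^(-1) converges when the spectral radius of A is below 1, and truncating it gives an error on the order of ||A||^4 for four terms; a numpy sketch added as an illustration (the scaling factor is an arbitrary choice to keep A small):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = 0.001 * rng.standard_normal((n, n))  # small so the series converges fast

# Truncated Neumann series: (I + A)^-1 ~ I - A + A^2 - A^3
approx = np.eye(n) - A + A @ A - A @ A @ A
exact = np.linalg.inv(np.eye(n) + A)
err = np.abs(approx - exact).max()  # expected to be on the order of ||A||^4
```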
3.5 Generalized Inverse

3.5.1 Definition
A generalized inverse matrix of the matrix A is any matrix A^- such that (see [5])
    A A^- A = A   (167)
The matrix A^- is not unique.

3.6 Pseudo Inverse

3.6.1 Definition
The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A^+ that fulfils
    I    A A^+ A = A
    II   A^+ A A^+ = A^+
    III  A A^+ symmetric
    IV   A^+ A symmetric
The matrix A^+ is unique and does always exist. Note that in case of complex matrices, the symmetric condition is substituted by a condition of being Hermitian.

3.6.2 Properties
Assume A^+ to be the pseudo-inverse of A, then (see [3])
    (A^+)^+ = A   (168)
    (A^T)^+ = (A^+)^T   (169)
    (cA)^+ = (1/c) A^+   (170)
    (A^T A)^+ = A^+ (A^T)^+   (171)
    (A A^T)^+ = (A^T)^+ A^+   (172)
Assume A to have full rank, then
    (A A^+)(A A^+) = A A^+   (173)
    (A^+ A)(A^+ A) = A^+ A   (174)
    Tr(A A^+) = rank(A A^+)  (see [5])   (175)
    Tr(A^+ A) = rank(A^+ A)  (see [5])   (176)
3.6.3 Construction
Assume that A has full rank, then
    A  n×n  Square  rank(A) = n  =>  A^+ = A^(-1)
    A  n×m  Broad   rank(A) = n  =>  A^+ = A^T (A A^T)^(-1)
    A  n×m  Tall    rank(A) = m  =>  A^+ = (A^T A)^(-1) A^T
Assume A does not have full rank, i.e. A is n×m and rank(A) = r < min(n, m). The pseudo inverse A^+ can be constructed from the singular value decomposition A = U D V^T, by
    A^+ = V_r D_r^(-1) U_r^T   (177)
where U_r, D_r, and V_r are the matrices with the degenerated rows and columns deleted. A different way is this: there do always exist two matrices C (n×r) and D (r×m) of rank r, such that A = CD. Using these matrices it holds that
    A^+ = D^T (D D^T)^(-1) (C^T C)^(-1) C^T   (178)
See [3].
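The full-rank-factorization construction of A^+ can be checked against the four Moore-Penrose conditions and against a library pseudo-inverse; a numpy sketch added as a sanity check (seed, sizes, and rank are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, r = 5, 4, 2
# Rank-deficient A = C D with C (n x r) and D (r x m), both of rank r
C = rng.standard_normal((n, r))
D = rng.standard_normal((r, m))
A = C @ D

# Construction via the full-rank factors: A^+ = D^T (D D^T)^-1 (C^T C)^-1 C^T
Aplus = D.T @ np.linalg.inv(D @ D.T) @ np.linalg.inv(C.T @ C) @ C.T

# The four Moore-Penrose conditions I-IV
c1 = np.allclose(A @ Aplus @ A, A)
c2 = np.allclose(Aplus @ A @ Aplus, Aplus)
c3 = np.allclose((A @ Aplus).T, A @ Aplus)
c4 = np.allclose((Aplus @ A).T, Aplus @ A)
```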
4 Complex Matrices

4.1 Complex Derivatives

In order to differentiate an expression f(z) with respect to a complex z, the Cauchy-Riemann equations have to be satisfied ([7]):
    df(z)/dz = ∂Re(f(z))/∂Re z + i ∂Im(f(z))/∂Re z   (179)
and
    df(z)/dz = -i ∂Re(f(z))/∂Im z + ∂Im(f(z))/∂Im z   (180)
or in a more compact form:
    ∂f(z)/∂Im z = i ∂f(z)/∂Re z.   (181)
A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([3], [6]):
Generalized Complex Derivative:
    df(z)/dz = (1/2) (∂f(z)/∂Re z - i ∂f(z)/∂Im z).   (182)
Conjugate Complex Derivative:
    df(z)/dz* = (1/2) (∂f(z)/∂Re z + i ∂f(z)/∂Im z).   (183)
The Generalized Complex Derivative equals the normal derivative when f is an analytic function. For a non-analytic function such as f(z) = z*, the derivative equals zero. The Conjugate Complex Derivative equals zero when f is an analytic function. The Conjugate Complex Derivative has e.g. been used by [0] when deriving a complex gradient. Notice:
    df(z)/dz ≠ ∂f(z)/∂Re z + i ∂f(z)/∂Im z.   (184)
Complex Gradient Vector: If f is a real function of a complex vector z, then the complex gradient vector is given by ([14, p. 798])
    ∇f(z) = 2 df(z)/dz* = ∂f(z)/∂Re z + i ∂f(z)/∂Im z.   (185)
Complex Gradient Matrix: If f is a real function of a complex matrix Z, then the complex gradient matrix is given by ([2])
    ∇f(Z) = 2 df(Z)/dZ* = ∂f(Z)/∂Re Z + i ∂f(Z)/∂Im Z.   (186)
These expressions can be used for gradient descent algorithms.

4.1.1 The Chain Rule for complex numbers
The chain rule is a little more complicated when the function of a complex u = f(x) is non-analytic. For a non-analytic function, the following chain rule can be applied ([7])
    ∂g(u)/∂x = (∂g/∂u)(∂u/∂x) + (∂g/∂u*)(∂u*/∂x)   (187)
Notice, if the function is analytic, the second term reduces to zero, and the expression reduces to the normal well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way:
    ∂g(U)/∂X = Tr[ (∂g(U)/∂U)^T ∂U/∂X ] + Tr[ (∂g(U)/∂U*)^T ∂U*/∂X ].   (188)

4.1.2 Complex Derivatives of Traces
If the derivatives involve complex numbers, the conjugate transpose is often involved. The most useful way to show a complex derivative is to show the derivative with respect to the real and the imaginary part separately. An easy example is:
    ∂Tr(X*)/∂Re X = I   (189)
    i ∂Tr(X*)/∂Im X = I   (190)
Since the two results have the same sign, the conjugate complex derivative (183) should be used.
    ∂Tr(X)/∂Re X = I   (191)
    i ∂Tr(X)/∂Im X = -I   (192)
Here, the two results have different signs, and the generalized complex derivative (182) should be used. Hereby, it can be seen that (84) holds even if X is a
complex number.
    ∂Tr(A X^H)/∂Re X = A   (193)
    i ∂Tr(A X^H)/∂Im X = A   (194)
    ∂Tr(A X*)/∂Re X = A^T   (195)
    i ∂Tr(A X*)/∂Im X = A^T   (196)
    ∂Tr(X X^H)/∂Re X = ∂Tr(X^H X)/∂Re X = 2 Re X   (197)
    i ∂Tr(X X^H)/∂Im X = i ∂Tr(X^H X)/∂Im X = i 2 Im X   (198)
By inserting (197) and (198) in (182) and (183), it can be seen that
    dTr(X X^H)/dX = X*   (199)
    dTr(X X^H)/dX* = X   (200)
Since the function Tr(X X^H) is a real function of the complex matrix X, the complex gradient matrix (186) is given by
    ∇Tr(X X^H) = 2 dTr(X X^H)/dX* = 2X   (201)

4.1.3 Complex Derivative Involving Determinants
Here, a calculation example is provided. The objective is to find the derivative of det(X^H A X) with respect to X ∈ C^(m×n). The derivative is found with respect to the real part and the imaginary part of X, by use of (36) and (32); det(X^H A X) can be calculated as (see App. B.1.2 for details)
    ∂det(X^H A X)/∂X = (1/2) (∂det(X^H A X)/∂Re X - i ∂det(X^H A X)/∂Im X)
                     = det(X^H A X) ((X^H A X)^(-1) X^H A)^T   (202)
and the complex conjugate derivative yields
    ∂det(X^H A X)/∂X* = (1/2) (∂det(X^H A X)/∂Re X + i ∂det(X^H A X)/∂Im X)
                      = det(X^H A X) A X (X^H A X)^(-1)   (203)
5 Decompositions

5.1 Eigenvalues and Eigenvectors

5.1.1 Definition
The eigenvectors v_i and eigenvalues λ_i are the ones satisfying
    A v_i = λ_i v_i   (204)
    A V = V D,  (D)_ij = δ_ij λ_i,   (205)
where the columns of V are the vectors v_i.

5.1.2 General Properties
    eig(AB) = eig(BA)   (206)
    A is n×m  =>  at most min(n, m) distinct λ_i   (207)
    rank(A) = r  =>  at most r non-zero λ_i   (208)

5.1.3 Symmetric
Assume A is symmetric, then
    V V^T = I  (i.e. V is orthogonal)   (209)
    λ_i ∈ R  (i.e. λ_i is real)   (210)
    Tr(A^p) = Σ_i λ_i^p   (211)
    eig(I + cA) = 1 + c λ_i   (212)
    eig(A - cI) = λ_i - c   (213)
    eig(A^(-1)) = λ_i^(-1)   (214)
For a symmetric, positive matrix A,
    eig(A^T A) = eig(A A^T) = eig(A) ∘ eig(A)   (215)

5.2 Singular Value Decomposition

Any n×m matrix A can be written as
    A = U D V^T,   (216)
where
    U = eigenvectors of A A^T    (n×n)
    D = diag(√eig(A A^T))        (n×m)
    V = eigenvectors of A^T A    (m×m)   (217)
5.2.1 Symmetric Square decomposed into squares
Assume A to be n×n and symmetric. Then
    A = V D V^T,   (218)
where D is diagonal with the eigenvalues of A, and V is orthogonal and the eigenvectors of A.

5.2.2 Square decomposed into squares
Assume A ∈ R^(n×n). Then
    A = V D U^T,   (219)
where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.

5.2.3 Square decomposed into rectangular
Assume V* D* U*^T = 0; then we can expand the SVD of A into
    A = [ V  V* ] [ D  0 ; 0  D* ] [ U^T ; U*^T ],   (220)
where the SVD of A is A = V D U^T.

5.2.4 Rectangular decomposition I
Assume A is n×m, V is n×n, D is n×n, U^T is n×m:
    A = V D U^T,   (221)
where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.

5.2.5 Rectangular decomposition II
Assume A is n×m, V is n×m, D is m×m, U^T is m×m:
    A = V D U^T   (222)

5.2.6 Rectangular decomposition III
Assume A is n×m, V is n×n, D is n×m, U^T is m×m:
    A = V D U^T,   (223)
where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.
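The relations above between singular values and the eigenvalues of A^T A can be checked directly; a numpy sketch added as a sanity check (seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 5, 3
A = rng.standard_normal((n, m))

# Thin SVD: A = U D V^T with D diagonal
U, s, Vt = np.linalg.svd(A, full_matrices=False)
recon = U @ np.diag(s) @ Vt

# Singular values are the square roots of the eigenvalues of A^T A,
# so s**2 should match eig(A^T A) (sorted descending)
evals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
```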
5.3 Triangular Decomposition

5.3.1 Cholesky-decomposition
Assume A is positive definite, then
    A = B^T B,   (224)
where B is a unique upper triangular matrix.
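Note that library routines often return the lower-triangular factor: numpy's `cholesky` gives L with A = L L^T, so the upper-triangular B of the convention above is simply L^T. A small sketch added as an illustration (seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite by construction

# numpy returns lower-triangular L with A = L L^T;
# the upper-triangular factor with A = B^T B is then B = L^T
L = np.linalg.cholesky(A)
B = L.T
```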
6 Statistics and Probability

6.1 Definition of Moments

Assume x ∈ R^(n×1) is a random variable.

6.1.1 Mean
The vector of means, m, is defined by
    (m)_i = <x_i>   (225)

6.1.2 Covariance
The matrix of covariance M is defined by
    (M)_ij = <(x_i - <x_i>)(x_j - <x_j>)>   (226)
or alternatively as
    M = <(x - m)(x - m)^T>   (227)

6.1.3 Third moments
The matrix of third centralized moments - in some contexts referred to as coskewness - is defined using the notation
    m^(3)_ijk = <(x_i - <x_i>)(x_j - <x_j>)(x_k - <x_k>)>   (228)
as
    M_3 = [ m^(3)_::1  m^(3)_::2  ...  m^(3)_::n ]   (229)
where : denotes all elements within the given index. M_3 can alternatively be expressed as
    M_3 = <(x - m)(x - m)^T ⊗ (x - m)^T>   (230)

6.1.4 Fourth moments
The matrix of fourth centralized moments - in some contexts referred to as cokurtosis - is defined using the notation
    m^(4)_ijkl = <(x_i - <x_i>)(x_j - <x_j>)(x_k - <x_k>)(x_l - <x_l>)>   (231)
as
    M_4 = [ m^(4)_::11  m^(4)_::21  ...  m^(4)_::n1 | m^(4)_::12  m^(4)_::22  ...  m^(4)_::n2 | ... | m^(4)_::1n  m^(4)_::2n  ...  m^(4)_::nn ]   (232)
or alternatively as
    M_4 = <(x - m)(x - m)^T ⊗ (x - m)^T ⊗ (x - m)^T>   (233)
6.2 Expectation of Linear Combinations

6.2.1 Linear Forms
Assume X and x to be a matrix and a vector of random variables. Then (see [5])
    E[AXB + C] = A E[X] B + C   (234)
    Var[Ax] = A Var[x] A^T   (235)
    Cov[Ax, By] = A Cov[x, y] B^T   (236)
Assume x to be a stochastic vector with mean m, then (see [7])
    E[Ax + b] = Am + b   (237)
    E[Ax] = Am   (238)
    E[x + b] = m + b   (239)

6.2.2 Quadratic Forms
Assume A is symmetric, c = E[x] and Σ = Var[x]. Assume also that all coordinates x_i are independent, have the same central moments µ_1, µ_2, µ_3, µ_4 and denote a = diag(A). Then (see [5])
    E[x^T A x] = Tr(AΣ) + c^T A c   (240)
    Var[x^T A x] = 2 µ_2^2 Tr(A^2) + 4 µ_2 c^T A^2 c + 4 µ_3 c^T A a + (µ_4 - 3 µ_2^2) a^T a   (241)
Also, assume x to be a stochastic vector with mean m, and covariance M. Then (see [7])
    E[(Ax + a)(Bx + b)^T] = A M B^T + (Am + a)(Bm + b)^T   (242)
    E[x x^T] = M + m m^T   (243)
    E[x a^T x] = (M + m m^T) a   (244)
    E[x^T a x^T] = a^T (M + m m^T)   (245)
    E[(Ax)(Ax)^T] = A (M + m m^T) A^T   (246)
    E[(x + a)(x + a)^T] = M + (m + a)(m + a)^T   (247)
    E[(Ax + a)^T (Bx + b)] = Tr(A M B^T) + (Am + a)^T (Bm + b)   (248)
    E[x^T x] = Tr(M) + m^T m   (249)
    E[x^T A x] = Tr(AM) + m^T A m   (250)
    E[(Ax)^T (Ax)] = Tr(A M A^T) + (Am)^T (Am)   (251)
    E[(x + a)^T (x + a)] = Tr(M) + (m + a)^T (m + a)   (252)
See [7].
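The linear-form rules E[Ax + b] = Am + b and Var[Ax] = A Var[x] A^T can be illustrated by Monte Carlo sampling; a numpy sketch added here (the distribution, seed, sizes, sample count, and tolerances are arbitrary choices for the check):

```python
import numpy as np

rng = np.random.default_rng(9)
n, k, N = 3, 2, 200_000
m = rng.standard_normal(n)
S = rng.standard_normal((n, n))
Sigma = S @ S.T + np.eye(n)  # a valid covariance matrix
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

x = rng.multivariate_normal(m, Sigma, size=N)  # rows are samples of x
y = x @ A.T + b                                # samples of y = Ax + b

# Compare sample statistics against E[Ax + b] = Am + b, Var[Ax] = A Sigma A^T
mean_err = np.abs(y.mean(axis=0) - (A @ m + b)).max()
cov_err = np.abs(np.cov(y.T) - A @ Sigma @ A.T).max()
```

Both errors shrink like 1/sqrt(N) as the sample count grows; the identities themselves hold for any distribution with the given mean and covariance, not only the Gaussian used for sampling here.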
6.2.3 Cubic Forms
Assume x to be a stochastic vector with independent coordinates, mean m, covariance M and central moments v_3 = E[(x - m)^3]. Then (see [7])
    E[(Ax + a)(Bx + b)^T (Cx + c)] = A diag(B^T C) v_3 + Tr(B M C^T)(Am + a) + A M C^T (Bm + b) + (A M B^T + (Am + a)(Bm + b)^T)(Cm + c)
    E[x x^T x] = v_3 + 2 M m + (Tr(M) + m^T m) m
    E[(Ax + a)(Ax + a)^T (Ax + a)] = A diag(A^T A) v_3 + [2 A M A^T + (Am + a)(Am + a)^T](Am + a) + Tr(A M A^T)(Am + a)
    E[(Ax + a) b^T (Cx + c)(Dx + d)^T] = (Am + a) b^T (C M D^T + (Cm + c)(Dm + d)^T) + (A M C^T + (Am + a)(Cm + c)^T) b (Dm + d)^T + b^T (Cm + c)(A M D^T - (Am + a)(Dm + d)^T)

6.3 Weighted Scalar Variable

Assume x ∈ R^(n×1) is a random variable, w ∈ R^(n×1) is a vector of constants and y is the linear combination y = w^T x. Assume further that m, M_2, M_3, M_4 denote the mean, covariance, and central third and fourth moment matrix of the variable x. Then it holds that
    <y> = w^T m   (253)
    <(y - <y>)^2> = w^T M_2 w   (254)
    <(y - <y>)^3> = w^T M_3 (w ⊗ w)   (255)
    <(y - <y>)^4> = w^T M_4 (w ⊗ w ⊗ w)   (256)
7 Multivariate Distributions

7.1 Student's t

The density of a Student-t distributed vector t ∈ R^(P×1) is given by
    p(t | µ, Σ, ν) = (πν)^(-P/2) [Γ((ν + P)/2) / Γ(ν/2)] det(Σ)^(-1/2) [1 + ν^(-1) (t - µ)^T Σ^(-1) (t - µ)]^(-(ν+P)/2)   (257)
where µ is the location, the scale matrix Σ is symmetric, positive definite, ν is the degrees of freedom, and Γ denotes the gamma function. For ν = 1, the Student-t distribution becomes the Cauchy distribution (see sec 7.2).

7.1.1 Mean
    E(t) = µ,  ν > 1   (258)

7.1.2 Variance
    cov(t) = ν/(ν - 2) Σ,  ν > 2   (259)

7.1.3 Mode
The notion mode means the position of the most probable value.
    mode(t) = µ   (260)

7.1.4 Full Matrix Version
If instead of a vector t ∈ R^(P×1) one has a matrix T ∈ R^(P×N), then the Student-t distribution for T is
    p(T | M, Ω, Σ, ν) = π^(-NP/2) Π_{p=1}^{P} [Γ((ν + P - p + 1)/2) / Γ((ν - p + 1)/2)] det(Ω)^(-ν/2) det(Σ)^(-N/2) det[Ω^(-1) + (T - M) Σ^(-1) (T - M)^T]^(-(ν+P)/2)   (261)
where M is the location, Ω is the rescaling matrix, Σ is positive definite, ν is the degrees of freedom, and Γ denotes the gamma function.

7.2 Cauchy

The density function for a Cauchy distributed vector t ∈ R^(P×1) is given by
    p(t | µ, Σ) = π^(-P/2) [Γ((1 + P)/2) / Γ(1/2)] det(Σ)^(-1/2) [1 + (t - µ)^T Σ^(-1) (t - µ)]^(-(1+P)/2)   (262)
where µ is the location, Σ is positive definite, and Γ denotes the gamma function. The Cauchy distribution is a special case of the Student-t distribution.
7.3 Gaussian

See sec. 8.

7.4 Multinomial

If the vector n contains counts, i.e. (n)_i in {0, 1, 2, ...}, then the discrete multinomial distribution for n is given by

P(n|a, n) = [n! / (n_1! ... n_d!)] Prod_{i}^{d} a_i^{n_i},   Sum_{i}^{d} n_i = n     (263)

where the a_i are probabilities, i.e. 0 <= a_i <= 1 and Sum_i a_i = 1.

7.5 Dirichlet

The Dirichlet distribution is a kind of "inverse" distribution compared to the multinomial distribution on the bounded continuous variate x = [x_1, ..., x_P] [16, p. 44]:

p(x|α) = [Γ(Sum_{p}^{P} α_p) / Prod_{p}^{P} Γ(α_p)] Prod_{p}^{P} x_p^{α_p - 1}

7.6 Normal-Inverse Gamma

7.7 Wishart

The central Wishart distribution for M in R^{P x P}, M positive definite, where m can be regarded as a degree of freedom parameter [16, equation 3.8.1] [8, section 2.5], [11]:

p(M|Σ, m) = [1 / (2^{mP/2} π^{P(P-1)/4} Prod_{p}^{P} Γ[(m+1-p)/2])]
            x det(Σ)^{-m/2} det(M)^{(m-P-1)/2} exp[-(1/2) Tr(Σ^{-1}M)]     (264)

7.7.1 Mean

E(M) = mΣ     (265)
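The Wishart mean can be checked by Monte Carlo using the constructive definition: a Wishart(Σ, m) draw is a sum of m outer products of N(0, Σ) vectors. A numpy sketch (sampling scheme and tolerances are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
P, m = 2, 8
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
L = np.linalg.cholesky(Sigma)

# Each sample: m vectors ~ N(0, Sigma), summed as outer products -> Wishart(Sigma, m).
n_samples = 50_000
X = rng.standard_normal((n_samples, m, P)) @ L.T
M = np.einsum('smi,smj->sij', X, X)            # n_samples Wishart draws

# E(M) = m * Sigma (sec. 7.7.1), up to Monte Carlo noise.
assert np.allclose(M.mean(axis=0), m * Sigma, atol=0.2)
```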
7.8 Inverse Wishart

The (normal) Inverse Wishart distribution for M in R^{P x P}, M positive definite, where m can be regarded as a degree of freedom parameter [11]:

p(M|Σ, m) = [1 / (2^{mP/2} π^{P(P-1)/4} Prod_{p}^{P} Γ[(m+1-p)/2])]
            x det(Σ)^{m/2} det(M)^{-(m+P+1)/2} exp[-(1/2) Tr(ΣM^{-1})]     (266)

7.8.1 Mean

E(M) = Σ 1/(m - P - 1)     (267)
8 Gaussians

8.1 Basics

8.1.1 Density and normalization

The density of x ~ N(m, Σ) is

p(x) = [1/sqrt(det(2πΣ))] exp[-(1/2)(x-m)^T Σ^{-1} (x-m)]     (268)

Note that if x is d-dimensional, then det(2πΣ) = (2π)^d det(Σ).

Integration and normalization:

Int exp[-(1/2)(x-m)^T Σ^{-1}(x-m)] dx = sqrt(det(2πΣ))
Int exp[-(1/2)x^T A x + b^T x] dx = sqrt(det(2πA^{-1})) exp[(1/2) b^T A^{-1} b]
Int exp[-(1/2)Tr(S^T A S) + Tr(B^T S)] dS = sqrt(det(2πA^{-1})) exp[(1/2) Tr(B^T A^{-1} B)]

The derivatives of the density are

dp(x)/dx = -p(x) Σ^{-1}(x-m)     (269)
d^2 p/dx dx^T = p(x) (Σ^{-1}(x-m)(x-m)^T Σ^{-1} - Σ^{-1})     (270)

8.1.2 Marginal Distribution

Assume x ~ N_x(µ, Σ) where

x = [x_a; x_b],   µ = [µ_a; µ_b],   Σ = [Σ_a  Σ_c; Σ_c^T  Σ_b]     (271)

then

p(x_a) = N_{x_a}(µ_a, Σ_a)     (272)
p(x_b) = N_{x_b}(µ_b, Σ_b)     (273)

8.1.3 Conditional Distribution

Assume x ~ N_x(µ, Σ) partitioned as

x = [x_a; x_b],   µ = [µ_a; µ_b],   Σ = [Σ_a  Σ_c; Σ_c^T  Σ_b]     (274)
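The density formula has a deterministic self-check: with a diagonal Σ the joint density must factorize into a product of 1D Gaussians. A numpy sketch (the helper name is ours):

```python
import numpy as np

def gauss_pdf(x, m, Sigma):
    """Multivariate Gaussian density; note det(2*pi*Sigma) = (2*pi)^d det(Sigma)."""
    d = x - m
    quad = d @ np.linalg.solve(Sigma, d)
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(2 * np.pi * Sigma))

# With diagonal Sigma the joint density equals the product of 1D Gaussian densities.
x, m = np.array([0.3, -1.2]), np.array([0.0, 0.5])
s2 = np.array([2.0, 0.5])                      # per-coordinate variances
joint = gauss_pdf(x, m, np.diag(s2))
oned = np.prod(np.exp(-0.5 * (x - m)**2 / s2) / np.sqrt(2 * np.pi * s2))
assert abs(joint - oned) < 1e-14
```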
then

p(x_a|x_b) = N_{x_a}(µ̂_a, Σ̂_a),  µ̂_a = µ_a + Σ_c Σ_b^{-1}(x_b - µ_b),  Σ̂_a = Σ_a - Σ_c Σ_b^{-1} Σ_c^T     (275)
p(x_b|x_a) = N_{x_b}(µ̂_b, Σ̂_b),  µ̂_b = µ_b + Σ_c^T Σ_a^{-1}(x_a - µ_a),  Σ̂_b = Σ_b - Σ_c^T Σ_a^{-1} Σ_c     (276)

Note that the conditional covariance matrices are the Schur complements of the block matrix Σ (see the section on the Schur complement for details).

8.1.4 Linear combination

Assume x ~ N(m_x, Σ_x) and y ~ N(m_y, Σ_y) independent; then

Ax + By + c ~ N(Am_x + Bm_y + c, AΣ_x A^T + BΣ_y B^T)     (277)

8.1.5 Rearranging Means

N_{Ax}[m, Σ] = [sqrt(det(2π(A^T Σ^{-1} A)^{-1})) / sqrt(det(2πΣ))] N_x[A^{-1}m, (A^T Σ^{-1} A)^{-1}]     (278)

8.1.6 Rearranging into squared form

If A is symmetric, then

-(1/2)x^T A x + b^T x = -(1/2)(x - A^{-1}b)^T A (x - A^{-1}b) + (1/2) b^T A^{-1} b
-(1/2)Tr(X^T A X) + Tr(B^T X) = -(1/2)Tr[(X - A^{-1}B)^T A (X - A^{-1}B)] + (1/2)Tr(B^T A^{-1} B)

8.1.7 Sum of two squared forms

In vector formulation (assuming Σ_1, Σ_2 are symmetric)

-(1/2)(x - m_1)^T Σ_1^{-1} (x - m_1)     (279)
-(1/2)(x - m_2)^T Σ_2^{-1} (x - m_2)     (280)
 = -(1/2)(x - m_c)^T Σ_c^{-1} (x - m_c) + C     (281)

Σ_c^{-1} = Σ_1^{-1} + Σ_2^{-1}     (282)
m_c = (Σ_1^{-1} + Σ_2^{-1})^{-1} (Σ_1^{-1}m_1 + Σ_2^{-1}m_2)     (283)
C = (1/2)(m_1^T Σ_1^{-1} + m_2^T Σ_2^{-1})(Σ_1^{-1} + Σ_2^{-1})^{-1}(Σ_1^{-1}m_1 + Σ_2^{-1}m_2)
    - (1/2)(m_1^T Σ_1^{-1} m_1 + m_2^T Σ_2^{-1} m_2)     (284)
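The conditional-distribution formulas of sec. 8.1.3 are a few lines of linear algebra. A numpy sketch (function name and index convention are ours); in the bivariate case the conditional variance must come out as σ_a^2(1 - ρ^2), which gives an exact check:

```python
import numpy as np

def condition_gaussian(mu, Sigma, idx_a, idx_b, x_b):
    """Mean and covariance of p(x_a | x_b) for x ~ N(mu, Sigma) (Schur complement)."""
    Sb = Sigma[np.ix_(idx_b, idx_b)]
    Sc = Sigma[np.ix_(idx_a, idx_b)]
    K = Sc @ np.linalg.inv(Sb)
    mu_hat = mu[idx_a] + K @ (x_b - mu[idx_b])
    Sigma_hat = Sigma[np.ix_(idx_a, idx_a)] - K @ Sc.T
    return mu_hat, Sigma_hat

# Bivariate check: conditional variance = sigma_a^2 * (1 - rho^2).
rho, sa, sb = 0.8, 2.0, 1.0
Sigma = np.array([[sa**2, rho * sa * sb],
                  [rho * sa * sb, sb**2]])
mu_hat, S_hat = condition_gaussian(np.zeros(2), Sigma, [0], [1], np.array([0.5]))
assert abs(S_hat[0, 0] - sa**2 * (1 - rho**2)) < 1e-12
assert abs(mu_hat[0] - rho * sa / sb * 0.5) < 1e-12
```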
In a trace formulation (assuming Σ_1, Σ_2 are symmetric)

-(1/2)Tr((X - M_1)^T Σ_1^{-1}(X - M_1))     (286)
-(1/2)Tr((X - M_2)^T Σ_2^{-1}(X - M_2))     (287)
 = -(1/2)Tr[(X - M_c)^T Σ_c^{-1}(X - M_c)] + C     (288)

Σ_c^{-1} = Σ_1^{-1} + Σ_2^{-1}     (289)
M_c = (Σ_1^{-1} + Σ_2^{-1})^{-1}(Σ_1^{-1}M_1 + Σ_2^{-1}M_2)     (290)
C = (1/2) Tr[(Σ_1^{-1}M_1 + Σ_2^{-1}M_2)^T (Σ_1^{-1} + Σ_2^{-1})^{-1} (Σ_1^{-1}M_1 + Σ_2^{-1}M_2)]
    - (1/2) Tr(M_1^T Σ_1^{-1} M_1 + M_2^T Σ_2^{-1} M_2)     (291)

8.1.8 Product of gaussian densities

Let N_x(m, Σ) denote a density of x, then

N_x(m_1, Σ_1) N_x(m_2, Σ_2) = c_c N_x(m_c, Σ_c)     (292)

c_c = N_{m_1}(m_2, (Σ_1 + Σ_2))
    = [1/sqrt(det(2π(Σ_1 + Σ_2)))] exp[-(1/2)(m_1 - m_2)^T (Σ_1 + Σ_2)^{-1}(m_1 - m_2)]
m_c = (Σ_1^{-1} + Σ_2^{-1})^{-1}(Σ_1^{-1}m_1 + Σ_2^{-1}m_2)
Σ_c = (Σ_1^{-1} + Σ_2^{-1})^{-1}

but note that the product is not normalized as a density of x.

8.2 Moments

8.2.1 Mean and covariance of linear forms

First and second moments. Assume x ~ N(m, Σ):

E(x) = m     (293)
Cov(x, x) = Var(x) = Σ = E(xx^T) - E(x)E(x^T) = E(xx^T) - mm^T     (294)

As for any other distribution it holds for gaussians that

E[Ax] = AE[x]     (295)
Var[Ax] = AVar[x]A^T     (296)
Cov[Ax, By] = ACov[x, y]B^T     (297)
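The product-of-densities identity of sec. 8.1.8 is exact, so it can be verified pointwise to machine precision. A numpy sketch with arbitrary fixed means and covariances (our own test values):

```python
import numpy as np

def npdf(x, m, S):
    d = x - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt(np.linalg.det(2 * np.pi * S))

m1, m2 = np.array([1.0, 0.0]), np.array([-0.5, 2.0])
S1 = np.array([[1.0, 0.2], [0.2, 0.5]])
S2 = np.array([[0.8, -0.1], [-0.1, 1.5]])

# Precisions add; the combined mean weights each mean by its precision.
Sc = np.linalg.inv(np.linalg.inv(S1) + np.linalg.inv(S2))
mc = Sc @ (np.linalg.solve(S1, m1) + np.linalg.solve(S2, m2))
cc = npdf(m1, m2, S1 + S2)                     # the (constant) normalization factor

x = np.array([0.3, 0.7])
assert abs(npdf(x, m1, S1) * npdf(x, m2, S2) - cc * npdf(x, mc, Sc)) < 1e-12
```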
8.2.2 Mean and variance of square forms

Mean and variance of square forms: Assume x ~ N(m, Σ):

E(xx^T) = Σ + mm^T     (298)
E[x^T A x] = Tr(AΣ) + m^T A m     (299)
Var(x^T A x) = 2σ^4 Tr(A^2) + 4σ^2 m^T A^2 m   (assuming Σ = σ^2 I)     (300)
E[(x - m')^T A (x - m')] = (m - m')^T A (m - m') + Tr(AΣ)     (301)

Assume x ~ N(0, σ^2 I) and A and B to be symmetric, then

Cov(x^T A x, x^T B x) = 2σ^4 Tr(AB)     (302)

8.2.3 Cubic forms

Assume x to be a stochastic vector with mean m and covariance M. Then

E[xb^T xx^T] = mb^T(M + mm^T) + (M + mm^T)bm^T + b^T m(M - mm^T)     (303)

8.2.4 Mean of Quartic Forms

E[xx^T xx^T] = 2(Σ + mm^T)^2 + m^T m(Σ - mm^T) + Tr(Σ)(Σ + mm^T)

E[xx^T A xx^T] = (Σ + mm^T)(A + A^T)(Σ + mm^T) + m^T Am(Σ - mm^T) + Tr[AΣ](Σ + mm^T)

E[x^T xx^T x] = 2Tr(Σ^2) + 4m^T Σm + (Tr(Σ) + m^T m)^2

E[x^T A xx^T B x] = Tr[AΣ(B + B^T)Σ] + m^T(A + A^T)Σ(B + B^T)m
                    + (Tr(AΣ) + m^T Am)(Tr(BΣ) + m^T Bm)

E[a^T xb^T xc^T xd^T x] = (a^T(Σ + mm^T)b)(c^T(Σ + mm^T)d)
                          + (a^T(Σ + mm^T)c)(b^T(Σ + mm^T)d)
                          + (a^T(Σ + mm^T)d)(b^T(Σ + mm^T)c)
                          - 2a^T mb^T mc^T md^T m

E[(Ax+a)(Bx+b)^T(Cx+c)(Dx+d)^T]
 = [AΣB^T + (Am+a)(Bm+b)^T][CΣD^T + (Cm+c)(Dm+d)^T]
 + [AΣC^T + (Am+a)(Cm+c)^T][BΣD^T + (Bm+b)(Dm+d)^T]
 + (Bm+b)^T(Cm+c)[AΣD^T - (Am+a)(Dm+d)^T]
 + Tr(BΣC^T)[AΣD^T + (Am+a)(Dm+d)^T]
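The mean of a Gaussian quadratic form, E[x^T A x] = Tr(AΣ) + m^T A m, is easy to confirm by Monte Carlo. A numpy sketch (test matrices, seed and tolerance are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
m = np.array([1.0, -0.5, 0.2])
L = np.array([[1.0, 0.0, 0.0],
              [0.3, 0.8, 0.0],
              [-0.2, 0.1, 1.2]])
Sigma = L @ L.T                                   # a positive definite covariance
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, -0.3],
              [0.0, -0.3, 1.5]])

x = rng.multivariate_normal(m, Sigma, size=400_000)
q = np.einsum('ni,ij,nj->n', x, A, x)             # x^T A x for each sample
expected = np.trace(A @ Sigma) + m @ A @ m
assert abs(q.mean() - expected) / expected < 0.02
```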
E[(Ax+a)^T(Bx+b)(Cx+c)^T(Dx+d)]
 = Tr[AΣ(C^T D + D^T C)ΣB^T]
 + [(Am+a)^T B + (Bm+b)^T A] Σ [C^T(Dm+d) + D^T(Cm+c)]
 + [Tr(AΣB^T) + (Am+a)^T(Bm+b)][Tr(CΣD^T) + (Cm+c)^T(Dm+d)]

See [7].

8.2.5 Moments

For a mixture of gaussians (see sec. 8.4),

E[x] = Sum_k ρ_k m_k     (304)
Cov(x) = Sum_k Sum_{k'} ρ_k ρ_{k'} (Σ_k + m_k m_k^T - m_k m_{k'}^T)     (305)

8.3 Miscellaneous

8.3.1 Whitening

Assume x ~ N(m, Σ), then

z = Σ^{-1/2}(x - m) ~ N(0, I)     (306)

Conversely, having z ~ N(0, I) one can generate data x ~ N(m, Σ) by setting

x = Σ^{1/2} z + m ~ N(m, Σ)     (307)

Note that Σ^{1/2} means the matrix which fulfils Σ^{1/2}Σ^{1/2} = Σ, and that it exists and is unique since Σ is positive definite.

8.3.2 The Chi-Square connection

Assume x ~ N(m, Σ) and x to be n-dimensional, then

z = (x - m)^T Σ^{-1}(x - m) ~ χ²_n     (308)

where χ²_n denotes the Chi square distribution with n degrees of freedom.

8.3.3 Entropy

Entropy of a D-dimensional gaussian:

H(x) = -Int N(m, Σ) ln N(m, Σ) dx = ln sqrt(det(2πΣ)) + D/2     (309)
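The symmetric square root Σ^{1/2} used for whitening can be built from an eigendecomposition. A numpy sketch (the helper name is ours); the first assertion is exact, the second checks empirically that whitened samples have identity covariance:

```python
import numpy as np

def sqrtm_psd(Sigma):
    """Symmetric square root of a positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(Sigma)
    return (V * np.sqrt(w)) @ V.T

Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
S = sqrtm_psd(Sigma)
assert np.allclose(S @ S, Sigma)                 # Sigma^{1/2} Sigma^{1/2} = Sigma

# Whitening: z = Sigma^{-1/2} (x - m) should have identity covariance.
rng = np.random.default_rng(3)
m = np.array([1.0, -2.0])
x = rng.multivariate_normal(m, Sigma, size=100_000)
z = (x - m) @ np.linalg.inv(S).T
assert np.allclose(np.cov(z.T), np.eye(2), atol=0.02)
```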
8.4 Mixture of Gaussians

8.4.1 Density

The variable x is distributed as a mixture of gaussians if it has the density

p(x) = Sum_{k=1}^{K} ρ_k [1/sqrt(det(2πΣ_k))] exp[-(1/2)(x - m_k)^T Σ_k^{-1}(x - m_k)]

where the ρ_k sum to 1 and the Σ_k all are positive definite.

8.4.2 Derivatives

Defining p(s) = Sum_k ρ_k N_s(µ_k, Σ_k) one gets

d ln p(s)/dρ_j = [ρ_j N_s(µ_j, Σ_j) / Sum_k ρ_k N_s(µ_k, Σ_k)] d/dρ_j ln[ρ_j N_s(µ_j, Σ_j)]
               = [ρ_j N_s(µ_j, Σ_j) / Sum_k ρ_k N_s(µ_k, Σ_k)] (1/ρ_j)

d ln p(s)/dµ_j = [ρ_j N_s(µ_j, Σ_j) / Sum_k ρ_k N_s(µ_k, Σ_k)] d/dµ_j ln[ρ_j N_s(µ_j, Σ_j)]
               = [ρ_j N_s(µ_j, Σ_j) / Sum_k ρ_k N_s(µ_k, Σ_k)] Σ_j^{-1}(s - µ_j)

d ln p(s)/dΣ_j = [ρ_j N_s(µ_j, Σ_j) / Sum_k ρ_k N_s(µ_k, Σ_k)] d/dΣ_j ln[ρ_j N_s(µ_j, Σ_j)]
               = [ρ_j N_s(µ_j, Σ_j) / Sum_k ρ_k N_s(µ_k, Σ_k)]
                 (1/2)(-Σ_j^{-1} + Σ_j^{-1}(s - µ_j)(s - µ_j)^T Σ_j^{-1})     (310)

But ρ_k and Σ_k need to be constrained.
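The gradient of the mixture log-density with respect to a component mean can be checked against finite differences. A numpy sketch (component parameters and the evaluation point are our own test values):

```python
import numpy as np

def npdf(x, m, S):
    d = x - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt(np.linalg.det(2 * np.pi * S))

rho = [0.3, 0.7]
mus = [np.array([0.0, 0.0]), np.array([2.0, -1.0])]
Sigmas = [np.eye(2), np.array([[1.5, 0.4], [0.4, 0.8]])]

def logp(s, mus):
    return np.log(sum(r * npdf(s, m, S) for r, m, S in zip(rho, mus, Sigmas)))

s, j = np.array([0.5, 0.3]), 1
resp = rho[j] * npdf(s, mus[j], Sigmas[j]) / np.exp(logp(s, mus))  # responsibility
grad = resp * np.linalg.solve(Sigmas[j], s - mus[j])               # analytic d ln p / d mu_j

# Central finite differences on each coordinate of mu_j.
eps, num = 1e-6, np.zeros(2)
for i in range(2):
    e = np.zeros(2); e[i] = eps
    mus_p = [m + e if k == j else m for k, m in enumerate(mus)]
    mus_m = [m - e if k == j else m for k, m in enumerate(mus)]
    num[i] = (logp(s, mus_p) - logp(s, mus_m)) / (2 * eps)
assert np.allclose(grad, num, atol=1e-6)
```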
9 Special Matrices

9.1 Orthogonal, Ortho-symmetric, and Ortho-skew

9.1.1 Orthogonal

By definition, the real matrix A is orthogonal if and only if A^{-1} = A^T. Basic properties for the orthogonal matrix A:

A^{-T} = A
AA^T = I
A^T A = I
det(A) = ±1

9.2 Units, Permutation and Shift

9.2.1 Unit vector

Let e_i in R^{n x 1} be the i-th unit vector, i.e. the vector which is zero in all entries except the i-th, at which it is 1.

9.2.2 Rows and Columns

i-th row of A = e_i^T A     (311)
j-th column of A = A e_j     (312)

9.2.3 Permutations

Let P be some permutation matrix, e.g.

P = [0 1 0; 1 0 0; 0 0 1] = [e_2 e_1 e_3] = [e_2^T; e_1^T; e_3^T]     (313)

For permutation matrices it holds that

PP^T = I     (314)

and that

AP = [Ae_2 Ae_1 Ae_3],   PA = [e_2^T A; e_1^T A; e_3^T A]     (315)

That is, the first is a matrix which has the columns of A but in permuted sequence, and the second is a matrix which has the rows of A but in the permuted sequence.
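The permutation identities are immediate to verify in numpy by building P from unit vectors (the 3x3 example matches the permutation used above, written 0-based):

```python
import numpy as np

# Permutation matrix with columns e_2, e_1, e_3 (1-based), i.e. [1, 0, 2] 0-based.
I = np.eye(3)
P = I[:, [1, 0, 2]]
assert np.allclose(P @ P.T, np.eye(3))          # P P^T = I

A = np.arange(9.0).reshape(3, 3)
assert np.allclose(A @ P, A[:, [1, 0, 2]])      # AP permutes the columns of A
assert np.allclose(P @ A, A[[1, 0, 2], :])      # PA permutes the rows of A
```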
9.2.4 Translation, Shift or Lag Operators

Let L denote the lag (or "translation" or "shift") operator defined on a 4 x 4 example by

L = [0 0 0 0; 1 0 0 0; 0 1 0 0; 0 0 1 0]     (316)

i.e. a matrix of zeros with ones on the sub-diagonal, (L)_{ij} = δ_{i,j+1}. With some signal x_t for t = 1, ..., N, the n-th power of the lag operator shifts the indices, i.e.

(L^n x)_t = 0 for t = 1, ..., n;   x_{t-n} for t = n+1, ..., N     (317)

A related but slightly different matrix is the recurrent shifted operator defined on a 4 x 4 example by

L̂ = [0 0 0 1; 1 0 0 0; 0 1 0 0; 0 0 1 0]     (318)

i.e. a matrix defined by (L̂)_{ij} = δ_{i,j+1} + δ_{i,1}δ_{j,dim(L)}. On a signal x it has the effect

(L̂^n x)_t = x_{t'},   t' = [(t - n - 1) mod N] + 1     (319)

That is, L̂ is like the shift operator L except that it "wraps" the signal as if it was periodic and shifted (substituting the zeros with the rear end of the signal). Note that L̂ is invertible and orthogonal, i.e.

L̂^{-1} = L̂^T     (320)

9.3 The Singleentry Matrix

9.3.1 Definition

The single-entry matrix J^{ij} in R^{n x n} is defined as the matrix which is zero everywhere except in the entry (i, j), in which it is 1. In a 4 x 4 example one might have

J^{23} = [0 0 0 0; 0 0 1 0; 0 0 0 0; 0 0 0 0]     (321)

The single-entry matrix is very useful when working with derivatives of expressions involving matrices.
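The lag operator and its circular variant behave as described; a small numpy sketch with N = 4:

```python
import numpy as np

N = 4
L = np.diag(np.ones(N - 1), k=-1)            # ones on the sub-diagonal
Lhat = L.copy()
Lhat[0, N - 1] = 1.0                         # wrap-around entry in the top-right corner

x = np.array([10.0, 20.0, 30.0, 40.0])
assert np.allclose(L @ x, [0, 10, 20, 30])        # plain shift: zero-fill at the front
assert np.allclose(Lhat @ x, [40, 10, 20, 30])    # circular shift: wraps the rear end
assert np.allclose(np.linalg.inv(Lhat), Lhat.T)   # Lhat is orthogonal
```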
9.3.2 Swap and Zeros

Assume A to be n x m and J^{ij} to be m x p:

A J^{ij} = [0 ... 0 A_i 0 ... 0]     (322)

i.e. an n x p matrix of zeros with the i-th column of A in place of the j-th column. Assume A to be n x m and J^{ij} to be p x n:

J^{ij} A = [0; ...; 0; A_j; 0; ...; 0]     (323)

i.e. a p x m matrix of zeros with the j-th row of A in the place of the i-th row.

9.3.3 Rewriting product of elements

A_{ki}B_{jl} = (A e_i e_j^T B)_{kl} = (A J^{ij} B)_{kl}     (324)
A_{ik}B_{lj} = (A^T e_i e_j^T B^T)_{kl} = (A^T J^{ij} B^T)_{kl}     (325)
A_{ik}B_{jl} = (A^T e_i e_j^T B)_{kl} = (A^T J^{ij} B)_{kl}     (326)
A_{ki}B_{lj} = (A e_i e_j^T B^T)_{kl} = (A J^{ij} B^T)_{kl}     (327)

9.3.4 Properties of the Singleentry Matrix

If i = j:

J^{ij}J^{ij} = J^{ij}
(J^{ij})^T(J^{ij})^T = J^{ij}
J^{ij}(J^{ij})^T = J^{ij}
(J^{ij})^T J^{ij} = J^{ij}

If i != j:

J^{ij}J^{ij} = 0
(J^{ij})^T(J^{ij})^T = 0
J^{ij}(J^{ij})^T = J^{ii}
(J^{ij})^T J^{ij} = J^{jj}

9.3.5 The Singleentry Matrix in Scalar Expressions

Assume A is n x m and J is m x n, then

Tr(AJ^{ij}) = Tr(J^{ij}A) = (A^T)_{ij}     (328)
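The swap and trace identities for the single-entry matrix are quick to check numerically. A numpy sketch (0-based indices here, unlike the 1-based text):

```python
import numpy as np

def J(i, j, rows, cols):
    """Single-entry matrix: zero everywhere except a 1 at (i, j), 0-based."""
    M = np.zeros((rows, cols))
    M[i, j] = 1.0
    return M

A = np.arange(12.0).reshape(3, 4)
i, j = 1, 2

# A J^{ij}: zeros with the i-th column of A placed in the j-th column.
AJ = A @ J(i, j, 4, 4)
assert np.allclose(AJ[:, j], A[:, i])
assert abs(AJ.sum() - A[:, i].sum()) < 1e-12   # all other columns are zero

# Tr(A J^{ij}) = (A^T)_{ij}, with J of shape m x n.
assert abs(np.trace(A @ J(i, j, 4, 3)) - A.T[i, j]) < 1e-12
```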
Assume A is n x n, J is n x m and B is m x n, then

Tr(AJ^{ij}B) = (A^T B^T)_{ij}     (329)
Tr(AJ^{ji}B) = (BA)_{ij}     (330)
Tr(AJ^{ij}J^{ij}B) = diag(A^T B^T)_{ij}     (331)

Assume A is n x n, J^{ij} is n x m, B is m x n, then

x^T A J^{ij} B x = (A^T xx^T B^T)_{ij}     (332)
x^T A J^{ij} J^{ij} B x = diag(A^T xx^T B^T)_{ij}     (333)

9.3.6 Structure Matrices

The structure matrix is defined by

dA/dA_{ij} = S^{ij}     (334)

If A has no special structure, then

S^{ij} = J^{ij}     (335)

If A is symmetric, then

S^{ij} = J^{ij} + J^{ji} - J^{ij}J^{ij}     (336)

9.4 Symmetric and Antisymmetric

9.4.1 Symmetric

The matrix A is said to be symmetric if

A = A^T     (337)

Symmetric matrices have many important properties, e.g. that their eigenvalues are real and their eigenvectors orthogonal.

9.4.2 Antisymmetric

The antisymmetric matrix is also known as the skew symmetric matrix. It has the following property, from which it is defined:

A = -A^T     (338)

Hereby, it can be seen that antisymmetric matrices always have a zero diagonal. The n x n antisymmetric matrices also have the following properties:

det(A^T) = det(-A) = (-1)^n det(A)     (339)
-det(A) = det(-A) = 0, if n is odd     (340)

The eigenvalues of an antisymmetric matrix are placed on the imaginary axis and the eigenvectors are unitary.
9.4.3 Decomposition

A square matrix A can always be written as a sum of a symmetric A_+ and an antisymmetric matrix A_-:

A = A_+ + A_-     (341)

Such a decomposition could e.g. be

A = (A + A^T)/2 + (A - A^T)/2 = A_+ + A_-     (342)

9.5 Orthogonal matrices

If a square matrix Q is orthogonal, Q^T Q = QQ^T = I. Furthermore, Q has the following properties:

- Its eigenvalues are placed on the unit circle.
- Its eigenvectors are unitary.
- det(Q) = ±1.
- The inverse of an orthogonal matrix is orthogonal too.

9.5.1 Ortho-Sym

A matrix Q_+ which simultaneously is orthogonal and symmetric is called an ortho-sym matrix [19]. Hereby

Q_+^T Q_+ = I     (343)
Q_+ = Q_+^T     (344)

The powers of an ortho-sym matrix are given by the following rule:

Q_+^k = [(1 + (-1)^k)/2] I + [(1 + (-1)^{k+1})/2] Q_+     (345)
      = [(1 + cos(kπ))/2] I + [(1 - cos(kπ))/2] Q_+     (346)

9.5.2 Ortho-Skew

A matrix which simultaneously is orthogonal and antisymmetric is called an ortho-skew matrix [19]. Hereby

Q_-^H Q_- = I     (347)
Q_- = -Q_-^H     (348)

The powers of an ortho-skew matrix are given by the following rule:

Q_-^k = [(i^k + (-i)^k)/2] I - i[(i^k - (-i)^k)/2] Q_-     (349)
      = cos(kπ/2) I + sin(kπ/2) Q_-     (350)
9.5.3 Decomposition

A square matrix A can always be written as a sum of a symmetric A_+ and an antisymmetric matrix A_-:

A = A_+ + A_-     (351)

9.6 Vandermonde Matrices

A Vandermonde matrix has the form [15]

V = [1  v_1  v_1^2  ...  v_1^{n-1};
     1  v_2  v_2^2  ...  v_2^{n-1};
     ...
     1  v_n  v_n^2  ...  v_n^{n-1}]     (352)

The transpose of V is also said to be a Vandermonde matrix. The determinant is given by [8]

det V = Prod_{i>j} (v_i - v_j)     (353)

9.7 Toeplitz Matrices

A Toeplitz matrix T is a matrix where the elements of each diagonal are the same. In the n x n square case, it has the following structure:

T = [t_{11}  t_{12}  ...  t_{1n};
     t_{21}  t_{11}  ...  ;
     ...
     t_{n1}  ...  t_{21}  t_{11}]
  = [t_0  t_1  ...  t_{n-1};
     t_{-1}  t_0  ...  ;
     ...
     t_{-(n-1)}  ...  t_{-1}  t_0]     (354)

A Toeplitz matrix is persymmetric. If a matrix is persymmetric (or orthosymmetric), it means that the matrix is symmetric about its northeast-southwest diagonal (anti-diagonal) [1]. Persymmetric matrices form a larger class of matrices, since a persymmetric matrix does not necessarily have a Toeplitz structure. There are some special cases of Toeplitz matrices. The symmetric Toeplitz matrix is given by:

T = [t_0  t_1  ...  t_{n-1};
     t_1  t_0  ...  ;
     ...
     t_{n-1}  ...  t_1  t_0]     (355)
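The Vandermonde determinant formula can be verified directly with numpy for a small set of nodes (float round-off makes large n or clustered nodes ill-conditioned, so we keep n small):

```python
import numpy as np

v = np.array([1.0, 2.0, 4.0, 7.0])
n = len(v)
V = np.vander(v, increasing=True)   # row i: [1, v_i, v_i^2, ..., v_i^{n-1}]

# det V = prod over i > j of (v_i - v_j)
prod = np.prod([v[i] - v[j] for j in range(n) for i in range(j + 1, n)])
assert np.isclose(np.linalg.det(V), prod)
```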
The circular Toeplitz matrix:

T_C = [t_0  t_1  ...  t_{n-1};
       t_{n-1}  t_0  ...  ;
       ...
       t_1  ...  t_{n-1}  t_0]     (356)

The upper triangular Toeplitz matrix:

T_U = [t_0  t_1  ...  t_{n-1};
       0  t_0  ...  ;
       ...
       0  ...  0  t_0]     (357)

and the lower triangular Toeplitz matrix:

T_L = [t_0  0  ...  0;
       t_{-1}  t_0  ...  ;
       ...
       t_{-(n-1)}  ...  t_{-1}  t_0]     (358)

9.7.1 Properties of Toeplitz Matrices

The Toeplitz matrix has some computational advantages. The addition of two Toeplitz matrices can be done with O(n) flops, and multiplication of two Toeplitz matrices can be done in O(n ln n) flops. Toeplitz equation systems can be solved in O(n^2) flops. The inverse of a positive definite Toeplitz matrix can be found in O(n^2) flops too. The inverse of a Toeplitz matrix is persymmetric. The product of two lower triangular Toeplitz matrices is a Toeplitz matrix. More information on Toeplitz matrices and circulant matrices can be found in [13, 7].

9.8 The DFT Matrix

The DFT matrix is an N x N symmetric matrix W_N whose (k, n)-th element is given by

W_N^{kn} = e^{-j2πkn/N}     (359)

Thus the discrete Fourier transform (DFT) can be expressed as

X(k) = Sum_{n=0}^{N-1} x(n) W_N^{kn}     (360)

Likewise the inverse discrete Fourier transform (IDFT) can be expressed as

x(n) = (1/N) Sum_{k=0}^{N-1} X(k) W_N^{-kn}     (361)
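The DFT matrix uses the same sign convention as numpy's FFT, so multiplying by W_N should reproduce `np.fft.fft` exactly, and the conjugate matrix divided by N gives the inverse transform. A small numpy sketch:

```python
import numpy as np

N = 8
k = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(k, k) / N)   # W_N[k, n] = e^{-j 2 pi k n / N}

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 4.0])
X = W @ x
assert np.allclose(X, np.fft.fft(x))           # DFT matrix matches the FFT
assert np.allclose(np.conj(W) @ X / N, x)      # IDFT uses W_N^{-kn} / N
assert np.allclose(W, W.T)                     # W_N is symmetric
```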
More informationLecture 5: Singular Value Decomposition SVD (1)
EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25-Sep-02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system
More informationLinear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,
More informationMath 312 Homework 1 Solutions
Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please
More informationMultidimensional data and factorial methods
Multidimensional data and factorial methods Bidimensional data x 5 4 3 4 X 3 6 X 3 5 4 3 3 3 4 5 6 x Cartesian plane Multidimensional data n X x x x n X x x x n X m x m x m x nm Factorial plane Interpretation
More informationInner products on R n, and more
Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +
More informationSolution to Homework 2
Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if
More informationLinear Threshold Units
Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear
More informationSimilarity and Diagonalization. Similar Matrices
MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that
More informationLogistic Regression. Jia Li. Department of Statistics The Pennsylvania State University. Logistic Regression
Logistic Regression Department of Statistics The Pennsylvania State University Email: jiali@stat.psu.edu Logistic Regression Preserve linear classification boundaries. By the Bayes rule: Ĝ(x) = arg max
More information1 2 3 1 1 2 x = + x 2 + x 4 1 0 1
(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which
More informationIntroduction to General and Generalized Linear Models
Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby
More informationMore than you wanted to know about quadratic forms
CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences More than you wanted to know about quadratic forms KC Border Contents 1 Quadratic forms 1 1.1 Quadratic forms on the unit
More informationVERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS
VERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS MICHAEL DRMOTA, OMER GIMENEZ, AND MARC NOY Abstract. We show that the number of vertices of a given degree k in several kinds of series-parallel labelled
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More informationRecall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.
ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?
More informationMATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar
More informationA note on companion matrices
Linear Algebra and its Applications 372 (2003) 325 33 www.elsevier.com/locate/laa A note on companion matrices Miroslav Fiedler Academy of Sciences of the Czech Republic Institute of Computer Science Pod
More informationSome Lecture Notes and In-Class Examples for Pre-Calculus:
Some Lecture Notes and In-Class Examples for Pre-Calculus: Section.7 Definition of a Quadratic Inequality A quadratic inequality is any inequality that can be put in one of the forms ax + bx + c < 0 ax
More informationLecture 1: Schur s Unitary Triangularization Theorem
Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections
More informationSpectra of Sample Covariance Matrices for Multiple Time Series
Spectra of Sample Covariance Matrices for Multiple Time Series Reimer Kühn, Peter Sollich Disordered System Group, Department of Mathematics, King s College London VIIIth Brunel-Bielefeld Workshop on Random
More informationNOV - 30211/II. 1. Let f(z) = sin z, z C. Then f(z) : 3. Let the sequence {a n } be given. (A) is bounded in the complex plane
Mathematical Sciences Paper II Time Allowed : 75 Minutes] [Maximum Marks : 100 Note : This Paper contains Fifty (50) multiple choice questions. Each question carries Two () marks. Attempt All questions.
More informationConstrained Least Squares
Constrained Least Squares Authors: G.H. Golub and C.F. Van Loan Chapter 12 in Matrix Computations, 3rd Edition, 1996, pp.580-587 CICN may05/1 Background The least squares problem: min Ax b 2 x Sometimes,
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More information1 VECTOR SPACES AND SUBSPACES
1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)
More informationa 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4)
ROOTS OF POLYNOMIAL EQUATIONS In this unit we discuss polynomial equations. A polynomial in x of degree n, where n 0 is an integer, is an expression of the form P n (x) =a n x n + a n 1 x n 1 + + a 1 x
More informationBrief Introduction to Vectors and Matrices
CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued
More informationMULTIVARIATE PROBABILITY DISTRIBUTIONS
MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationIntroduction to Matrix Algebra
4 Introduction to Matrix Algebra In the previous chapter, we learned the algebraic results that form the foundation for the study of factor analysis and structural equation modeling. These results, powerful
More informationHøgskolen i Narvik Sivilingeniørutdanningen STE6237 ELEMENTMETODER. Oppgaver
Høgskolen i Narvik Sivilingeniørutdanningen STE637 ELEMENTMETODER Oppgaver Klasse: 4.ID, 4.IT Ekstern Professor: Gregory A. Chechkin e-mail: chechkin@mech.math.msu.su Narvik 6 PART I Task. Consider two-point
More information