The Matrix Cookbook [ http://matrixcookbook.com ] Kaare Brandt Petersen, Michael Syskind Pedersen. Version: November 15, 2012



Introduction

What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list.

Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.

It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing and the version will be apparent from the date in the header.

Suggestions: Your suggestion for additional content or elaboration of some topics is most welcome at cookbook@2302.dk.

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.

Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rishøj, Christian Schröppel, Dan Boley, Douglas L. Theobald, Esben Hoegh-Rasmussen, Evripidis Karseras, Georg Martius, Glynne Casteel, Jan Larsen, Jun Bin Gao, Jürgen Struckmeier, Kamil Dedecius, Karim T. Abou-Moustafa, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Markus Froeb, Michael Hubatka, Miguel Barão, Ole Winther, Pavel Sakov, Stephan Hattinger, Troels Pedersen, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like to thank The Oticon Foundation for funding our PhD studies.

Contents

1 Basics
  1.1 Trace
  1.2 Determinant
  1.3 The Special Case 2x2
2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Eigenvalues
  2.4 Derivatives of Matrices, Vectors and Scalar Forms
  2.5 Derivatives of Traces
  2.6 Derivatives of vector norms
  2.7 Derivatives of matrix norms
  2.8 Derivatives of Structured Matrices
3 Inverses
  3.1 Basic
  3.2 Exact Relations
  3.3 Implication on Inverses
  3.4 Approximations
  3.5 Generalized Inverse
  3.6 Pseudo Inverse
4 Complex Matrices
  4.1 Complex Derivatives
  4.2 Higher order and non-linear derivatives
  4.3 Inverse of complex sum
5 Solutions and Decompositions
  5.1 Solutions to linear equations
  5.2 Eigenvalues and Eigenvectors
  5.3 Singular Value Decomposition
  5.4 Triangular Decomposition
  5.5 LU decomposition
  5.6 LDM decomposition
  5.7 LDL decompositions
6 Statistics and Probability
  6.1 Definition of Moments
  6.2 Expectation of Linear Combinations
  6.3 Weighted Scalar Variable
7 Multivariate Distributions
  7.1 Cauchy
  7.2 Dirichlet
  7.3 Normal
  7.4 Normal-Inverse Gamma
  7.5 Gaussian
  7.6 Multinomial
  7.7 Student's t
  7.8 Wishart
  7.9 Wishart, Inverse
8 Gaussians
  8.1 Basics
  8.2 Moments
  8.3 Miscellaneous
  8.4 Mixture of Gaussians
9 Special Matrices
  9.1 Block matrices
  9.2 Discrete Fourier Transform Matrix, The
  9.3 Hermitian Matrices and skew-Hermitian
  9.4 Idempotent Matrices
  9.5 Orthogonal matrices
  9.6 Positive Definite and Semi-definite Matrices
  9.7 Singleentry Matrix, The
  9.8 Symmetric, Skew-symmetric/Antisymmetric
  9.9 Toeplitz Matrices
  9.10 Transition matrices
  9.11 Units, Permutation and Shift
  9.12 Vandermonde Matrices
10 Functions and Operators
  10.1 Functions and Series
  10.2 Kronecker and Vec Operator
  10.3 Vector Norms
  10.4 Matrix Norms
  10.5 Rank
  10.6 Integral Involving Dirac Delta Functions
  10.7 Miscellaneous
A One-dimensional Results
  A.1 Gaussian
  A.2 One Dimensional Mixture of Gaussians
B Proofs and Details
  B.1 Misc Proofs

Notation and Nomenclature

A         Matrix
A_{ij}    Matrix indexed for some purpose
A_i       Matrix indexed for some purpose
A^{ij}    Matrix indexed for some purpose
A^n       Matrix indexed for some purpose or the n.th power of a square matrix
A^{-1}    The inverse matrix of the matrix A
A^+       The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A^{1/2}   The square root of a matrix (if unique), not elementwise
(A)_{ij}  The (i, j).th entry of the matrix A
A_{ij}    The (i, j).th entry of the matrix A
[A]_{ij}  The ij-submatrix, i.e. A with i.th row and j.th column deleted
a         Vector (column-vector)
a_i       Vector indexed for some purpose
a_i       The i.th element of the vector a
a         Scalar
ℜz        Real part of a scalar
ℜz        Real part of a vector
ℜZ        Real part of a matrix
ℑz        Imaginary part of a scalar
ℑz        Imaginary part of a vector
ℑZ        Imaginary part of a matrix
det(A)    Determinant of A
Tr(A)     Trace of the matrix A
diag(A)   Diagonal matrix of the matrix A, i.e. (diag(A))_{ij} = δ_{ij} A_{ij}
eig(A)    Eigenvalues of the matrix A
vec(A)    The vector-version of the matrix A (see Sec. 10.2.2)
sup       Supremum of a set
||A||     Matrix norm (subscript if any denotes what norm)
A^T       Transposed matrix
A^{-T}    The inverse of the transposed and vice versa, A^{-T} = (A^{-1})^T = (A^T)^{-1}
A^*       Complex conjugated matrix
A^H       Transposed and complex conjugated matrix (Hermitian)
A ∘ B     Hadamard (elementwise) product
A ⊗ B     Kronecker product
0         The null matrix. Zero in all entries.
I         The identity matrix
J^{ij}    The single-entry matrix, 1 at (i, j) and zero elsewhere
Σ         A positive definite matrix
Λ         A diagonal matrix

1 Basics

(AB)^{-1} = B^{-1}A^{-1}   (1)
(ABC...)^{-1} = ...C^{-1}B^{-1}A^{-1}   (2)
(A^T)^{-1} = (A^{-1})^T   (3)
(A + B)^T = A^T + B^T   (4)
(AB)^T = B^T A^T   (5)
(ABC...)^T = ...C^T B^T A^T   (6)
(A^H)^{-1} = (A^{-1})^H   (7)
(A + B)^H = A^H + B^H   (8)
(AB)^H = B^H A^H   (9)
(ABC...)^H = ...C^H B^H A^H   (10)

1.1 Trace

Tr(A) = \sum_i A_{ii}   (11)
Tr(A) = \sum_i λ_i,  λ_i = eig(A)   (12)
Tr(A) = Tr(A^T)   (13)
Tr(AB) = Tr(BA)   (14)
Tr(A + B) = Tr(A) + Tr(B)   (15)
Tr(ABC) = Tr(BCA) = Tr(CAB)   (16)
a^T a = Tr(aa^T)   (17)

1.2 Determinant

Let A be an n x n matrix.
det(A) = \prod_i λ_i,  λ_i = eig(A)   (18)
det(cA) = c^n det(A),  if A ∈ R^{n x n}   (19)
det(A^T) = det(A)   (20)
det(AB) = det(A) det(B)   (21)
det(A^{-1}) = 1/det(A)   (22)
det(A^n) = det(A)^n   (23)
det(I + uv^T) = 1 + u^T v   (24)
For n = 2:
det(I + A) = 1 + det(A) + Tr(A)   (25)
For n = 3:
det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 - (1/2)Tr(A^2)   (26)
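
These identities are easy to spot-check numerically. The following minimal NumPy sketch (an editorial addition, not part of the Cookbook itself) verifies the cyclic trace properties (14) and (16) and the rank-one determinant identity (24) on random matrices:

    # Sanity check of (14), (16) and (24) on random matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
    u, v = rng.standard_normal(4), rng.standard_normal(4)

    # (14), (16): the trace is invariant under cyclic permutations.
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))
    assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))

    # (24): determinant of a rank-one update of the identity.
    assert np.isclose(np.linalg.det(np.eye(4) + np.outer(u, v)), 1 + u @ v)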

For n = 4:
det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 - (1/2)Tr(A^2) + (1/6)Tr(A)^3 - (1/2)Tr(A)Tr(A^2) + (1/3)Tr(A^3)   (27)

For small ε, the following approximation holds
det(I + εA) ≅ 1 + det(A) + εTr(A) + (1/2)ε^2 Tr(A)^2 - (1/2)ε^2 Tr(A^2)   (28)

1.3 The Special Case 2x2

Consider the matrix A
A = [A_{11} A_{12}; A_{21} A_{22}]

Determinant and trace
det(A) = A_{11}A_{22} - A_{12}A_{21}   (29)
Tr(A) = A_{11} + A_{22}   (30)

Eigenvalues
λ^2 - λ Tr(A) + det(A) = 0
λ_1 = (Tr(A) + sqrt(Tr(A)^2 - 4 det(A)))/2
λ_2 = (Tr(A) - sqrt(Tr(A)^2 - 4 det(A)))/2
λ_1 + λ_2 = Tr(A)
λ_1 λ_2 = det(A)

Eigenvectors
v_1 ∝ [A_{12}; λ_1 - A_{11}]
v_2 ∝ [A_{12}; λ_2 - A_{11}]

Inverse
A^{-1} = (1/det(A)) [A_{22} -A_{12}; -A_{21} A_{11}]   (31)
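
As a quick illustration (again an editorial addition, not from the Cookbook), the sketch below reproduces the 2x2 eigenvalue and inverse formulas (29)-(31) and compares them against NumPy's general routines:

    # Check of the 2x2 formulas (29)-(31).
    import numpy as np

    A = np.array([[3.0, 1.0], [2.0, 4.0]])
    tr, det = np.trace(A), np.linalg.det(A)

    # Eigenvalues from lambda^2 - lambda*Tr(A) + det(A) = 0.
    disc = np.sqrt(tr**2 - 4 * det)
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    assert np.allclose(sorted([lam1, lam2]), sorted(np.linalg.eigvals(A).real))

    # Inverse via (31): swap the diagonal, negate the off-diagonal, divide by det.
    Ainv = np.array([[A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]]) / det
    assert np.allclose(Ainv, np.linalg.inv(A))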

2 Derivatives

This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.8 for differentiation of structured matrices. The basic assumptions can be written in a formula as
∂X_{kl}/∂X_{ij} = δ_{ik} δ_{lj}   (32)
that is, for e.g. vector forms,
[∂x/∂y]_i = ∂x_i/∂y,   [∂x/∂y]_i = ∂x/∂y_i,   [∂x/∂y]_{ij} = ∂x_i/∂y_j

The following rules are general and very useful when deriving the differential of an expression ([19]):
∂A = 0  (A is a constant)   (33)
∂(αX) = α ∂X   (34)
∂(X + Y) = ∂X + ∂Y   (35)
∂(Tr(X)) = Tr(∂X)   (36)
∂(XY) = (∂X)Y + X(∂Y)   (37)
∂(X ∘ Y) = (∂X) ∘ Y + X ∘ (∂Y)   (38)
∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y)   (39)
∂(X^{-1}) = -X^{-1}(∂X)X^{-1}   (40)
∂(det(X)) = Tr(adj(X) ∂X)   (41)
∂(det(X)) = det(X) Tr(X^{-1} ∂X)   (42)
∂(ln(det(X))) = Tr(X^{-1} ∂X)   (43)
∂X^T = (∂X)^T   (44)
∂X^H = (∂X)^H   (45)

2.1 Derivatives of a Determinant

2.1.1 General form

∂det(Y)/∂x = det(Y) Tr[Y^{-1} ∂Y/∂x]   (46)
\sum_k ∂det(X)/∂X_{ik} X_{jk} = δ_{ij} det(X)   (47)
∂^2 det(Y)/∂x^2 = det(Y) [ Tr[Y^{-1} ∂^2 Y/∂x^2] + Tr[Y^{-1} ∂Y/∂x] Tr[Y^{-1} ∂Y/∂x] - Tr[(Y^{-1} ∂Y/∂x)(Y^{-1} ∂Y/∂x)] ]   (48)

2.1.2 Linear forms

∂det(X)/∂X = det(X)(X^{-1})^T   (49)
\sum_k ∂det(X)/∂X_{ik} X_{jk} = δ_{ij} det(X)   (50)
∂det(AXB)/∂X = det(AXB)(X^{-1})^T = det(AXB)(X^T)^{-1}   (51)

2.1.3 Square forms

If X is square and invertible, then
∂det(X^T A X)/∂X = 2 det(X^T A X) X^{-T}   (52)
If X is not square but A is symmetric, then
∂det(X^T A X)/∂X = 2 det(X^T A X) A X (X^T A X)^{-1}   (53)
If X is not square and A is not symmetric, then
∂det(X^T A X)/∂X = det(X^T A X)(A X (X^T A X)^{-1} + A^T X (X^T A^T X)^{-1})   (54)

2.1.4 Other nonlinear forms

Some special cases are (see [9, 7])
∂ln det(X^T X)/∂X = 2(X^+)^T   (55)
∂ln det(X^T X)/∂X^+ = -2X^T   (56)
∂ln |det(X)|/∂X = (X^{-1})^T = (X^T)^{-1}   (57)
∂det(X^k)/∂X = k det(X^k) X^{-T}   (58)

2.2 Derivatives of an Inverse

From [7] we have the basic identity
∂Y^{-1}/∂x = -Y^{-1} (∂Y/∂x) Y^{-1}   (59)
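
Identities such as (49) can be validated with finite differences. The following sketch (an editorial addition, assuming NumPy) compares a central-difference gradient of det(X) with the closed form det(X) X^{-T}:

    # Finite-difference check of (49): d det(X)/dX = det(X) X^{-T}.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((4, 4))
    eps = 1e-6

    grad_fd = np.zeros_like(X)
    for i in range(4):
        for j in range(4):
            E = np.zeros_like(X)
            E[i, j] = eps
            grad_fd[i, j] = (np.linalg.det(X + E) - np.linalg.det(X - E)) / (2 * eps)

    grad_closed = np.linalg.det(X) * np.linalg.inv(X).T
    assert np.allclose(grad_fd, grad_closed, atol=1e-5)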

from which it follows
∂(X^{-1})_{kl}/∂X_{ij} = -(X^{-1})_{ki}(X^{-1})_{jl}   (60)
∂a^T X^{-1} b/∂X = -X^{-T} a b^T X^{-T}   (61)
∂det(X^{-1})/∂X = -det(X^{-1})(X^{-1})^T   (62)
∂Tr(A X^{-1} B)/∂X = -(X^{-1} B A X^{-1})^T   (63)
∂Tr((X + A)^{-1})/∂X = -((X + A)^{-1}(X + A)^{-1})^T   (64)

From [3] we have the following result: Let A be an n x n invertible square matrix, W be the inverse of A, and J(A) an n x n-variate and differentiable function with respect to A. Then the partial differentials of J with respect to A and W satisfy
∂J/∂A = -A^{-T} (∂J/∂W) A^{-T}

2.3 Derivatives of Eigenvalues

∂/∂X \sum eig(X) = ∂/∂X Tr(X) = I   (65)
∂/∂X \prod eig(X) = ∂/∂X det(X) = det(X) X^{-T}   (66)
If A is real and symmetric, λ_i and v_i are distinct eigenvalues and eigenvectors of A (see (276)) with v_i^T v_i = 1, then [33]
∂λ_i = v_i^T ∂(A) v_i   (67)
∂v_i = (λ_i I - A)^+ ∂(A) v_i   (68)

2.4 Derivatives of Matrices, Vectors and Scalar Forms

2.4.1 First Order

∂x^T a/∂x = ∂a^T x/∂x = a   (69)
∂a^T X b/∂X = a b^T   (70)
∂a^T X^T b/∂X = b a^T   (71)
∂a^T X a/∂X = ∂a^T X^T a/∂X = a a^T   (72)
∂X/∂X_{ij} = J^{ij}   (73)
∂(XA)_{ij}/∂X_{mn} = δ_{im}(A)_{nj} = (J^{mn} A)_{ij}   (74)
∂(X^T A)_{ij}/∂X_{mn} = δ_{in}(A)_{mj} = (J^{nm} A)_{ij}   (75)
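
The basic identity (59) can be checked along a line Y(t) = Y0 + tV. A minimal editorial sketch, assuming NumPy:

    # Check of (59) along Y(t) = Y0 + t*V: dY^{-1}/dt = -Y^{-1} (dY/dt) Y^{-1}.
    import numpy as np

    rng = np.random.default_rng(2)
    Y0, V = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
    inv = np.linalg.inv
    eps = 1e-6

    # Central finite difference of t -> (Y0 + t V)^{-1} at t = 0.
    d_fd = (inv(Y0 + eps * V) - inv(Y0 - eps * V)) / (2 * eps)
    d_closed = -inv(Y0) @ V @ inv(Y0)
    assert np.allclose(d_fd, d_closed, atol=1e-4)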

2.4.2 Second Order

∂/∂X_{ij} \sum_{klmn} X_{kl} X_{mn} = 2 \sum_{kl} X_{kl}   (76)
∂b^T X^T X c/∂X = X(b c^T + c b^T)   (77)
∂(Bx + b)^T C (Dx + d)/∂x = B^T C (Dx + d) + D^T C^T (Bx + b)   (78)
∂(X^T B X)_{kl}/∂X_{ij} = δ_{lj}(X^T B)_{ki} + δ_{kj}(B X)_{il}   (79)
∂(X^T B X)/∂X_{ij} = X^T B J^{ij} + J^{ji} B X,   (J^{ij})_{kl} = δ_{ik} δ_{jl}   (80)
See Sec 9.7 for useful properties of the single-entry matrix J^{ij}.
∂x^T B x/∂x = (B + B^T) x   (81)
∂b^T X^T D X c/∂X = D^T X b c^T + D X c b^T   (82)
∂/∂X (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T   (83)
Assume W is symmetric, then
∂/∂s (x - As)^T W (x - As) = -2 A^T W (x - As)   (84)
∂/∂x (x - s)^T W (x - s) = 2 W (x - s)   (85)
∂/∂s (x - s)^T W (x - s) = -2 W (x - s)   (86)
∂/∂x (x - As)^T W (x - As) = 2 W (x - As)   (87)
∂/∂A (x - As)^T W (x - As) = -2 W (x - As) s^T   (88)
As a case with complex values the following holds
∂(a - x^H b)^2/∂x = -2 b (a - x^H b)^*   (89)
This formula is also known from the LMS algorithm [14].

2.4.3 Higher-order and non-linear

∂(X^n)_{kl}/∂X_{ij} = \sum_{r=0}^{n-1} (X^r J^{ij} X^{n-1-r})_{kl}   (90)
For proof of the above, see B.1.3.
∂/∂X a^T X^n b = \sum_{r=0}^{n-1} (X^r)^T a b^T (X^{n-1-r})^T   (91)

∂/∂X a^T (X^n)^T X^n b = \sum_{r=0}^{n-1} [ X^{n-1-r} a b^T (X^n)^T X^r + (X^r)^T X^n a b^T (X^{n-1-r})^T ]   (92)
See B.1.3 for a proof.

Assume s and r are functions of x, i.e. s = s(x), r = r(x), and that A is a constant, then
∂/∂x s^T A r = [∂s/∂x]^T A r + [∂r/∂x]^T A^T s   (93)
∂/∂x ((Ax)^T(Ax))/((Bx)^T(Bx)) = ∂/∂x (x^T A^T A x)/(x^T B^T B x)   (94)
 = 2 (A^T A x)/(x^T B^T B x) - 2 (x^T A^T A x)(B^T B x)/(x^T B^T B x)^2   (95)

2.4.4 Gradient and Hessian

Using the above we have for the gradient and the Hessian
f = x^T A x + b^T x   (96)
∇_x f = ∂f/∂x = (A + A^T) x + b   (97)
∂^2 f/∂x∂x^T = A + A^T   (98)

2.5 Derivatives of Traces

Assume F(X) to be a differentiable function of each of the elements of X. It then holds that
∂Tr(F(X))/∂X = f(X)^T
where f(·) is the scalar derivative of F(·).

2.5.1 First Order

∂Tr(X)/∂X = I   (99)
∂Tr(XA)/∂X = A^T   (100)
∂Tr(AXB)/∂X = A^T B^T   (101)
∂Tr(A X^T B)/∂X = B A   (102)
∂Tr(X^T A)/∂X = A   (103)
∂Tr(A X^T)/∂X = A   (104)
∂Tr(A ⊗ X)/∂X = Tr(A) I   (105)
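
A finite-difference check of a representative trace derivative, here (100), d Tr(XA)/dX = A^T; an editorial sketch assuming NumPy:

    # Finite-difference check of (100): d Tr(XA)/dX = A^T.
    import numpy as np

    rng = np.random.default_rng(3)
    X, A = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    eps = 1e-6

    grad_fd = np.zeros_like(X)
    for i in range(3):
        for j in range(3):
            E = np.zeros_like(X)
            E[i, j] = eps
            grad_fd[i, j] = (np.trace((X + E) @ A) - np.trace((X - E) @ A)) / (2 * eps)

    assert np.allclose(grad_fd, A.T)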

2.5.2 Second Order

See [7].
∂Tr(X^2)/∂X = 2 X^T   (106)
∂Tr(X^2 B)/∂X = (XB + BX)^T   (107)
∂Tr(X^T B X)/∂X = B X + B^T X   (108)
∂Tr(B X X^T)/∂X = B X + B^T X   (109)
∂Tr(X X^T B)/∂X = B X + B^T X   (110)
∂Tr(X B X^T)/∂X = X B^T + X B   (111)
∂Tr(B X^T X)/∂X = X B^T + X B   (112)
∂Tr(X^T X B)/∂X = X B^T + X B   (113)
∂Tr(A X B X)/∂X = A^T X^T B^T + B^T X^T A^T   (114)
∂Tr(X^T X)/∂X = ∂Tr(X X^T)/∂X = 2 X   (115)
∂Tr(B^T X^T C X B)/∂X = C^T X B B^T + C X B B^T   (116)
∂Tr[X^T B X C]/∂X = B X C + B^T X C^T   (117)
∂Tr(A X B X^T C)/∂X = A^T C^T X B^T + C A X B   (118)
∂Tr[(A X B + C)(A X B + C)^T]/∂X = 2 A^T (A X B + C) B^T   (119)

2.5.3 Higher Order

∂Tr(X ⊗ X)/∂X = ∂Tr(X)Tr(X)/∂X = 2 Tr(X) I   (120)
∂Tr(X^k)/∂X = k (X^{k-1})^T   (121)
∂Tr(A X^k)/∂X = \sum_{r=0}^{k-1} (X^r A X^{k-r-1})^T   (122)
∂Tr[B^T X^T C X X^T C X B]/∂X = C X X^T C X B B^T + C^T X B B^T X^T C^T X + C X B B^T X^T C X + C^T X X^T C^T X B B^T   (123)

2.5.4 Other

∂Tr(A X^{-1} B)/∂X = -(X^{-1} B A X^{-1})^T = -X^{-T} A^T B^T X^{-T}   (124)
Assume B and C to be symmetric, then
∂Tr[(X^T C X)^{-1} A]/∂X = -(C X (X^T C X)^{-1})(A + A^T)(X^T C X)^{-1}   (125)
∂Tr[(X^T C X)^{-1}(X^T B X)]/∂X = -2 C X (X^T C X)^{-1} X^T B X (X^T C X)^{-1} + 2 B X (X^T C X)^{-1}   (126)
∂Tr[(A + X^T C X)^{-1}(X^T B X)]/∂X = -2 C X (A + X^T C X)^{-1} X^T B X (A + X^T C X)^{-1} + 2 B X (A + X^T C X)^{-1}   (127)
See [7].
∂Tr(sin(X))/∂X = cos(X)^T   (128)

2.6 Derivatives of vector norms

2.6.1 Two-norm

∂/∂x ||x - a||_2 = (x - a)/||x - a||_2   (129)
∂/∂x (x - a)/||x - a||_2 = I/||x - a||_2 - (x - a)(x - a)^T/||x - a||_2^3   (130)
∂||x||_2^2/∂x = ∂(x^T x)/∂x = 2x   (131)

2.7 Derivatives of matrix norms

For more on matrix norms, see Sec. 10.4.

2.7.1 Frobenius norm

∂/∂X ||X||_F^2 = ∂/∂X Tr(X X^H) = 2X   (132)
See (248). Note that this is also a special case of the result in equation (119).

2.8 Derivatives of Structured Matrices

Assume that the matrix A has some structure, i.e. it is symmetric, Toeplitz, etc. In that case the derivatives of the previous section do not apply in general. Instead, consider the following general rule for differentiating a scalar function f(A):
df/dA_{ij} = \sum_{kl} (∂f/∂A_{kl})(∂A_{kl}/∂A_{ij}) = Tr[ (∂f/∂A)^T ∂A/∂A_{ij} ]   (133)

The matrix differentiated with respect to itself is in this document referred to as the structure matrix of A and is defined simply by
∂A/∂A_{ij} = S^{ij}   (134)
If A has no special structure we have simply S^{ij} = J^{ij}, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices; see Sec. 9.7.6 for more examples of structure matrices.

2.8.1 The Chain Rule

Sometimes the objective is to find the derivative of a matrix which is a function of another matrix. Let U = f(X); the goal is to find the derivative of the function g(U) with respect to X:
∂g(U)/∂X = ∂g(f(X))/∂X   (135)
The Chain Rule can then be written the following way:
∂g(U)/∂X_{ij} = \sum_{k=1}^{M} \sum_{l=1}^{N} (∂g(U)/∂u_{kl}) (∂u_{kl}/∂x_{ij})   (136)
Using matrix notation, this can be written as
∂g(U)/∂X_{ij} = Tr[ (∂g(U)/∂U)^T ∂U/∂X_{ij} ].   (137)

2.8.2 Symmetric

If A is symmetric, then S^{ij} = J^{ij} + J^{ji} - J^{ij} J^{ij} and therefore
df/dA = [∂f/∂A] + [∂f/∂A]^T - diag[∂f/∂A]   (138)
That is, e.g. ([5]):
∂Tr(AX)/∂X = A + A^T - (A ∘ I),  see (142)   (139)
∂det(X)/∂X = det(X)(2X^{-1} - (X^{-1} ∘ I))   (140)
∂ln det(X)/∂X = 2X^{-1} - (X^{-1} ∘ I)   (141)

2.8.3 Diagonal

If X is diagonal, then ([19]):
∂Tr(AX)/∂X = A ∘ I   (142)
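
The symmetric-structure rule can be confirmed numerically by perturbing only the independent entries X_ij = X_ji. The sketch below (an editorial addition, assuming NumPy) checks (141) for a symmetric positive definite X:

    # Check of (141): for symmetric X, d ln det(X)/dX = 2 X^{-1} - (X^{-1} o I).
    import numpy as np

    rng = np.random.default_rng(4)
    S = rng.standard_normal((3, 3))
    X = S @ S.T + 3 * np.eye(3)          # symmetric positive definite
    eps = 1e-6

    grad_fd = np.zeros_like(X)
    for i in range(3):
        for j in range(i, 3):
            E = np.zeros_like(X)
            E[i, j] = eps                # perturb the (i, j) entry ...
            E[j, i] = eps                # ... and its mirror, keeping X symmetric
            grad_fd[i, j] = grad_fd[j, i] = (
                np.log(np.linalg.det(X + E)) - np.log(np.linalg.det(X - E))) / (2 * eps)

    Xi = np.linalg.inv(X)
    grad_closed = 2 * Xi - Xi * np.eye(3)   # 2 X^{-1} minus its diagonal part
    assert np.allclose(grad_fd, grad_closed, atol=1e-5)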

2.8.4 Toeplitz

Like symmetric and diagonal matrices, Toeplitz matrices also have a special structure which should be taken into account when forming the derivative with respect to a matrix with Toeplitz structure:

∂Tr(AT)/∂T = ∂Tr(TA)/∂T =
    [ Tr(A)            Tr([A^T]_{n1})   ···              A_{n1}         ]
    [ Tr([A^T]_{1n})   Tr(A)            ⋱                ⋮              ]
    [ ⋮                ⋱                ⋱                Tr([A^T]_{n1}) ]
    [ A_{1n}           ···              Tr([A^T]_{1n})   Tr(A)          ]
 ≡ α(A)   (143)

As can be seen, the derivative α(A) also has a Toeplitz structure. Each value on the diagonal is the sum of all the diagonal values in A; the values on the diagonals next to the main diagonal equal the sum of the diagonal next to the main diagonal in A^T, and so on. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix is also symmetric, the same derivative yields
∂Tr(AT)/∂T = ∂Tr(TA)/∂T = α(A) + α(A)^T - α(A) ∘ I   (144)

3 Inverses

3.1 Basic

3.1.1 Definition

The inverse A^{-1} of a matrix A ∈ C^{n x n} is defined such that
A A^{-1} = A^{-1} A = I,   (145)
where I is the n x n identity matrix. If A^{-1} exists, A is said to be nonsingular. Otherwise, A is said to be singular (see e.g. [1]).

3.1.2 Cofactors and Adjoint

The submatrix of a matrix A, denoted by [A]_{ij}, is an (n-1) x (n-1) matrix obtained by deleting the ith row and the jth column of A. The (i, j) cofactor of a matrix is defined as
cof(A, i, j) = (-1)^{i+j} det([A]_{ij})   (146)
The matrix of cofactors can be created from the cofactors
cof(A) = [ cof(A,1,1) ··· cof(A,1,n) ; ⋮ cof(A,i,j) ⋮ ; cof(A,n,1) ··· cof(A,n,n) ]   (147)
The adjoint matrix is the transpose of the cofactor matrix
adj(A) = (cof(A))^T   (148)

3.1.3 Determinant

The determinant of a matrix A ∈ C^{n x n} is defined as (see [1])
det(A) = \sum_{j=1}^{n} (-1)^{j+1} A_{1j} det([A]_{1j})   (149)
 = \sum_{j=1}^{n} A_{1j} cof(A, 1, j)   (150)

3.1.4 Construction

The inverse matrix can be constructed, using the adjoint matrix, by
A^{-1} = adj(A)/det(A)   (151)
For the case of 2x2 matrices, see section 1.3.
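
The construction (146)-(151) translates directly into code. A minimal editorial sketch (the helper name cofactor_inverse is ours; it costs O(n^4) and is for illustration only, np.linalg.inv should be used in practice):

    # Inverse via cofactors and the adjoint, following (146)-(151).
    import numpy as np

    def cofactor_inverse(A):
        n = A.shape[0]
        cof = np.empty_like(A)
        for i in range(n):
            for j in range(n):
                # [A]_ij: A with row i and column j deleted.
                sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
                cof[i, j] = (-1) ** (i + j) * np.linalg.det(sub)   # (146)
        return cof.T / np.linalg.det(A)                            # adj(A)/det(A), (148), (151)

    A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
    assert np.allclose(cofactor_inverse(A), np.linalg.inv(A))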

3.1.5 Condition number

The condition number of a matrix c(A) is the ratio between the largest and the smallest singular value of a matrix (see Section 5.3 on singular values),
c(A) = d_+/d_-   (152)
The condition number can be used to measure how singular a matrix is. If the condition number is large, it indicates that the matrix is nearly singular. The condition number can also be estimated from the matrix norms. Here
c(A) = ||A|| · ||A^{-1}||,   (153)
where ||·|| is a norm such as e.g. the 1-norm, the 2-norm, the ∞-norm or the Frobenius norm (see Sec 10.4 for more on matrix norms).
The 2-norm of A equals sqrt(max(eig(A^H A))) ([1, p.57]). For a symmetric matrix, this reduces to ||A||_2 = max(|eig(A)|) ([1, p.394]). If the matrix is symmetric and positive definite, ||A||_2 = max(eig(A)). The condition number based on the 2-norm thus reduces to
||A||_2 ||A^{-1}||_2 = max(eig(A)) max(eig(A^{-1})) = max(eig(A))/min(eig(A)).   (154)

3.2 Exact Relations

3.2.1 Basic

(AB)^{-1} = B^{-1} A^{-1}   (155)

3.2.2 The Woodbury identity

The Woodbury identity comes in many variants. The latter of the two below can be found in [1]:
(A + CBC^T)^{-1} = A^{-1} - A^{-1} C (B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}   (156)
(A + UBV)^{-1} = A^{-1} - A^{-1} U (B^{-1} + V A^{-1} U)^{-1} V A^{-1}   (157)
If P, R are positive definite, then (see [30])
(P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1}   (158)

3.2.3 The Kailath Variant

(A + BC)^{-1} = A^{-1} - A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}   (159)
See [4, page 153].

3.2.4 Sherman-Morrison

(A + b c^T)^{-1} = A^{-1} - (A^{-1} b c^T A^{-1})/(1 + c^T A^{-1} b)   (160)
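
The Woodbury identity (156) is often used to invert "diagonal plus low-rank" matrices cheaply. The following editorial NumPy sketch verifies it on a random instance:

    # Numerical check of the Woodbury identity (156).
    import numpy as np

    rng = np.random.default_rng(5)
    n, k = 6, 2
    A = np.diag(rng.uniform(1, 2, n))      # easy-to-invert "base" matrix
    C = rng.standard_normal((n, k))
    B = np.diag(rng.uniform(1, 2, k))

    inv = np.linalg.inv
    lhs = inv(A + C @ B @ C.T)
    rhs = inv(A) - inv(A) @ C @ inv(inv(B) + C.T @ inv(A) @ C) @ C.T @ inv(A)
    assert np.allclose(lhs, rhs)

The point of the identity is that the inner inverse is only k x k, which is much cheaper than the n x n inverse when k << n.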

3.2.5 The Searle Set of Identities

The following set of identities can be found in [5, page 151]:
(I + A^{-1})^{-1} = A(A + I)^{-1}   (161)
(A + B B^T)^{-1} B = A^{-1} B (I + B^T A^{-1} B)^{-1}   (162)
(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1} B = B(A + B)^{-1} A   (163)
A - A(A + B)^{-1} A = B - B(A + B)^{-1} B   (164)
A^{-1} + B^{-1} = A^{-1}(A + B) B^{-1}   (165)
(I + AB)^{-1} = I - A(I + BA)^{-1} B   (166)
(I + AB)^{-1} A = A(I + BA)^{-1}   (167)

3.2.6 Rank-1 update of inverse of inner product

Denote A = (X^T X)^{-1} and assume that X is extended to include a new column vector at the end, X̃ = [X v]. Then [34]
(X̃^T X̃)^{-1} = [ A + (A X^T v v^T X A^T)/γ    -(A X^T v)/γ ;
                  -(v^T X A^T)/γ               1/γ ]
where γ ≡ v^T v - v^T X A X^T v.

3.2.7 Rank-1 update of Moore-Penrose Inverse

The following is a rank-1 update for the Moore-Penrose pseudo-inverse of real valued matrices; a proof can be found in [18]. The matrix G is defined below:
(A + c d^T)^+ = A^+ + G   (168)
Using the notation
β = 1 + d^T A^+ c   (169)
v = A^+ c   (170)
n = (A^+)^T d   (171)
w = (I - A A^+) c   (172)
m = (I - A^+ A)^T d   (173)
the solution is given as six different cases, depending on the entities ||w||, ||m||, and β. Please note that for any (column) vector v it holds that v^+ = v^T (v^T v)^{-1} = v^T/||v||^2. The solution is:

Case 1 of 6: If w ≠ 0 and m ≠ 0. Then
G = -v w^+ - (m^+)^T n^T + β (m^+)^T w^+   (174)
 = -(1/||w||^2) v w^T - (1/||m||^2) m n^T + (β/(||m||^2 ||w||^2)) m w^T   (175)

Case 2 of 6: If w = 0 and m ≠ 0 and β = 0. Then
G = -v v^+ A^+ - (m^+)^T n^T   (176)
 = -(1/||v||^2) v v^T A^+ - (1/||m||^2) m n^T   (177)

Case 3 of 6: If w = 0 and β ≠ 0. Then
G = (1/β) m v^T A^+ - (β/(||v||^2 ||m||^2 + |β|^2)) ((||v||^2/β) m + v) ((||m||^2/β) (A^+)^T v + n)^T   (178)

Case 4 of 6: If w ≠ 0 and m = 0 and β = 0. Then
G = -A^+ n n^+ - v w^+   (179)
 = -(1/||n||^2) A^+ n n^T - (1/||w||^2) v w^T   (180)

Case 5 of 6: If m = 0 and β ≠ 0. Then
G = (1/β) A^+ n w^T - (β/(||n||^2 ||w||^2 + |β|^2)) ((||w||^2/β) A^+ n + v) ((||n||^2/β) w + n)^T   (181)

Case 6 of 6: If w = 0 and m = 0 and β = 0. Then
G = -v v^+ A^+ - A^+ n n^+ + v^+ A^+ n v n^+   (182)
 = -(1/||v||^2) v v^T A^+ - (1/||n||^2) A^+ n n^T + (v^T A^+ n/(||v||^2 ||n||^2)) v n^T   (183)

3.3 Implication on Inverses

If (A + B)^{-1} = A^{-1} + B^{-1} then A B^{-1} A = B A^{-1} B   (184)
See [5].

3.3.1 A PosDef identity

Assume P, R to be positive definite and invertible, then
(P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1}   (185)
See [30].

3.4 Approximations

The following identity is known as the Neumann series of a matrix, which holds when |λ_i| < 1 for all eigenvalues λ_i:
(I - A)^{-1} = \sum_{n=0}^{∞} A^n   (186)
which is equivalent to
(I + A)^{-1} = \sum_{n=0}^{∞} (-1)^n A^n   (187)
When |λ_i| < 1 for all eigenvalues λ_i, it holds that A^n → 0 for n → ∞, and the following approximations hold:
(I - A)^{-1} ≅ I + A + A^2   (188)
(I + A)^{-1} ≅ I - A + A^2   (189)
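
The Neumann series (186) and the second-order approximation (188) can be checked whenever the eigenvalues of A lie inside the unit circle, e.g. by drawing A with small entries. An editorial NumPy sketch:

    # Check of (186) and (188); eigenvalues of A must lie inside the unit circle.
    import numpy as np

    rng = np.random.default_rng(6)
    A = 0.1 * rng.standard_normal((4, 4))    # small entries => |eig| < 1

    exact = np.linalg.inv(np.eye(4) - A)
    series = sum(np.linalg.matrix_power(A, n) for n in range(50))   # (186), truncated
    approx = np.eye(4) + A + A @ A                                  # (188)

    assert np.allclose(exact, series)
    print(np.abs(exact - approx).max())      # small; the error is third order in A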

The following approximation is from [22] and holds when A is large and symmetric:
A - A(I + A)^{-1} A ≅ I - A^{-1}   (190)
If σ^2 is small compared to Q and M then
(Q + σ^2 M)^{-1} ≅ Q^{-1} - σ^2 Q^{-1} M Q^{-1}   (191)
Proof:
(Q + σ^2 M)^{-1}   (192)
 = (Q Q^{-1} Q + σ^2 M Q^{-1} Q)^{-1}   (193)
 = ((I + σ^2 M Q^{-1}) Q)^{-1}   (194)
 = Q^{-1} (I + σ^2 M Q^{-1})^{-1}   (195)
This can be rewritten using the Taylor expansion:
Q^{-1} (I + σ^2 M Q^{-1})^{-1}   (196)
 = Q^{-1} (I - σ^2 M Q^{-1} + (σ^2 M Q^{-1})^2 - ...) ≅ Q^{-1} - σ^2 Q^{-1} M Q^{-1}   (197)

3.5 Generalized Inverse

3.5.1 Definition

A generalized inverse matrix of the matrix A is any matrix A^- such that (see [26])
A A^- A = A   (198)
The matrix A^- is not unique.

3.6 Pseudo Inverse

3.6.1 Definition

The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A^+ that fulfils
I:   A A^+ A = A
II:  A^+ A A^+ = A^+
III: A A^+ symmetric
IV:  A^+ A symmetric
The matrix A^+ is unique and always exists. Note that in the case of complex matrices, the symmetric condition is substituted by a condition of being Hermitian.

3.6.2 Properties

Assume A^+ to be the pseudo-inverse of A, then (see [3] for some of them):
(A^+)^+ = A   (199)
(A^T)^+ = (A^+)^T   (200)
(A^H)^+ = (A^+)^H   (201)
(A^*)^+ = (A^+)^*   (202)
(A^+ A) A^H = A^H   (203)
(A^+ A) A^T ≠ A^T   (204)
(cA)^+ = (1/c) A^+   (205)
A^+ = (A^T A)^+ A^T   (206)
A^+ = A^T (A A^T)^+   (207)
(A^T A)^+ = A^+ (A^T)^+   (208)
(A A^T)^+ = (A^T)^+ A^+   (209)
A^+ = (A^H A)^+ A^H   (210)
A^+ = A^H (A A^H)^+   (211)
(A^H A)^+ = A^+ (A^H)^+   (212)
(A A^H)^+ = (A^H)^+ A^+   (213)
(AB)^+ = (A^+ A B)^+ (A B B^+)^+   (214)
f(A^H A) - f(0)I = A^+ [f(A A^H) - f(0)I] A   (215)
f(A A^H) - f(0)I = A [f(A^H A) - f(0)I] A^+   (216)
where A ∈ C^{n x m}.

Assume A to have full rank, then
(A A^+)(A A^+) = A A^+   (217)
(A^+ A)(A^+ A) = A^+ A   (218)
Tr(A A^+) = rank(A A^+)   (see [6])   (219)
Tr(A^+ A) = rank(A^+ A)   (see [6])   (220)

For two matrices it holds that
(AB)^+ = (A^+ A B)^+ (A B B^+)^+   (221)
(A ⊗ B)^+ = A^+ ⊗ B^+   (222)

3.6.3 Construction

Assume that A has full rank, then
A n x n   square   rank(A) = n   =>   A^+ = A^{-1}
A n x m   broad    rank(A) = n   =>   A^+ = A^T (A A^T)^{-1}
A n x m   tall     rank(A) = m   =>   A^+ = (A^T A)^{-1} A^T
The so-called broad version is also known as the right inverse and the tall version as the left inverse.

Assume A does not have full rank, i.e. A is n x m and rank(A) = r < min(n, m). The pseudo inverse A^+ can be constructed from the singular value decomposition A = U D V^T, by
A^+ = V_r D_r^{-1} U_r^T   (223)
where U_r, D_r, and V_r are the matrices with the degenerated rows and columns deleted. A different way is this: there always exist two matrices C (n x r) and D (r x m) of rank r, such that A = CD. Using these matrices it holds that
A^+ = D^T (D D^T)^{-1} (C^T C)^{-1} C^T   (224)
See [3].
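
Both constructions are straightforward to implement. The sketch below (an editorial addition, assuming NumPy) builds A^+ for a rank-deficient A from its SVD as in (223) and from a rank factorization A = CD as in (224), and compares both against np.linalg.pinv:

    # Pseudo-inverse of a rank-deficient matrix via (223) and (224).
    import numpy as np

    rng = np.random.default_rng(7)
    C = rng.standard_normal((5, 2))
    D = rng.standard_normal((2, 4))
    A = C @ D                               # rank 2 < min(5, 4)

    U, s, Vt = np.linalg.svd(A)
    r = np.sum(s > 1e-10)                   # numerical rank
    A_plus = Vt[:r].T @ np.diag(1 / s[:r]) @ U[:, :r].T    # V_r D_r^{-1} U_r^T, (223)
    assert np.allclose(A_plus, np.linalg.pinv(A))

    # The alternative construction (224) via A = CD gives the same matrix.
    A_plus2 = D.T @ np.linalg.inv(D @ D.T) @ np.linalg.inv(C.T @ C) @ C.T
    assert np.allclose(A_plus, A_plus2)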

4 Complex Matrices

The complex scalar product r = pq can be written as
[ℜr ; ℑr] = [ℜp  -ℑp ; ℑp  ℜp] [ℜq ; ℑq]   (225)

4.1 Complex Derivatives

In order to differentiate an expression f(z) with respect to a complex z, the Cauchy-Riemann equations have to be satisfied ([7]):
df(z)/dz = ∂ℜ(f(z))/∂ℜz + i ∂ℑ(f(z))/∂ℜz   (226)
and
df(z)/dz = -i ∂ℜ(f(z))/∂ℑz + ∂ℑ(f(z))/∂ℑz   (227)
or in a more compact form:
∂f(z)/∂ℑz = i ∂f(z)/∂ℜz.   (228)
A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([4], [6]):

Generalized Complex Derivative:
df(z)/dz = (1/2)(∂f(z)/∂ℜz - i ∂f(z)/∂ℑz).   (229)

Conjugate Complex Derivative:
df(z)/dz^* = (1/2)(∂f(z)/∂ℜz + i ∂f(z)/∂ℑz).   (230)

The Generalized Complex Derivative equals the normal derivative when f is an analytic function. For a non-analytic function such as f(z) = z^*, the derivative equals zero. The Conjugate Complex Derivative equals zero when f is an analytic function. The Conjugate Complex Derivative has e.g. been used by [1] when deriving a complex gradient. Notice:
df(z)/dz ≠ ∂f(z)/∂ℜz + i ∂f(z)/∂ℑz.   (231)

Complex Gradient Vector: If f is a real function of a complex vector z, then the complex gradient vector is given by ([14, p. 798])
∇f(z) = 2 df(z)/dz^* = ∂f(z)/∂ℜz + i ∂f(z)/∂ℑz.   (232)

Complex Gradient Matrix: If f is a real function of a complex matrix Z, then the complex gradient matrix is given by ([2])
∇f(Z) = 2 df(Z)/dZ^* = ∂f(Z)/∂ℜZ + i ∂f(Z)/∂ℑZ.   (233)
These expressions can be used for gradient descent algorithms.

4.1.1 The Chain Rule for complex numbers

The chain rule is a little more complicated when the function of a complex u = f(x) is non-analytic. For a non-analytic function, the following chain rule can be applied ([7]):
∂g(u)/∂x = (∂g/∂u)(∂u/∂x) + (∂g/∂u^*)(∂u^*/∂x)
 = (∂g/∂u)(∂u/∂x) + (∂g^*/∂u)^* (∂u^*/∂x)   (234)
Notice, if the function is analytic, the second term reduces to zero, and the rule reduces to the normal well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way:
∂g(U)/∂X = Tr[(∂g(U)/∂U)^T ∂U]/∂X + Tr[(∂g(U)/∂U^*)^T ∂U^*]/∂X.   (235)

4.1.2 Complex Derivatives of Traces

If the derivatives involve complex numbers, the conjugate transpose is often involved. The most useful way to show a complex derivative is to show the derivative with respect to the real and the imaginary part separately. An easy example is:
∂Tr(X^*)/∂ℜX = ∂Tr(X^H)/∂ℜX = I   (236)
i ∂Tr(X^*)/∂ℑX = i ∂Tr(X^H)/∂ℑX = I   (237)
Since the two results have the same sign, the conjugate complex derivative (230) should be used.
∂Tr(X)/∂ℜX = ∂Tr(X^T)/∂ℜX = I   (238)
i ∂Tr(X)/∂ℑX = i ∂Tr(X^T)/∂ℑX = -I   (239)
Here, the two results have different signs, and the generalized complex derivative (229) should be used. Hereby, it can be seen that (100) holds even if X is a complex number.
∂Tr(A X^H)/∂ℜX = A   (240)
i ∂Tr(A X^H)/∂ℑX = A   (241)

∂Tr(A X^*)/∂ℜX = A^T   (242)
i ∂Tr(A X^*)/∂ℑX = A^T   (243)
∂Tr(X X^H)/∂ℜX = ∂Tr(X^H X)/∂ℜX = 2ℜX   (244)
i ∂Tr(X X^H)/∂ℑX = i ∂Tr(X^H X)/∂ℑX = i2ℑX   (245)
By inserting (244) and (245) in (229) and (230), it can be seen that
∂Tr(X X^H)/∂X = X^*   (246)
∂Tr(X X^H)/∂X^* = X   (247)
Since the function Tr(X X^H) is a real function of the complex matrix X, the complex gradient matrix (233) is given by
∇Tr(X X^H) = 2 ∂Tr(X X^H)/∂X^* = 2X   (248)

4.1.3 Complex Derivative Involving Determinants

Here, a calculation example is provided. The objective is to find the derivative of det(X^H A X) with respect to X ∈ C^{m x n}. The derivative is found with respect to the real part and the imaginary part of X, by use of (42) and (37). det(X^H A X) can be calculated as (see App. B.1.4 for details)
∂det(X^H A X)/∂X = (1/2)(∂det(X^H A X)/∂ℜX - i ∂det(X^H A X)/∂ℑX)
 = det(X^H A X) ((X^H A X)^{-1} X^H A)^T   (249)
and the complex conjugate derivative yields
∂det(X^H A X)/∂X^* = (1/2)(∂det(X^H A X)/∂ℜX + i ∂det(X^H A X)/∂ℑX)
 = det(X^H A X) A X (X^H A X)^{-1}   (250)

4.2 Higher order and non-linear derivatives

∂/∂x ((Ax)^H(Ax))/((Bx)^H(Bx)) = ∂/∂x (x^H A^H A x)/(x^H B^H B x)   (251)
 = 2 (A^H A x)/(x^H B^H B x) - 2 (x^H A^H A x)(B^H B x)/(x^H B^H B x)^2   (252)
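
The real/imaginary-part derivatives (244)-(245) and the gradient (248) can be verified by finite differences on f(X) = Tr(X X^H). An editorial NumPy sketch:

    # Check of (244)-(248) for the real function f(X) = Tr(X X^H).
    import numpy as np

    rng = np.random.default_rng(8)
    X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    eps = 1e-6

    f = lambda Z: np.trace(Z @ Z.conj().T).real
    dRe = np.zeros((3, 3))
    dIm = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3))
            E[i, j] = eps
            dRe[i, j] = (f(X + E) - f(X - E)) / (2 * eps)            # d f / d Re X
            dIm[i, j] = (f(X + 1j * E) - f(X - 1j * E)) / (2 * eps)  # d f / d Im X

    # (244)-(245): the derivatives are 2 Re X and 2 Im X; (248): dRe + i dIm = 2X.
    assert np.allclose(dRe, 2 * X.real, atol=1e-4)
    assert np.allclose(dIm, 2 * X.imag, atol=1e-4)
    assert np.allclose(dRe + 1j * dIm, 2 * X, atol=1e-4)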
