SF2940: Probability theory Lecture 8: Multivariate Normal Distribution


1 SF2940: Probability theory, Lecture 8: Multivariate Normal Distribution. Timo Koski, Matematisk statistik.

2 Learning outcomes
- Random vectors, mean vector, covariance matrix, rules of transformation
- Multivariate normal random vectors, moment generating functions, characteristic functions, rules of transformation
- Density of a multivariate normal random vector
- Joint PDF of bivariate normal random variables
- Conditional distributions in a multivariate normal distribution

3 PART 1: Mean vector, Covariance matrix, MGF, Characteristic function

4 Vector Notation: Random Vector
A random vector $X$ is a column vector
$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix} = (X_1, X_2, \ldots, X_n)^T.$$
Each $X_i$ is a random variable.

5 Sample Value of a Random Vector
A column vector
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = (x_1, x_2, \ldots, x_n)^T.$$
We can think of $x_i$ as an outcome of $X_i$.

6 Joint CDF, Joint PDF
The joint CDF (cumulative distribution function) of a continuous random vector $X$ is
$$F_X(x) = F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = P(X \le x) = P(X_1 \le x_1, \ldots, X_n \le x_n).$$
The joint probability density function (PDF) is
$$f_X(x) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} F_{X_1,\ldots,X_n}(x_1,\ldots,x_n).$$

7 Mean Vector
$$\mu_X = E[X] = \begin{pmatrix} E[X_1] \\ E[X_2] \\ \vdots \\ E[X_n] \end{pmatrix},$$
a column vector of the means (expectations) of the components of $X$.

8 Matrix, Scalar Product
If $X^T$ is the transposed column vector (a row vector), then $XX^T$ is an $n \times n$ matrix, and
$$X^T X = \sum_{i=1}^n X_i^2$$
is a scalar product, a real-valued random variable.

9 Covariance Matrix of a Random Vector
The covariance matrix is
$$C_X := E\left[(X - \mu_X)(X - \mu_X)^T\right],$$
where the element $(i,j)$ is the covariance of $X_i$ and $X_j$:
$$C_X(i,j) = E[(X_i - \mu_i)(X_j - \mu_j)].$$
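
A minimal numerical sketch of these definitions (assuming NumPy; the sample size and dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 3))            # 100000 samples of a 3-dimensional vector

mu = X.mean(axis=0)                          # sample mean vector
C = (X - mu).T @ (X - mu) / (len(X) - 1)     # sample version of E[(X - mu)(X - mu)^T]

# np.cov expects one variable per column when rowvar=False
print(np.allclose(C, np.cov(X, rowvar=False)))   # True
```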

10 Remarks on Covariance
$X$ and $Y$ independent $\Rightarrow \mathrm{Cov}(X,Y) = 0$. The converse implication is not true in general, as shown in the next example. Let $X \sim N(0,1)$ and set $Y = X^2$. Then $Y$ is clearly functionally dependent on $X$, but
$$\mathrm{Cov}(X,Y) = E[XY] - E[X]\,E[Y] = E[X^3] - 0 \cdot E[Y] = E[X^3] = 0.$$
The last equality holds since $g(x) = x^3\varphi(x)$ satisfies $g(-x) = -g(x)$, so
$$E[X^3] = \int_{-\infty}^{+\infty} g(x)\,dx = 0,$$
cf. the sequel, too.
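
A quick Monte Carlo illustration of this example (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(1_000_000)
Y = X**2                          # Y is functionally dependent on X

print(np.cov(X, Y)[0, 1])         # close to 0: uncorrelated despite dependence
print(np.all(Y == X**2))          # True: knowing X pins down Y exactly
```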

11 A Quadratic Form
We see that
$$x^T C_X x = \sum_{i=1}^n \sum_{j=1}^n x_i x_j\, C_X(i,j) = \sum_{i=1}^n \sum_{j=1}^n x_i x_j\, E[(X_i - \mu_i)(X_j - \mu_j)] = E\left[\sum_{i=1}^n \sum_{j=1}^n x_i x_j (X_i - \mu_i)(X_j - \mu_j)\right]. \qquad (*)$$

12 Properties of a Covariance Matrix
The covariance matrix is nonnegative definite, i.e., for all $x$ we have $x^T C_X x \ge 0$. Hence $\det C_X \ge 0$. The covariance matrix is symmetric, $C_X = C_X^T$.

13 Properties of a Covariance Matrix
The covariance matrix is symmetric, $C_X = C_X^T$, since
$$C_X(i,j) = E[(X_i - \mu_i)(X_j - \mu_j)] = E[(X_j - \mu_j)(X_i - \mu_i)] = C_X(j,i).$$

14 Properties of a Covariance Matrix
A covariance matrix is positive definite, $x^T C_X x > 0$ for all $x \ne 0$, iff $\det C_X > 0$ (i.e., iff $C_X$ is invertible).

15 Properties of a Covariance Matrix
Proposition: $x^T C_X x \ge 0$.
Pf: By $(*)$ above,
$$x^T C_X x = x^T E\left[(X - \mu_X)(X - \mu_X)^T\right] x = E\left[x^T (X - \mu_X)(X - \mu_X)^T x\right] = E\left[x^T w\, w^T x\right],$$
where we have set $w = X - \mu_X$. Then by linear algebra $x^T w = w^T x = \sum_{i=1}^n w_i x_i$. Hence
$$E\left[x^T w\, w^T x\right] = E\left[\left(\sum_{i=1}^n w_i x_i\right)^2\right] \ge 0.$$

16 Properties of a Covariance Matrix
In terms of the entries $c_{i,j}$ of a covariance matrix $C = (c_{i,j})_{i,j=1}^{n}$, there are the following necessary properties.
1. $c_{i,j} = c_{j,i}$ (symmetry).
2. $c_{i,i} = \mathrm{Var}(X_i) = \sigma_i^2 \ge 0$ (the elements on the main diagonal are the variances, and thus all elements on the main diagonal are nonnegative).
3. $c_{i,j}^2 \le c_{i,i}\, c_{j,j}$ (Cauchy-Schwarz inequality).

17 Coefficient of Correlation
The coefficient of correlation $\rho$ of $X$ and $Y$ is defined as
$$\rho := \rho_{X,Y} := \frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}(X)}\sqrt{\mathrm{Var}(Y)}},$$
where $\mathrm{Cov}(X,Y) = E[(X - \mu_X)(Y - \mu_Y)]$. This is normalized: for random variables $X$ and $Y$,
$$-1 \le \rho_{X,Y} \le 1.$$
$\mathrm{Cov}(X,Y) = \rho_{X,Y} = 0$ does not always mean that $X, Y$ are independent.

18 Special Case: Covariance Matrix of a Bivariate Vector
$X = (X_1, X_2)^T$:
$$C_X = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix},$$
where $\rho$ is the coefficient of correlation of $X_1$ and $X_2$, and $\sigma_1^2 = \mathrm{Var}(X_1)$, $\sigma_2^2 = \mathrm{Var}(X_2)$. $C_X$ is invertible iff $\rho^2 \ne 1$; for proof we note that
$$\det C_X = \sigma_1^2 \sigma_2^2 \left(1 - \rho^2\right).$$

19 Special Case: Covariance Matrix of a Bivariate Vector
If $\rho^2 \ne 1$, the inverse of
$$\Lambda = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$
exists and is
$$\Lambda^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)} \begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{pmatrix}.$$

20 Y = BX + b
Proposition: $X$ is a random vector with mean vector $\mu_X$ and covariance matrix $C_X$, and $B$ is an $m \times n$ matrix. If $Y = BX + b$, then
$$E[Y] = B\mu_X + b, \qquad C_Y = B C_X B^T.$$
Pf: For simplicity of writing, take $b = \mu = 0$. Then
$$C_Y = E[YY^T] = E[BX(BX)^T] = E[BXX^T B^T] = B\,E[XX^T]\,B^T = B C_X B^T.$$
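
A numerical sanity check of these transformation rules (a sketch assuming NumPy; B, b and the sampling scheme are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
B = rng.normal(size=(m, n))                  # arbitrary m x n matrix
b = rng.normal(size=m)

X = rng.normal(size=(10_000, n)) @ rng.normal(size=(n, n))  # correlated samples
Y = X @ B.T + b                              # Y = BX + b, applied row-wise

# both identities hold exactly for sample moments as well
print(np.allclose(Y.mean(axis=0), B @ X.mean(axis=0) + b))
print(np.allclose(np.cov(Y, rowvar=False), B @ np.cov(X, rowvar=False) @ B.T))
```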

21 Moment Generating and Characteristic Functions
Definition: The moment generating function of $X$ is defined as
$$\psi_X(t) \overset{\text{def}}{=} E\,e^{t^T X} = E\,e^{t_1 X_1 + t_2 X_2 + \cdots + t_n X_n}.$$
Definition: The characteristic function of $X$ is defined as
$$\varphi_X(t) \overset{\text{def}}{=} E\,e^{i t^T X} = E\,e^{i(t_1 X_1 + t_2 X_2 + \cdots + t_n X_n)}.$$
Special case: take $t_2 = t_3 = \ldots = t_n = 0$; then $\varphi_X(t) = \varphi_{X_1}(t_1)$.

22 PART 2: Def. I of a multivariate normal distribution
We first recall some of the properties of the univariate normal distribution.

23 Normal (Gaussian) One-dimensional RVs
$X$ is a normal random variable if
$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2},$$
where $\mu$ is real and $\sigma > 0$. Notation: $X \sim N(\mu, \sigma^2)$.
Properties: $E(X) = \mu$, $\mathrm{Var}(X) = \sigma^2$.

24 Normal (Gaussian) One-dimensional RVs
[Figure: the densities $f_X(x)$ for (a) $\mu = 2$, $\sigma = 1/2$ and (b) $\mu = 2$, $\sigma = 2$.]

25 Central Moments of Normal (Gaussian) One-dimensional RVs
$X \sim N(0, \sigma^2)$. Then
$$E[X^n] = \begin{cases} 0, & n \text{ odd}, \\ \dfrac{(2k)!}{2^k k!}\,\sigma^{2k}, & n = 2k,\; k = 0, 1, 2, \ldots \end{cases}$$

26 Linear Transformation
$X \sim N(\mu_X, \sigma_X^2)$ implies that $Y = aX + b$ is $N(a\mu_X + b, a^2\sigma_X^2)$. Thus
$$Z = \frac{X - \mu_X}{\sigma_X} \sim N(0,1)$$
and
$$P(X \le x) = P\left(\frac{X - \mu_X}{\sigma_X} \le \frac{x - \mu_X}{\sigma_X}\right),$$
or
$$F_X(x) = P\left(Z \le \frac{x - \mu_X}{\sigma_X}\right) = \Phi\left(\frac{x - \mu_X}{\sigma_X}\right).$$
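
A small check of the standardization identity (a sketch assuming SciPy; the values of $\mu$, $\sigma$ and $x$ are illustrative):

```python
from scipy.stats import norm

mu, sigma, x = 2.0, 0.5, 2.7
print(norm.cdf(x, loc=mu, scale=sigma))   # F_X(x) directly
print(norm.cdf((x - mu) / sigma))         # Phi((x - mu)/sigma), the same value
```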

27 Normal (Gaussian) One-dimensional RVs
If $X \sim N(\mu, \sigma^2)$, then the moment generating function is
$$\psi_X(t) = E\left[e^{tX}\right] = e^{t\mu + \frac{1}{2}t^2\sigma^2},$$
and the characteristic function is
$$\varphi_X(t) = E\left[e^{itX}\right] = e^{it\mu - \frac{1}{2}t^2\sigma^2},$$
as found in previous lectures.
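
A quick Monte Carlo check of the MGF formula (a sketch assuming NumPy; $\mu$, $\sigma$, $t$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, t = 0.5, 1.2, 0.3
X = rng.normal(mu, sigma, size=2_000_000)

print(np.mean(np.exp(t * X)))                   # Monte Carlo estimate of E[e^{tX}]
print(np.exp(t * mu + 0.5 * t**2 * sigma**2))   # closed form e^{t mu + t^2 sigma^2 / 2}
```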

28 Multivariate Normal, Def. I
Definition: An $n \times 1$ random vector $X$ has a normal distribution iff for every $n \times 1$ vector $a$ the one-dimensional random variable $a^T X$ has a normal distribution. We write $X \sim N(\mu, \Lambda)$, where $\mu$ is the mean vector and $\Lambda$ is the covariance matrix.

29 Consequences of Def. I (1)
An $n \times 1$ vector $X \sim N(\mu, \Lambda)$ iff the one-dimensional random variable $a^T X$ has a normal distribution for every $n$-vector $a$. Now we know that (take $B = a^T$ in the preceding)
$$E\left[a^T X\right] = a^T \mu, \qquad \mathrm{Var}\left[a^T X\right] = a^T \Lambda a.$$

30 Consequences of Def. I (2)
Hence, if $Y = a^T X$, then $Y \sim N\left(a^T\mu,\, a^T\Lambda a\right)$ and the moment generating function of $Y$ is
$$\psi_Y(t) = E\left[e^{tY}\right] = e^{t\, a^T \mu + \frac{1}{2} t^2 a^T \Lambda a}.$$
Therefore
$$\psi_X(a) = E\,e^{a^T X} = \psi_Y(1) = e^{a^T \mu + \frac{1}{2} a^T \Lambda a}.$$

31 Consequences of Def. I (3)
Hence we have shown that if $X \sim N(\mu, \Lambda)$, then
$$\psi_X(t) = E\,e^{t^T X} = e^{t^T \mu + \frac{1}{2} t^T \Lambda t}$$
is the moment generating function of $X$.

32 Consequences of Def. I (4)
In the same way we can find that
$$\varphi_X(t) = E\,e^{i t^T X} = e^{i t^T \mu - \frac{1}{2} t^T \Lambda t}$$
is the characteristic function of $X \sim N(\mu, \Lambda)$.

33 Consequences of Def. I (5)
Let $\Lambda$ be a diagonal covariance matrix with the $\lambda_i^2$ on the main diagonal, i.e.,
$$\Lambda = \begin{pmatrix} \lambda_1^2 & 0 & \cdots & 0 \\ 0 & \lambda_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^2 \end{pmatrix}.$$
Proposition: If $X \sim N(\mu, \Lambda)$, then $X_1, X_2, \ldots, X_n$ are independent normal variables.

34 Consequences of Def. I (6)
Pf: $\Lambda$ is diagonal, so the quadratic form becomes a single sum of squares:
$$\varphi_X(t) = e^{i t^T \mu - \frac{1}{2} t^T \Lambda t} = e^{i\sum_{i=1}^n \mu_i t_i - \frac{1}{2}\sum_{i=1}^n \lambda_i^2 t_i^2} = e^{i\mu_1 t_1 - \frac{1}{2}\lambda_1^2 t_1^2}\, e^{i\mu_2 t_2 - \frac{1}{2}\lambda_2^2 t_2^2} \cdots e^{i\mu_n t_n - \frac{1}{2}\lambda_n^2 t_n^2}.$$
This is the product of the characteristic functions of $X_i \sim N\left(\mu_i, \lambda_i^2\right)$, which are thus seen to be independent $N\left(\mu_i, \lambda_i^2\right)$.

35 Kac's Theorem (Thm in LN)
Theorem: $X = (X_1, X_2, \ldots, X_n)^T$. The components $X_1, X_2, \ldots, X_n$ are independent if and only if
$$\varphi_X(s) = E\left[e^{i s^T X}\right] = \prod_{i=1}^n \varphi_{X_i}(s_i),$$
where $\varphi_{X_i}(s_i)$ is the characteristic function of $X_i$.

36 Further Properties of the Multivariate Normal
$X \sim N(\mu, \Lambda)$.
Every component $X_k$ is one-dimensional normal. To prove this we take $a = (0, 0, \ldots, 1, \ldots, 0)^T$ with the $1$ in position $k$, and the conclusion follows by Def. I.
$X_1 + X_2 + \cdots + X_n$ is one-dimensional normal (take $a = (1, 1, \ldots, 1)^T$). Note: the terms in the sum need not be independent.

37 Properties of Multivariate Normal
$X \sim N(\mu, \Lambda)$. Every marginal distribution of $k$ variables ($1 \le k < n$) is normal. To prove this we consider any $k$ variables $X_{i_1}, X_{i_2}, \ldots, X_{i_k}$, take $a$ such that $a_j = 0$ for $j \ne i_1, \ldots, i_k$, and then apply Def. I.

38 Properties of Multivariate Normal
Proposition: $X \sim N(\mu, \Lambda)$ and $Y = BX + b$. Then
$$Y \sim N\left(B\mu + b,\, B\Lambda B^T\right).$$
Pf:
$$\psi_Y(s) = E\left[e^{s^T Y}\right] = E\left[e^{s^T(b + BX)}\right] = e^{s^T b}\, E\left[e^{s^T BX}\right] = e^{s^T b}\, E\left[e^{(B^T s)^T X}\right] = e^{s^T b}\,\psi_X\left(B^T s\right).$$

39 Properties of Multivariate Normal
$X \sim N(\mu, \Lambda)$:
$$\psi_X\left(B^T s\right) = e^{(B^T s)^T \mu + \frac{1}{2}(B^T s)^T \Lambda (B^T s)}.$$
Since $(B^T s)^T \mu = s^T B\mu$ and $(B^T s)^T \Lambda (B^T s) = s^T B\Lambda B^T s$,
$$e^{(B^T s)^T \mu + \frac{1}{2}(B^T s)^T \Lambda (B^T s)} = e^{s^T B\mu + \frac{1}{2} s^T B\Lambda B^T s}.$$

40 Properties of Multivariate Normal
$$\psi_X\left(B^T s\right) = e^{s^T B\mu + \frac{1}{2} s^T B\Lambda B^T s},$$
so that
$$\psi_Y(s) = e^{s^T b}\,\psi_X\left(B^T s\right) = e^{s^T b}\, e^{s^T B\mu + \frac{1}{2} s^T B\Lambda B^T s} = e^{s^T(b + B\mu) + \frac{1}{2} s^T B\Lambda B^T s},$$
which proves the claim as asserted.

41 PART 3: Multivariate normal, Def. II: characteristic function, Def. III: density

42 Multivariate Normal, Def. II: Characteristic Function
Definition: A random vector $X$ with mean vector $\mu$ and covariance matrix $\Lambda$ is $N(\mu, \Lambda)$ if its characteristic function is
$$\varphi_X(t) = E\,e^{i t^T X} = e^{i t^T \mu - \frac{1}{2} t^T \Lambda t}.$$

43 Multivariate Normal, Def. II Implies Def. I
We need to show that the one-dimensional random variable $Y = a^T X$ has a normal distribution:
$$\varphi_Y(t) = E\left[e^{itY}\right] = E\left[e^{it\sum_{i=1}^n a_i X_i}\right] = E\left[e^{it\, a^T X}\right] = \varphi_X(ta) = e^{it\, a^T \mu - \frac{1}{2} t^2 a^T \Lambda a},$$
and this is the characteristic function of $N\left(a^T \mu,\, a^T \Lambda a\right)$.

44 Multivariate Normal, Def. III: Joint PDF
Definition: A random vector $X$ with mean vector $\mu$ and an invertible covariance matrix $\Lambda$ is $N(\mu, \Lambda)$ if the density is
$$f_X(x) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Lambda)}}\, e^{-\frac{1}{2}(x-\mu)^T \Lambda^{-1} (x-\mu)}.$$
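
The density formula can be checked against SciPy's implementation (a sketch; the values of $\mu$, $\Lambda$ and $x$ are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -2.0])
Lam = np.array([[2.0, 0.6],
                [0.6, 1.0]])
x = np.array([0.5, -1.0])

n = len(mu)
d = x - mu
pdf_manual = (np.exp(-0.5 * d @ np.linalg.inv(Lam) @ d)
              / np.sqrt((2 * np.pi)**n * np.linalg.det(Lam)))

print(pdf_manual)
print(multivariate_normal(mean=mu, cov=Lam).pdf(x))   # same value
```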

45 Multivariate Normal
It can be checked by a computation (complete the square) that
$$e^{i t^T \mu - \frac{1}{2} t^T \Lambda t} = \int_{\mathbb{R}^n} e^{i t^T x}\, \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Lambda)}}\, e^{-\frac{1}{2}(x-\mu)^T \Lambda^{-1}(x-\mu)}\, dx.$$
Hence Def. III implies the property in Def. II. The three definitions are equivalent in the case where the inverse of the covariance matrix exists.

46 PART 4: Bivariate normal with density

47 Multivariate Normal: the Bivariate Case
As soon as $\rho^2 \ne 1$, the matrix
$$\Lambda = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$
is invertible, and the inverse is
$$\Lambda^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)} \begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{pmatrix}.$$

48 Multivariate Normal: the Bivariate Case
If $\rho^2 \ne 1$ and $X = (X_1, X_2)^T$, then
$$f_X(x) = \frac{1}{2\pi\sqrt{\det\Lambda}}\, e^{-\frac{1}{2}(x - \mu_X)^T \Lambda^{-1}(x - \mu_X)} = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, e^{-\frac{1}{2}Q(x_1, x_2)},$$

49 Multivariate Normal: the Bivariate Case
where
$$Q(x_1, x_2) = \frac{1}{1-\rho^2}\left[\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2 - 2\rho\,\frac{(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2} + \left(\frac{x_2-\mu_2}{\sigma_2}\right)^2\right].$$
For this, invert the matrix $\Lambda$ and expand the quadratic form!
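
A sketch verifying numerically that the expanded form $Q$ agrees with the matrix quadratic form $(x-\mu)^T\Lambda^{-1}(x-\mu)$ (assuming NumPy, with illustrative parameters):

```python
import numpy as np

s1, s2, rho = 2.0, 0.5, 0.6
mu = np.array([1.0, -1.0])
Lam = np.array([[s1**2,     rho*s1*s2],
                [rho*s1*s2, s2**2    ]])

x = np.array([0.3, -0.2])
d = x - mu
quad = d @ np.linalg.inv(Lam) @ d             # (x - mu)^T Lam^{-1} (x - mu)

z1, z2 = d[0] / s1, d[1] / s2
Q = (z1**2 - 2*rho*z1*z2 + z2**2) / (1 - rho**2)
print(quad, Q)                                # equal up to rounding
```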

50-52 [Figures: the joint bivariate normal density plotted for three different values of $\rho$.]

53 Conditional Densities for the Bivariate Normal
Complete the square of the exponent to write
$$f_{X,Y}(x,y) = f_X(x)\, f_{Y\mid X}(y),$$
where
$$f_X(x) = \frac{1}{\sigma_1\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma_1^2}(x-\mu_1)^2}, \qquad f_{Y\mid X}(y) = \frac{1}{\tilde\sigma_2\sqrt{2\pi}}\, e^{-\frac{1}{2\tilde\sigma_2^2}(y-\mu_2(x))^2},$$
with
$$\mu_2(x) = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1), \qquad \tilde\sigma_2 = \sigma_2\sqrt{1-\rho^2}.$$

54 Bivariate Normal Properties
$E(X) = \mu_1$.
Given $X = x$, $Y$ is Gaussian.
Conditional mean of $Y$ given $X = x$:
$$\mu_2(x) = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1) = E(Y \mid X = x).$$
Conditional variance of $Y$ given $X = x$:
$$\mathrm{Var}(Y \mid X = x) = \sigma_2^2\left(1-\rho^2\right).$$
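
These conditional formulas can be illustrated by conditioning a large sample on $X \approx x$ (a sketch assuming NumPy; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
mu1, mu2, s1, s2, rho = 1.0, -1.0, 2.0, 0.5, 0.8
cov = np.array([[s1**2,     rho*s1*s2],
                [rho*s1*s2, s2**2    ]])
X, Y = rng.multivariate_normal([mu1, mu2], cov, size=2_000_000).T

x = 2.0
mask = np.abs(X - x) < 0.05                        # keep samples with X close to x
print(Y[mask].mean(), mu2 + rho*s2/s1*(x - mu1))   # conditional mean vs formula
print(Y[mask].var(),  s2**2 * (1 - rho**2))        # conditional variance vs formula
```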

55 Bivariate Normal Properties
Conditional mean of $Y$ given $X = x$:
$$\mu_2(x) = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1) = E(Y \mid X = x).$$
Conditional variance of $Y$ given $X = x$:
$$\mathrm{Var}(Y \mid X = x) = \sigma_2^2\left(1-\rho^2\right).$$
Check Section and Exercise in LN. By this it is seen that the conditional mean of $Y$ given $X$ in a bivariate normal distribution is also the best LINEAR predictor of $Y$ based on $X$, and the conditional variance is the variance of the estimation error.

56 Marginal PDFs

57 Proof of Conditional PDF
Consider
$$\frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{\sigma_1\sqrt{2\pi}}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\; e^{-\frac{1}{2}Q(x,y) + \frac{1}{2\sigma_1^2}(x-\mu_1)^2}.$$

58 Proof of Conditional PDF
Write
$$-\frac{1}{2}Q(x,y) + \frac{1}{2\sigma_1^2}(x-\mu_1)^2 = -\frac{1}{2}H(x,y),$$

59 Proof of Conditional PDFs
$$H(x,y) = \frac{1}{1-\rho^2}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho\,\frac{(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \left(\frac{y-\mu_2}{\sigma_2}\right)^2\right] - \left(\frac{x-\mu_1}{\sigma_1}\right)^2.$$

60 Proof of Conditional PDF
$$H(x,y) = \frac{\rho^2(x-\mu_1)^2}{(1-\rho^2)\,\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2(1-\rho^2)} + \frac{(y-\mu_2)^2}{\sigma_2^2(1-\rho^2)}.$$

61 Proof of Conditional PDF
$$H(x,y) = \frac{\left(y - \mu_2 - \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1)\right)^2}{\sigma_2^2\left(1-\rho^2\right)}.$$

62 Conditional PDF
$$\frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{1}{\sqrt{1-\rho^2}\,\sigma_2\sqrt{2\pi}}\; e^{-\frac{1}{2}\,\frac{\left(y - \mu_2 - \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1)\right)^2}{\sigma_2^2(1-\rho^2)}}.$$
This establishes the bivariate normal properties claimed above.

63 Bivariate Normal Properties: ρ
Proposition: $(X,Y)$ bivariate normal $\Rightarrow \rho = \rho_{X,Y}$.
Proof:
$$E[(X-\mu_1)(Y-\mu_2)] = E\big(E[(X-\mu_1)(Y-\mu_2) \mid X]\big) = E\big((X-\mu_1)\,E[(Y-\mu_2) \mid X]\big)$$

64 Bivariate Normal Properties: ρ
$$= E\big((X-\mu_1)\,[E(Y \mid X) - \mu_2]\big) = E\left((X-\mu_1)\left[\mu_2 + \rho\frac{\sigma_2}{\sigma_1}(X-\mu_1) - \mu_2\right]\right) = \rho\frac{\sigma_2}{\sigma_1}\, E\big((X-\mu_1)(X-\mu_1)\big)$$

65 Bivariate Normal Properties: ρ
$$= \rho\frac{\sigma_2}{\sigma_1}\, E\left[(X-\mu_1)^2\right] = \rho\frac{\sigma_2}{\sigma_1}\,\sigma_1^2 = \rho\,\sigma_2\sigma_1.$$

66 Bivariate Normal Properties: ρ
In other words, we have checked that
$$\rho = \frac{E[(X-\mu_1)(Y-\mu_2)]}{\sigma_2\sigma_1}.$$
For the bivariate normal, $\rho = 0 \Leftrightarrow X, Y$ are independent.

67 PART 5: Generating a multivariate normal variable

68 Standard Normal Vector: Definition
$Z \sim N(0, I)$ is a standard normal vector, where $I$ is the $n \times n$ identity matrix:
$$f_Z(z) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(I)}}\, e^{-\frac{1}{2}(z-0)^T I^{-1}(z-0)} = \frac{1}{(2\pi)^{n/2}}\, e^{-\frac{1}{2} z^T z}.$$

69 Distribution of X = AZ + b
If $X = AZ + b$, where $Z$ is standard Gaussian, then $X \sim N\left(b, AA^T\right)$ (this follows by a rule in the preceding).
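
This is the standard recipe for generating $N(b, \Lambda)$ samples; a sketch assuming NumPy, with the Cholesky factor as one common choice of $A$ satisfying $AA^T = \Lambda$:

```python
import numpy as np

rng = np.random.default_rng(5)
b = np.array([1.0, -2.0])
Lam = np.array([[2.0, 0.6],
                [0.6, 1.0]])

A = np.linalg.cholesky(Lam)          # lower triangular with A @ A.T == Lam
Z = rng.standard_normal((500_000, 2))
X = Z @ A.T + b                      # X = AZ + b, applied row-wise

print(X.mean(axis=0))                # approximately b
print(np.cov(X, rowvar=False))       # approximately Lam
```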

70 Multivariate Normal: the Bivariate Case
If
$$\Lambda = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix},$$
then $\Lambda = AA^T$, where
$$A = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sigma_2\sqrt{1-\rho^2} \end{pmatrix}.$$

71 Standard Normal Vector
$X \sim N(\mu_X, \Lambda)$, and $A$ is such that $\Lambda = AA^T$. (An invertible matrix $A$ with this property always exists if $\Lambda$ is positive definite; we need the symmetry of $\Lambda$, too.) Then
$$Z = A^{-1}(X - \mu_X)$$
is a standard Gaussian vector.
Proof: We give the first idea of this proof, a rule of transformation.

72 Rule of Transformation
If $X$ has density $f_X(x)$ and $Y = AX + b$ with $A$ invertible, then
$$f_Y(y) = \frac{1}{|\det A|}\, f_X\left(A^{-1}(y-b)\right).$$
Note that if $\Lambda = AA^T$, then
$$\det\Lambda = \det A \cdot \det A^T = \det A \cdot \det A = (\det A)^2,$$
so that $|\det A| = \sqrt{\det\Lambda}$.

73 Diagonalizable Matrices
An $n \times n$ matrix $A$ is orthogonally diagonalizable if there is an orthogonal matrix $P$ (i.e., $P^T P = PP^T = I$) such that
$$P^T A P = \Lambda,$$
where $\Lambda$ is a diagonal matrix.

74 Diagonalizable Matrices
Theorem: If $A$ is an $n \times n$ matrix, then the following are equivalent:
(i) $A$ is orthogonally diagonalizable.
(ii) $A$ has an orthonormal set of eigenvectors.
(iii) $A$ is symmetric.
Since covariance matrices are symmetric, we have by the theorem above that all covariance matrices are orthogonally diagonalizable.

75 Diagonalizable Matrices
Theorem: If $A$ is a symmetric matrix, then
(i) the eigenvalues of $A$ are all real numbers;
(ii) eigenvectors from different eigenspaces are orthogonal.
In particular, all eigenvalues of a covariance matrix are real.

76 Diagonalizable Matrices
Hence we have for any covariance matrix the spectral decomposition
$$C = \sum_{i=1}^n \lambda_i\, e_i e_i^T, \qquad (1)$$
where $C e_i = \lambda_i e_i$. Since $C$ is nonnegative definite and its eigenvectors are orthonormal,
$$0 \le e_i^T C e_i = \lambda_i\, e_i^T e_i = \lambda_i,$$
and thus the eigenvalues of a covariance matrix are nonnegative.

77 Diagonalizable Matrices
Let now $P$ be an orthogonal matrix such that $P^T C_X P = \Lambda$, and $X \sim N(0, C_X)$, i.e., $C_X$ is a covariance matrix and $\Lambda$ is diagonal (with the eigenvalues of $C_X$ on the main diagonal). Then if $Y = P^T X$, we have that $Y \sim N(0, \Lambda)$. In other words, $Y$ is a Gaussian vector with independent components. This method of producing independent Gaussians has several important applications; one of these is principal component analysis.
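
A sketch of this decorrelation step (assuming NumPy; np.linalg.eigh returns an orthonormal eigenbasis of a symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(6)
C_X = np.array([[2.0, 0.8],
                [0.8, 1.0]])

lam, P = np.linalg.eigh(C_X)         # C_X = P diag(lam) P^T with P orthogonal
X = rng.multivariate_normal([0, 0], C_X, size=500_000)
Y = X @ P                            # Y = P^T X, applied row-wise

print(np.cov(Y, rowvar=False))       # approximately diag(lam): decorrelated,
                                     # hence independent in the Gaussian case
```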

