SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

1 SF2940: Probability theory Lecture 8: Multivariate Normal Distribution. Timo Koski, Matematisk statistik.

2 Learning outcomes
- Random vectors, mean vector, covariance matrix, rules of transformation
- Multivariate normal r.v., moment generating function, characteristic function, rules of transformation
- Density of a multivariate normal r.v.
- Joint PDF of bivariate normal r.v.s
- Conditional distributions in a multivariate normal distribution

3 PART 1: Mean vector, Covariance matrix, MGF, Characteristic function

4 Vector Notation: Random Vector
A random vector $X$ is a column vector
$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix} = (X_1, X_2, \ldots, X_n)^T.$$
Each $X_i$ is a random variable.

5 Sample Value of a Random Vector
A column vector
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = (x_1, x_2, \ldots, x_n)^T.$$
We can think of $x_i$ as an outcome of $X_i$.

6 Joint CDF, Joint PDF
The joint CDF (= cumulative distribution function) of a continuous random vector $X$ is
$$F_X(x) = F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = P(X \le x) = P(X_1 \le x_1, \ldots, X_n \le x_n).$$
The joint probability density function (PDF) is
$$f_X(x) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} F_{X_1,\ldots,X_n}(x_1,\ldots,x_n).$$

7 Mean Vector
$$\mu_X = E[X] = \begin{pmatrix} E[X_1] \\ E[X_2] \\ \vdots \\ E[X_n] \end{pmatrix},$$
a column vector of means (= expectations) of $X$.

8 Matrix, Scalar Product
If $X^T$ is the transposed column vector (= a row vector), then $XX^T$ is an $n \times n$ matrix, and
$$X^T X = \sum_{i=1}^n X_i^2$$
is a scalar product, a real-valued r.v.

9 Covariance Matrix of A Random Vector
The covariance matrix is
$$C_X := E\big[(X - \mu_X)(X - \mu_X)^T\big],$$
where the element $(i,j)$ is the covariance of $X_i$ and $X_j$:
$$C_X(i,j) = E[(X_i - \mu_i)(X_j - \mu_j)].$$
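As a numerical sketch of this definition (not part of the original slides; the mean and covariance values below are illustrative assumptions), one can estimate $C_X$ from samples by averaging the outer products $(X - \mu_X)(X - \mu_X)^T$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw N samples of a 3-dimensional random vector; columns are samples
N = 100_000
true_cov = [[2.0, 0.5, 0.0],
            [0.5, 1.0, 0.3],
            [0.0, 0.3, 1.5]]
X = rng.multivariate_normal(mean=[1.0, 0.0, -2.0], cov=true_cov, size=N).T

mu = X.mean(axis=1, keepdims=True)     # estimate of the mean vector (3 x 1)
C = (X - mu) @ (X - mu).T / N          # estimate of E[(X - mu)(X - mu)^T]
print(C)                               # close to true_cov; C[i, j] = Cov(X_i, X_j)
```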

10 A Quadratic Form
We see that
$$x^T C_X x = \sum_{i=1}^n \sum_{j=1}^n x_i x_j C_X(i,j) = \sum_{i=1}^n \sum_{j=1}^n x_i x_j E[(X_i - \mu_i)(X_j - \mu_j)]$$
$$= E\Big[\sum_{i=1}^n \sum_{j=1}^n x_i x_j (X_i - \mu_i)(X_j - \mu_j)\Big]. \qquad (*)$$

11 Properties of a Covariance Matrix
A covariance matrix is nonnegative definite, i.e., for all $x$ we have
$$x^T C_X x \ge 0.$$
Hence $\det C_X \ge 0$. The covariance matrix is symmetric: $C_X = C_X^T$.

12 Properties of a Covariance Matrix
The covariance matrix is symmetric, $C_X = C_X^T$, since
$$C_X(i,j) = E[(X_i - \mu_i)(X_j - \mu_j)] = E[(X_j - \mu_j)(X_i - \mu_i)] = C_X(j,i).$$

13 Properties of a Covariance Matrix
A covariance matrix is positive definite, $x^T C_X x > 0$ for all $x \ne 0$, iff
$$\det C_X > 0$$
(i.e., iff $C_X$ is invertible).

14 Properties of a Covariance Matrix
Proposition: $x^T C_X x \ge 0$.
Pf: By $(*)$ above,
$$x^T C_X x = x^T E\big[(X - \mu_X)(X - \mu_X)^T\big] x = E\big[x^T (X - \mu_X)(X - \mu_X)^T x\big] = E\big[x^T w\, w^T x\big],$$
where we have set $w = X - \mu_X$. Then by linear algebra $x^T w = w^T x = \sum_{i=1}^n w_i x_i$. Hence
$$E\big[x^T w\, w^T x\big] = E\Big[\Big(\sum_{i=1}^n w_i x_i\Big)^2\Big] \ge 0.$$

15 Properties of a Covariance Matrix
In terms of the entries $c_{i,j}$ of a covariance matrix $C = (c_{i,j})_{i,j=1}^{n,n}$, there are the following necessary properties:
1. $c_{i,j} = c_{j,i}$ (symmetry).
2. $c_{i,i} = \mathrm{Var}(X_i) = \sigma_i^2 \ge 0$ (the elements in the main diagonal are the variances, and thus all elements in the main diagonal are nonnegative).
3. $c_{i,j}^2 \le c_{i,i}\, c_{j,j}$ (Cauchy–Schwarz inequality).

16 Coefficient of Correlation
The coefficient of correlation $\rho$ of $X$ and $Y$ is defined as
$$\rho := \rho_{X,Y} := \frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}, \quad \text{where } \mathrm{Cov}(X,Y) = E[(X - \mu_X)(Y - \mu_Y)].$$
This is normalized: for random variables $X$ and $Y$,
$$-1 \le \rho_{X,Y} \le 1.$$
$\mathrm{Cov}(X,Y) = \rho_{X,Y} = 0$ does not always mean that $X, Y$ are independent.
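A standard illustration of this last point (a sketch added here, not from the slides): take $X \in N(0,1)$ and $Y = X^2$. Then $Y$ is completely determined by $X$, yet $\mathrm{Cov}(X, Y) = E[X^3] = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(1_000_000)
Y = X ** 2                       # Y is a deterministic function of X, hence dependent

print(np.corrcoef(X, Y)[0, 1])   # approximately 0, since Cov(X, X^2) = E[X^3] = 0
```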

17 Special case: Covariance Matrix of A Bivariate Vector
$X = (X_1, X_2)^T$:
$$C_X = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix},$$
where $\rho$ is the coefficient of correlation of $X_1$ and $X_2$, and $\sigma_1^2 = \mathrm{Var}(X_1)$, $\sigma_2^2 = \mathrm{Var}(X_2)$. $C_X$ is invertible iff $\rho^2 \ne 1$; for a proof we note that
$$\det C_X = \sigma_1^2 \sigma_2^2 (1 - \rho^2).$$

18 Special case: Covariance Matrix of A Bivariate Vector
If $\rho^2 \ne 1$, the inverse of
$$\Lambda = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$
exists, and
$$\Lambda^{-1} = \frac{1}{\sigma_1^2 \sigma_2^2 (1 - \rho^2)} \begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{pmatrix}.$$

19 Y = BX + b
Proposition: $X$ is a random vector with mean vector $\mu_X$ and covariance matrix $C_X$; $B$ is an $m \times n$ matrix. If $Y = BX + b$, then
$$E[Y] = B\mu_X + b, \qquad C_Y = B C_X B^T.$$
Pf: For simplicity of writing, take $b = 0$, $\mu = 0$. Then
$$C_Y = E[YY^T] = E[BX(BX)^T] = E[BXX^T B^T] = B\,E[XX^T]\,B^T = B C_X B^T.$$
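The proposition is easy to check by simulation; here is a minimal sketch (the matrices $B$, $b$, $\mu_X$, $C_X$ below are made-up examples, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_X = np.array([1.0, -1.0])
C_X = np.array([[2.0, 0.5],
                [0.5, 1.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])          # m x n with m = 3, n = 2
b = np.array([0.5, 0.0, -2.0])

X = rng.multivariate_normal(mu_X, C_X, size=200_000)
Y = X @ B.T + b                      # sample-wise Y = BX + b

print(Y.mean(axis=0))                # close to B mu_X + b
print(B @ mu_X + b)
print(np.cov(Y.T))                   # close to B C_X B^T
print(B @ C_X @ B.T)
```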

20 Moment Generating and Characteristic Functions
Definition: The moment generating function of $X$ is defined as
$$\psi_X(t) \stackrel{\text{def}}{=} E\,e^{t^T X} = E\,e^{t_1 X_1 + t_2 X_2 + \cdots + t_n X_n}.$$
Definition: The characteristic function of $X$ is defined as
$$\varphi_X(t) \stackrel{\text{def}}{=} E\,e^{i t^T X} = E\,e^{i(t_1 X_1 + t_2 X_2 + \cdots + t_n X_n)}.$$
Special case: take $t = (t_1, 0, \ldots, 0)^T$, i.e. $t_2 = t_3 = \cdots = t_n = 0$; then $\varphi_X(t) = \varphi_{X_1}(t_1)$.

21 PART 2: Def. I of a multivariate normal distribution
We recall first some of the properties of the univariate normal distribution.

22 Normal (Gaussian) One-dimensional RVs
$X$ is a normal random variable if
$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2},$$
where $\mu$ is real and $\sigma > 0$. Notation: $X \in N(\mu, \sigma^2)$.
Properties: $E(X) = \mu$, $\mathrm{Var}(X) = \sigma^2$.

23 Normal (Gaussian) One-dimensional RVs
[Figures: plots of the density $f_X(x)$ for (a) $\mu = 2$, $\sigma = 1/2$ and (b) $\mu = 2$, $\sigma = 2$.]

24 Linear Transformation
$X \in N(\mu_X, \sigma^2)$ $\Rightarrow$ $Y = aX + b$ is $N(a\mu_X + b, a^2\sigma^2)$. Thus
$$Z = \frac{X - \mu_X}{\sigma_X} \in N(0,1)$$
and
$$P(X \le x) = P\Big(\frac{X - \mu_X}{\sigma_X} \le \frac{x - \mu_X}{\sigma_X}\Big),$$
or
$$F_X(x) = P\Big(Z \le \frac{x - \mu_X}{\sigma_X}\Big) = \Phi\Big(\frac{x - \mu_X}{\sigma_X}\Big).$$

25 Normal (Gaussian) One-dimensional RVs
If $X \in N(\mu, \sigma^2)$, then the moment generating function is
$$\psi_X(t) = E\big[e^{tX}\big] = e^{t\mu + \frac{1}{2}t^2\sigma^2},$$
and the characteristic function is
$$\varphi_X(t) = E\big[e^{itX}\big] = e^{it\mu - \frac{1}{2}t^2\sigma^2},$$
as found in previous lectures.

26 Multivariate Normal, Def. I
Definition: An $n \times 1$ random vector $X$ has a normal distribution iff for every $n \times 1$ vector $a$ the one-dimensional random variable $a^T X$ has a normal distribution. We write $X \in N(\mu, \Lambda)$, where $\mu$ is the mean vector and $\Lambda$ is the covariance matrix.

27 Consequences of Def. I (1)
An $n \times 1$ vector $X \in N(\mu, \Lambda)$ iff the one-dimensional random variable $a^T X$ has a normal distribution for every $n$-vector $a$. Now we know that (take $B = a^T$ in the preceding)
$$E\big[a^T X\big] = a^T \mu, \qquad \mathrm{Var}\big[a^T X\big] = a^T \Lambda a.$$

28 Consequences of Def. I (2)
Hence, if $Y = a^T X$, then $Y \in N(a^T\mu,\, a^T\Lambda a)$ and the moment generating function of $Y$ is
$$\psi_Y(t) = E\big[e^{tY}\big] = e^{t a^T\mu + \frac{1}{2}t^2 a^T\Lambda a}.$$
Therefore
$$\psi_X(a) = E\,e^{a^T X} = \psi_Y(1) = e^{a^T\mu + \frac{1}{2}a^T\Lambda a}.$$

29 Consequences of Def. I (3)
Hence we have shown that if $X \in N(\mu, \Lambda)$, then
$$\psi_X(t) = E\,e^{t^T X} = e^{t^T\mu + \frac{1}{2}t^T\Lambda t}$$
is the moment generating function of $X$.
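As a quick sanity check of this formula (not from the slides; the parameter values are assumptions), one can compare a Monte Carlo estimate of $E\,e^{t^T X}$ with the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -0.5])
Lam = np.array([[2.0, 0.6],
                [0.6, 1.0]])
t = np.array([0.3, -0.2])

X = rng.multivariate_normal(mu, Lam, size=500_000)
print(np.exp(X @ t).mean())                  # Monte Carlo estimate of E e^{t^T X}
print(np.exp(t @ mu + 0.5 * t @ Lam @ t))    # closed form e^{t^T mu + t^T Lam t / 2}
```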

30 Consequences of Def. I (4)
In the same way we can find that
$$\varphi_X(t) = E\,e^{i t^T X} = e^{i t^T\mu - \frac{1}{2}t^T\Lambda t}$$
is the characteristic function of $X \in N(\mu, \Lambda)$.

31 Consequences of Def. I (5)
Let $\Lambda$ be a diagonal covariance matrix with the $\lambda_i^2$ on the main diagonal, i.e.,
$$\Lambda = \begin{pmatrix} \lambda_1^2 & 0 & \cdots & 0 \\ 0 & \lambda_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^2 \end{pmatrix}.$$
Proposition: If $X \in N(\mu, \Lambda)$, then $X_1, X_2, \ldots, X_n$ are independent normal variables.

32 Consequences of Def. I (6)
Pf: $\Lambda$ is diagonal, so the quadratic form becomes a single sum of squares:
$$\varphi_X(t) = e^{i t^T\mu - \frac{1}{2}t^T\Lambda t} = e^{i\sum_{i=1}^n \mu_i t_i - \frac{1}{2}\sum_{i=1}^n \lambda_i^2 t_i^2} = e^{i\mu_1 t_1 - \frac{1}{2}\lambda_1^2 t_1^2}\, e^{i\mu_2 t_2 - \frac{1}{2}\lambda_2^2 t_2^2} \cdots e^{i\mu_n t_n - \frac{1}{2}\lambda_n^2 t_n^2},$$
which is the product of the characteristic functions of $X_i \in N(\mu_i, \lambda_i^2)$; the $X_i$ are thus seen to be independent $N(\mu_i, \lambda_i^2)$.

33 Kac's theorem (Thm in LN)
Theorem: $X = (X_1, X_2, \ldots, X_n)$. The components $X_1, X_2, \ldots, X_n$ are independent if and only if
$$\varphi_X(s) = E\big[e^{i s^T X}\big] = \prod_{i=1}^n \varphi_{X_i}(s_i),$$
where $\varphi_{X_i}(s_i)$ is the characteristic function of $X_i$.

34 Further properties of the multivariate normal
$X \in N(\mu, \Lambda)$. Every component $X_k$ is one-dimensional normal. To prove this we take
$$a = (0, 0, \ldots, \underbrace{1}_{\text{position } k}, 0, \ldots, 0)^T$$
and the conclusion follows by Def. I. Also $X_1 + X_2 + \cdots + X_n$ is one-dimensional normal. Note: the terms in the sum need not be independent.

35 Properties of multivariate normal
$X \in N(\mu, \Lambda)$. Every marginal distribution of $k$ variables ($1 \le k < n$) is normal. To prove this we consider any $k$ variables $X_{i_1}, X_{i_2}, \ldots, X_{i_k}$, take $a$ such that $a_j = 0$ for $j \notin \{i_1, \ldots, i_k\}$, and then apply Def. I.

36 Properties of multivariate normal
Proposition: $X \in N(\mu, \Lambda)$ and $Y = BX + b$. Then
$$Y \in N\big(B\mu + b,\; B\Lambda B^T\big).$$
Pf:
$$\psi_Y(s) = E\big[e^{s^T Y}\big] = E\big[e^{s^T(b + BX)}\big] = e^{s^T b}\, E\big[e^{s^T BX}\big] = e^{s^T b}\, E\big[e^{(B^T s)^T X}\big] = e^{s^T b}\, \psi_X\big(B^T s\big).$$

37 Properties of multivariate normal
$X \in N(\mu, \Lambda)$:
$$\psi_X\big(B^T s\big) = e^{(B^T s)^T\mu + \frac{1}{2}(B^T s)^T\Lambda(B^T s)}.$$
Since $(B^T s)^T\mu = s^T B\mu$ and $(B^T s)^T\Lambda(B^T s) = s^T B\Lambda B^T s$, we get
$$e^{(B^T s)^T\mu + \frac{1}{2}(B^T s)^T\Lambda(B^T s)} = e^{s^T B\mu + \frac{1}{2}s^T B\Lambda B^T s}.$$

38 Properties of multivariate normal
$$\psi_X\big(B^T s\big) = e^{s^T B\mu + \frac{1}{2}s^T B\Lambda B^T s},$$
so
$$\psi_Y(s) = e^{s^T b}\, \psi_X\big(B^T s\big) = e^{s^T b}\, e^{s^T B\mu + \frac{1}{2}s^T B\Lambda B^T s} = e^{s^T(b + B\mu) + \frac{1}{2}s^T B\Lambda B^T s},$$
which proves the claim as asserted.

39 PART 3: Multivariate normal, Def. II: characteristic function, Def. III: density

40 Multivariate normal, Def. II: char. fnctn
Definition: A random vector $X$ with mean vector $\mu$ and covariance matrix $\Lambda$ is $N(\mu, \Lambda)$ if its characteristic function is
$$\varphi_X(t) = E\,e^{i t^T X} = e^{i t^T\mu - \frac{1}{2}t^T\Lambda t}.$$

41 Multivariate normal, Def. II implies Def. I
We need to show that the one-dimensional random variable $Y = a^T X$ has a normal distribution:
$$\varphi_Y(t) = E\big[e^{itY}\big] = E\big[e^{it\sum_{i=1}^n a_i X_i}\big] = E\big[e^{it a^T X}\big] = \varphi_X(ta) = e^{it a^T\mu - \frac{1}{2}t^2 a^T\Lambda a},$$
and this is the characteristic function of $N(a^T\mu,\, a^T\Lambda a)$.

42 Multivariate normal, Def. III: joint PDF
Definition: A random vector $X$ with mean vector $\mu$ and an invertible covariance matrix $\Lambda$ is $N(\mu, \Lambda)$ if the density is
$$f_X(x) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Lambda)}}\, e^{-\frac{1}{2}(x-\mu)^T\Lambda^{-1}(x-\mu)}.$$
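A small sketch evaluating this density (not from the slides; it assumes SciPy is available and uses illustrative parameter values), both directly from the formula and via scipy.stats.multivariate_normal as a cross-check:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])
Lam = np.array([[1.0, 0.5],
                [0.5, 2.0]])
x = np.array([0.3, 0.7])

n = len(mu)
d = x - mu
# Def. III written out directly
pdf_direct = np.exp(-0.5 * d @ np.linalg.inv(Lam) @ d) / (
    (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Lam)))

print(pdf_direct)
print(multivariate_normal(mean=mu, cov=Lam).pdf(x))   # same value via scipy
```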

43 Multivariate normal
It can be checked by a computation (complete the square) that
$$e^{i t^T\mu - \frac{1}{2}t^T\Lambda t} = \int_{\mathbb{R}^n} e^{i t^T x}\, \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Lambda)}}\, e^{-\frac{1}{2}(x-\mu)^T\Lambda^{-1}(x-\mu)}\, dx.$$
Hence Def. III implies the property in Def. II. The three definitions are equivalent in the case where the inverse of the covariance matrix exists.

44 PART 4: Bivariate normal with density

45 Multivariate Normal: the bivariate case
As soon as $\rho^2 \ne 1$, the matrix
$$\Lambda = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$
is invertible, and the inverse is
$$\Lambda^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)} \begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{pmatrix}.$$

46 Multivariate Normal: the bivariate case
If $\rho^2 \ne 1$ and $X = (X_1, X_2)^T$, then
$$f_X(x) = \frac{1}{2\pi\sqrt{\det\Lambda}}\, e^{-\frac{1}{2}(x-\mu_X)^T\Lambda^{-1}(x-\mu_X)} = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, e^{-\frac{1}{2}Q(x_1,x_2)}.$$

47 Multivariate Normal: the bivariate case
where
$$Q(x_1,x_2) = \frac{1}{1-\rho^2}\left[\Big(\frac{x_1-\mu_1}{\sigma_1}\Big)^2 - \frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2} + \Big(\frac{x_2-\mu_2}{\sigma_2}\Big)^2\right].$$
For this, invert the matrix $\Lambda$ and expand the quadratic form!

48–50 [Figures: plots of the bivariate normal joint density for three values of $\rho$.]

51 Conditional densities for the bivariate normal
Complete the square of the exponent to write
$$f_{X,Y}(x,y) = f_X(x)\, f_{Y|X}(y),$$
where
$$f_X(x) = \frac{1}{\sigma_1\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma_1^2}(x-\mu_1)^2}, \qquad f_{Y|X}(y) = \frac{1}{\tilde\sigma_2\sqrt{2\pi}}\, e^{-\frac{1}{2\tilde\sigma_2^2}(y-\mu_2(x))^2},$$
with
$$\mu_2(x) = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1), \qquad \tilde\sigma_2 = \sigma_2\sqrt{1-\rho^2}.$$

52 Bivariate normal properties
$E(X) = \mu_1$. Given $X = x$, $Y$ is Gaussian.
Conditional mean of $Y$ given $X = x$:
$$\mu_2(x) = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1) = E(Y \mid X = x).$$
Conditional variance of $Y$ given $X = x$:
$$\mathrm{Var}(Y \mid X = x) = \sigma_2^2(1-\rho^2).$$
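These two formulas can be checked by simulation; the following sketch (added here, with made-up parameter values) conditions on $X$ falling in a narrow window around $x_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, s1, s2, rho = 1.0, -2.0, 1.5, 0.8, 0.6
cov = np.array([[s1**2,       rho*s1*s2],
                [rho*s1*s2,   s2**2   ]])
X, Y = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000).T

x0 = 2.0
sel = np.abs(X - x0) < 0.05            # condition on X being near x0
print(Y[sel].mean(), mu2 + rho * s2 / s1 * (x0 - mu1))   # E(Y | X = x0)
print(Y[sel].var(),  s2**2 * (1 - rho**2))               # Var(Y | X = x0)
```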

53 Bivariate normal properties
Conditional mean of $Y$ given $X = x$:
$$\mu_2(x) = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1) = E(Y \mid X = x).$$
Conditional variance of $Y$ given $X = x$:
$$\mathrm{Var}(Y \mid X = x) = \sigma_2^2(1-\rho^2).$$
Check Section and Exercise. By this it is seen that the conditional mean of $Y$ given $X$ in a bivariate normal distribution is also the best LINEAR predictor of $Y$ based on $X$, and the conditional variance is the variance of the estimation error.

54 Marginal PDFs

55 Proof of conditional pdf
Consider
$$\frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{\sigma_1\sqrt{2\pi}}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, e^{-\frac{1}{2}Q(x,y) + \frac{1}{2\sigma_1^2}(x-\mu_1)^2}.$$

56 Proof of conditional pdf
Write
$$-\tfrac{1}{2}Q(x,y) + \tfrac{1}{2\sigma_1^2}(x-\mu_1)^2 = -\tfrac{1}{2}H(x,y).$$

57 Proof of conditional pdfs
$$H(x,y) = \frac{1}{1-\rho^2}\left[\Big(\frac{x-\mu_1}{\sigma_1}\Big)^2 - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \Big(\frac{y-\mu_2}{\sigma_2}\Big)^2\right] - \Big(\frac{x-\mu_1}{\sigma_1}\Big)^2.$$

58 Proof of conditional pdf
$$H(x,y) = \frac{\rho^2(x-\mu_1)^2}{(1-\rho^2)\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2(1-\rho^2)} + \frac{(y-\mu_2)^2}{\sigma_2^2(1-\rho^2)}.$$

59 Proof of conditional pdf
$$H(x,y) = \frac{\big(y - \mu_2 - \rho\frac{\sigma_2}{\sigma_1}(x-\mu_1)\big)^2}{\sigma_2^2(1-\rho^2)}.$$

60 Conditional pdf
$$\frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{1}{\sqrt{1-\rho^2}\,\sigma_2\sqrt{2\pi}}\, e^{-\frac{1}{2}\,\frac{\left(y-\mu_2-\rho\frac{\sigma_2}{\sigma_1}(x-\mu_1)\right)^2}{\sigma_2^2(1-\rho^2)}}.$$
This establishes the bivariate normal properties claimed above.

61 Bivariate normal properties: ρ
Proposition: $(X,Y)$ bivariate normal $\Rightarrow$ $\rho = \rho_{X,Y}$.
Proof:
$$E[(X-\mu_1)(Y-\mu_2)] = E\big(E\big[(X-\mu_1)(Y-\mu_2) \mid X\big]\big) = E\big((X-\mu_1)\,E\big[(Y-\mu_2) \mid X\big]\big)$$

62 Bivariate normal properties: ρ
$$= E\big((X-\mu_1)\,[E(Y \mid X) - \mu_2]\big) = E\Big((X-\mu_1)\Big[\mu_2 + \rho\frac{\sigma_2}{\sigma_1}(X-\mu_1) - \mu_2\Big]\Big) = \rho\frac{\sigma_2}{\sigma_1}\,E\big((X-\mu_1)(X-\mu_1)\big)$$

63 Bivariate normal properties: ρ
$$= \rho\frac{\sigma_2}{\sigma_1}\,E(X-\mu_1)^2 = \rho\frac{\sigma_2}{\sigma_1}\,\sigma_1^2 = \rho\,\sigma_2\sigma_1.$$

64 Bivariate normal properties: ρ
In other words, we have checked that
$$\rho = \frac{E[(X-\mu_1)(Y-\mu_2)]}{\sigma_2\sigma_1}.$$
For a bivariate normal $(X,Y)$: $\rho = 0$ $\Leftrightarrow$ $X, Y$ are independent.

65 PART 5: Generating a multivariate normal variable

66 Standard Normal Vector: definition
$Z \in N(0, I)$ is a standard normal vector; $I$ is the $n \times n$ identity matrix.
$$f_Z(z) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(I)}}\, e^{-\frac{1}{2}(z-0)^T I^{-1}(z-0)} = \frac{1}{(2\pi)^{n/2}}\, e^{-\frac{1}{2}z^T z}.$$

67 Distribution of X = AZ + b
If $X = AZ + b$ and $Z$ is standard Gaussian, then $X \in N(b, AA^T)$ (follows by a rule in the preceding).

68 Multivariate Normal: the bivariate case
If
$$\Lambda = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix},$$
then $\Lambda = AA^T$, where
$$A = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sigma_2\sqrt{1-\rho^2} \end{pmatrix}.$$
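This factorization is exactly what one uses to generate bivariate normal samples from independent standard normals via $X = AZ$; a minimal sketch (with assumed values of $\sigma_1, \sigma_2, \rho$; not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
s1, s2, rho = 1.0, 2.0, -0.7
A = np.array([[s1,        0.0                      ],
              [rho * s2,  s2 * np.sqrt(1 - rho**2)]])

Z = rng.standard_normal((2, 100_000))   # Z ~ N(0, I), columns are samples
X = A @ Z                               # X = AZ ~ N(0, AA^T) = N(0, Lambda)

print(A @ A.T)       # the target covariance Lambda
print(np.cov(X))     # sample covariance of X, close to Lambda
```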

69 Standard Normal Vector
$X \in N(\mu_X, \Lambda)$, and $A$ is such that $\Lambda = AA^T$. (An invertible matrix $A$ with this property always exists if $\Lambda$ is positive definite; we need the symmetry of $\Lambda$, too.) Then
$$Z = A^{-1}(X - \mu_X)$$
is a standard Gaussian vector. Proof: we give the first idea of this proof, a rule of transformation.

70 Rule of transformation
If $X$ has density $f_X(x)$ and $Y = AX + b$ with $A$ invertible, then
$$f_Y(y) = \frac{1}{|\det A|}\, f_X\big(A^{-1}(y-b)\big).$$
Note that if $\Lambda = AA^T$, then
$$\det\Lambda = \det A\,\det A^T = \det A\,\det A = (\det A)^2,$$
so that $|\det A| = \sqrt{\det\Lambda}$.

71 Johann Carl Friedrich Gauss (30 April 1777 – 23 February 1855)

72 Diagonalizable Matrices
An $n \times n$ matrix $A$ is orthogonally diagonalizable if there is an orthogonal matrix $P$ (i.e., $P^T P = PP^T = I$) such that
$$P^T A P = \Lambda,$$
where $\Lambda$ is a diagonal matrix.

73 Diagonalizable Matrices
Theorem: If $A$ is an $n \times n$ matrix, then the following are equivalent:
(i) $A$ is orthogonally diagonalizable.
(ii) $A$ has an orthonormal set of eigenvectors.
(iii) $A$ is symmetric.
Since covariance matrices are symmetric, we have by the theorem above that all covariance matrices are orthogonally diagonalizable.

74 Diagonalizable Matrices
Theorem: If $A$ is a symmetric matrix, then
(i) the eigenvalues of $A$ are all real numbers;
(ii) eigenvectors from different eigenspaces are orthogonal.
That is, all eigenvalues of a covariance matrix are real.

75 Diagonalizable Matrices
Hence we have for any covariance matrix the spectral decomposition
$$C = \sum_{i=1}^n \lambda_i e_i e_i^T, \qquad (1)$$
where $C e_i = \lambda_i e_i$. Since $C$ is nonnegative definite, and its eigenvectors are orthonormal,
$$0 \le e_i^T C e_i = \lambda_i e_i^T e_i = \lambda_i,$$
and thus the eigenvalues of a covariance matrix are nonnegative.

76 Diagonalizable Matrices
Let now $P$ be an orthogonal matrix such that $P^T C_X P = \Lambda$, and $X \in N(0, C_X)$, i.e., $C_X$ is a covariance matrix and $\Lambda$ is diagonal (with the eigenvalues of $C_X$ on the main diagonal). Then if $Y = P^T X$, we have that $Y \in N(0, \Lambda)$. In other words, $Y$ is a Gaussian vector and has independent components. This method of producing independent Gaussians has several important applications; one of these is principal component analysis.
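A minimal sketch of this decorrelation step (not from the slides; the covariance matrix below is an illustrative assumption), using the eigendecomposition $C_X = P\Lambda P^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
C_X = np.array([[3.0, 1.2],
                [1.2, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], C_X, size=200_000)

lam, P = np.linalg.eigh(C_X)    # C_X = P diag(lam) P^T, with P orthogonal
Y = X @ P                       # each row is y = P^T x

print(np.cov(Y.T))              # approximately diag(lam): the components of Y are
                                # uncorrelated and, being jointly Gaussian, independent
```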
