Appendix A

Some probability and statistics

A.1 Probabilities, random variables and their distributions

We summarize a few of the basic concepts of random variables, usually denoted by capital letters, $X, Y, Z$, etc., and their probability distributions, defined by the cumulative distribution function (CDF), $F_X(x) = P(X \le x)$, etc.

To a random experiment we attach a sample space $\Omega$, which contains all the outcomes that can occur in the experiment. A random variable $X$ is a function defined on the sample space $\Omega$: to each outcome $\omega \in \Omega$ it assigns a real number $X(\omega)$, representing the value of a numeric quantity that can be measured in the experiment. For $X$ to be called a random variable, the probability $P(X \le x)$ has to be defined for all real $x$.

Distributions and moments

A probability distribution with cumulative distribution function $F_X(x)$ can be discrete or continuous, with probability function $p_X(x)$ or probability density function $f_X(x)$, respectively, such that

$$F_X(x) = P(X \le x) = \begin{cases} \sum_{k \le x} p_X(k), & \text{if } X \text{ takes only integer values,} \\ \int_{-\infty}^{x} f_X(y)\,dy, & \text{if } X \text{ is continuous.} \end{cases}$$

A distribution can also be of mixed type; the distribution function is then an integral plus discrete jumps; see Appendix B.

The expectation of a random variable $X$ is defined as the center of gravity of the distribution,

$$E[X] = m_X = \begin{cases} \sum_k k\, p_X(k) & \text{in the discrete case,} \\ \int_{-\infty}^{\infty} x f_X(x)\,dx & \text{in the continuous case.} \end{cases}$$
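To make the two cases concrete, here is a small numerical sketch (not from the book; the Poisson and exponential distributions are chosen only as examples): the expectation, and the variance defined in the next paragraph, computed once from a probability function and once from a density, using numpy and scipy.

    # Approximating E[X] and V[X] for a discrete and a continuous distribution.
    import numpy as np
    from scipy import integrate, stats

    # Discrete case: Poisson(3), truncating the sum where the pmf is negligible.
    k = np.arange(0, 60)
    p = stats.poisson.pmf(k, mu=3.0)
    m_X = np.sum(k * p)                   # E[X] = sum_k k p_X(k)          -> 3.0
    v_X = np.sum((k - m_X) ** 2 * p)      # V[X] = sum_k (k-m_X)^2 p_X(k)  -> 3.0

    # Continuous case: Exponential(1), integrating x f_X(x) and (x-m)^2 f_X(x).
    f = lambda x: np.exp(-x)              # density on [0, inf)
    m, _ = integrate.quad(lambda x: x * f(x), 0, np.inf)             # -> 1.0
    v, _ = integrate.quad(lambda x: (x - m) ** 2 * f(x), 0, np.inf)  # -> 1.0
    print(m_X, v_X, m, v)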

The variance is a simple measure of the spread of the distribution and is defined as

$$V[X] = E[(X - m_X)^2] = E[X^2] - m_X^2 = \begin{cases} \sum_k (k - m_X)^2 p_X(k), \\ \int_{-\infty}^{\infty} (x - m_X)^2 f_X(x)\,dx. \end{cases}$$

Chebyshev's inequality states that, for all $\varepsilon > 0$,

$$P(|X - m_X| > \varepsilon) \le \frac{E[(X - m_X)^2]}{\varepsilon^2}.$$

In order to describe the statistical properties of a random function one needs the notion of multivariate distributions. If the result of an experiment is described by two different quantities, denoted by $X$ and $Y$, e.g. the length and height of a randomly chosen individual in a population, or the values of a random function at two different time points, one has to deal with a two-dimensional random variable. This is described by its two-dimensional distribution function $F_{X,Y}(x,y) = P(X \le x, Y \le y)$, or the corresponding two-dimensional probability or density function,

$$f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}(x,y)}{\partial x\, \partial y}.$$

Two random variables $X, Y$ are independent if, for all $x, y$,

$$F_{X,Y}(x,y) = F_X(x)\, F_Y(y).$$

An important concept is the covariance between two random variables $X$ and $Y$, defined as

$$C[X,Y] = E[(X - m_X)(Y - m_Y)] = E[XY] - m_X m_Y.$$

The correlation coefficient is the dimensionless, normalized covariance,

$$\rho[X,Y] = \frac{C[X,Y]}{\sqrt{V[X]\,V[Y]}}.$$

Two random variables with zero correlation, $\rho[X,Y] = 0$, are called uncorrelated. Note that if two random variables $X$ and $Y$ are independent, then they are also uncorrelated, but the reverse does not hold: it can happen that two uncorrelated variables are dependent. Speaking about the correlation between two random quantities, one often means the degree of covariation between the two. However, one has to remember that the correlation only measures the degree of linear covariation. Another meaning of the term correlation is used in connection with two data series, $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$. Then the sum of products, $\sum_{k=1}^n x_k y_k$, is sometimes called the correlation, and a device that produces this sum is called a correlator.
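The claim that uncorrelated does not imply independent can be seen in a short simulation (an illustration of our own, not from the book): with $X \sim N(0,1)$ and $Y = X^2$, the covariance $C[X,Y] = E[X^3] = 0$, yet $Y$ is completely determined by $X$.

    # Uncorrelated but dependent: X ~ N(0,1), Y = X^2.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000)
    y = x ** 2
    cov = np.mean(x * y) - np.mean(x) * np.mean(y)   # sample C[X,Y]
    print(f"sample C[X,Y] = {cov:.4f}")              # ~0, although Y depends on X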

Conditional distributions

If $X$ and $Y$ are two random variables with bivariate density function $f_{X,Y}(x,y)$, we can define the conditional distribution for $X$ given $Y = y$ by the conditional density,

$$f_{X \mid Y=y}(x) = \frac{f_{X,Y}(x,y)}{f_Y(y)},$$

for every $y$ where the marginal density $f_Y(y)$ is non-zero. The expectation in this distribution, the conditional expectation, is a function of $y$, and is denoted and defined as

$$E[X \mid Y = y] = \int_x x f_{X \mid Y=y}(x)\,dx = m(y).$$

The conditional variance is defined as

$$V[X \mid Y = y] = \int_x (x - m(y))^2 f_{X \mid Y=y}(x)\,dx = \sigma^2_{X \mid Y}(y).$$

The unconditional expectation of $X$ can be obtained from the law of total probability, and computed as

$$E[X] = \int_y \left\{ \int_x x f_{X \mid Y=y}(x)\,dx \right\} f_Y(y)\,dy = E[E[X \mid Y]].$$

The unconditional variance of $X$ is given by

$$V[X] = E[V[X \mid Y]] + V[E[X \mid Y]].$$
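These two laws are easy to verify by simulation. The sketch below uses a hypothetical setup of our own, not taken from the book: $Y \sim N(0,1)$ and $X \mid Y = y \sim N(2y, 1)$, so that $m(y) = 2y$, $\sigma^2_{X \mid Y}(y) = 1$, and hence $E[X] = 0$ and $V[X] = 1 + 4 = 5$.

    # Checking E[X] = E[E[X|Y]] and V[X] = E[V[X|Y]] + V[E[X|Y]] by simulation.
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.standard_normal(1_000_000)
    x = 2.0 * y + rng.standard_normal(1_000_000)   # X | Y=y ~ N(2y, 1)

    # Law of total expectation: E[X] = E[m(Y)] = E[2Y] = 0.
    print(x.mean(), (2 * y).mean())
    # Law of total variance: V[X] = E[1] + V[2Y] = 1 + 4 = 5.
    print(x.var(), 1.0 + (2 * y).var())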

A.2 Multidimensional normal distribution

A one-dimensional normal random variable $X$ with expectation $m$ and variance $\sigma^2$ has probability density function

$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2} \left( \frac{x - m}{\sigma} \right)^2 \right\},$$

and we write $X \sim N(m, \sigma^2)$. If $m = 0$ and $\sigma = 1$, the normal distribution is standardized. If $X \sim N(0,1)$, then $\sigma X + m \sim N(m, \sigma^2)$, and if $X \sim N(m, \sigma^2)$ then $(X - m)/\sigma \sim N(0,1)$. We accept a constant random variable, $X \equiv m$, as a normal variable, $X \sim N(m, 0)$.

Now, let $X_1, \ldots, X_n$ have expectations $m_k = E[X_k]$ and covariances $\sigma_{jk} = C[X_j, X_k]$, and define, with $'$ for transpose,

$$\mu = (m_1, \ldots, m_n)', \qquad \Sigma = (\sigma_{jk}) = \text{the covariance matrix for } X_1, \ldots, X_n.$$

It is a characteristic property of the normal distributions that all linear combinations of a multivariate normal variable also have a normal distribution. To formulate the definition, write $a = (a_1, \ldots, a_n)'$ and $X = (X_1, \ldots, X_n)'$, with $a'X = a_1 X_1 + \cdots + a_n X_n$. Then

$$E[a'X] = a'\mu = \sum_{j=1}^n a_j m_j, \qquad V[a'X] = a'\Sigma a = \sum_{j,k=1}^n a_j a_k \sigma_{jk}. \tag{A.1}$$

Definition A.1. The random variables $X_1, \ldots, X_n$ are said to have an $n$-dimensional normal distribution if every linear combination $a_1 X_1 + \cdots + a_n X_n$ has a normal distribution.

From (A.1) we have that $X = (X_1, \ldots, X_n)'$ is $n$-dimensional normal if and only if $a'X \sim N(a'\mu,\, a'\Sigma a)$ for all $a = (a_1, \ldots, a_n)'$. Obviously, $X_k = 0 \cdot X_1 + \cdots + 1 \cdot X_k + \cdots + 0 \cdot X_n$ is normal, i.e., all marginal distributions in an $n$-dimensional normal distribution are one-dimensional normal. However, the reverse is not necessarily true: there are variables $X_1, \ldots, X_n$, each of which is one-dimensional normal, while the vector $(X_1, \ldots, X_n)$ is not $n$-dimensional normal. It is an important consequence of the definition that sums and differences of $n$-dimensional normal variables have a normal distribution.

If the covariance matrix $\Sigma$ is non-singular, the $n$-dimensional normal distribution has a probability density function, with $x = (x_1, \ldots, x_n)'$,

$$\frac{1}{(2\pi)^{n/2} \sqrt{\det \Sigma}} \exp\left\{ -\frac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu) \right\}. \tag{A.2}$$

The distribution is then said to be non-singular. The density (A.2) is constant on every ellipsoid $(x - \mu)' \Sigma^{-1} (x - \mu) = C$ in $\mathbb{R}^n$.
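Property (A.1) and the density (A.2) can be checked numerically. The following sketch (with illustrative values of $\mu$, $\Sigma$ and $a$ chosen by us, not from the book) samples a three-dimensional normal vector and compares the sample mean and variance of $a'X$ with $a'\mu$ and $a'\Sigma a$.

    # Linear combinations of a multivariate normal vector, as in (A.1).
    import numpy as np
    from scipy import stats

    mu = np.array([1.0, -1.0, 0.5])
    Sigma = np.array([[2.0, 0.6, 0.2],
                      [0.6, 1.0, 0.3],
                      [0.2, 0.3, 0.5]])
    a = np.array([1.0, 2.0, -1.0])

    rng = np.random.default_rng(2)
    X = rng.multivariate_normal(mu, Sigma, size=500_000)
    lin = X @ a

    print(lin.mean(), a @ mu)           # sample mean vs a' mu
    print(lin.var(),  a @ Sigma @ a)    # sample variance vs a' Sigma a

    # The density (A.2) at a point, evaluated via scipy:
    print(stats.multivariate_normal(mean=mu, cov=Sigma).pdf([1.0, 0.0, 0.0]))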

Note: the density function of an $n$-dimensional normal distribution is uniquely determined by the expectations and covariances.

Example A.1. Suppose $X_1, X_2$ have a two-dimensional normal distribution. If $\det \Sigma = \sigma_{11}\sigma_{22} - \sigma_{12}^2 > 0$, then $\Sigma$ is non-singular, and

$$\Sigma^{-1} = \frac{1}{\det \Sigma} \begin{pmatrix} \sigma_{22} & -\sigma_{12} \\ -\sigma_{12} & \sigma_{11} \end{pmatrix}.$$

With $Q(x_1, x_2) = (x - \mu)' \Sigma^{-1} (x - \mu)$,

$$Q(x_1, x_2) = \frac{1}{\sigma_{11}\sigma_{22} - \sigma_{12}^2} \left\{ (x_1 - m_1)^2 \sigma_{22} - 2(x_1 - m_1)(x_2 - m_2)\sigma_{12} + (x_2 - m_2)^2 \sigma_{11} \right\}$$
$$= \frac{1}{1 - \rho^2} \left\{ \left( \frac{x_1 - m_1}{\sqrt{\sigma_{11}}} \right)^2 - 2\rho \left( \frac{x_1 - m_1}{\sqrt{\sigma_{11}}} \right)\left( \frac{x_2 - m_2}{\sqrt{\sigma_{22}}} \right) + \left( \frac{x_2 - m_2}{\sqrt{\sigma_{22}}} \right)^2 \right\},$$

where we also used the correlation coefficient $\rho = \sigma_{12}/\sqrt{\sigma_{11}\sigma_{22}}$, and

$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi \sqrt{\sigma_{11}\sigma_{22}(1 - \rho^2)}} \exp\left\{ -\frac{1}{2} Q(x_1, x_2) \right\}. \tag{A.3}$$

For variables with $m_1 = m_2 = 0$ and $\sigma_{11} = \sigma_{22} = \sigma^2$, the bivariate density is

$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi\sigma^2 \sqrt{1 - \rho^2}} \exp\left\{ -\frac{1}{2\sigma^2(1 - \rho^2)} \left( x_1^2 - 2\rho x_1 x_2 + x_2^2 \right) \right\}.$$

We see that, if $\rho = 0$, this is the density of two independent normal variables.

Figure A.1 shows the density function $f_{X_1,X_2}(x_1, x_2)$ and level curves for a bivariate normal distribution with expectation $\mu = (0, 0)'$ and covariance matrix

$$\Sigma = \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}.$$

The correlation coefficient is $\rho = 0.5$.

Remark A.1. If the covariance matrix $\Sigma$ is singular, and hence non-invertible, then there exists at least one set of constants $a_1, \ldots, a_n$, not all equal to 0, such that $a'\Sigma a = 0$. From (A.1) it follows that $V[a'X] = 0$, which means that $a'X$ is constant, equal to $a'\mu$. The distribution of $X$ is then concentrated on a hyperplane $a'x = \text{constant}$ in $\mathbb{R}^n$. The distribution is said to be singular, and it has no density function in $\mathbb{R}^n$.
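As a sanity check of Example A.1, the explicit density (A.3) can be compared with a library evaluation (a sketch with parameter values of our own choosing, not from the book):

    # The explicit bivariate normal density (A.3) vs scipy's evaluation.
    import numpy as np
    from scipy import stats

    m1, m2 = 0.5, -0.2
    s11, s22, s12 = 1.0, 2.0, 0.7        # sigma_11, sigma_22, sigma_12
    rho = s12 / np.sqrt(s11 * s22)

    def f_biv(x1, x2):
        q = ((x1 - m1)**2 / s11
             - 2 * rho * (x1 - m1) * (x2 - m2) / np.sqrt(s11 * s22)
             + (x2 - m2)**2 / s22) / (1 - rho**2)
        return np.exp(-q / 2) / (2 * np.pi * np.sqrt(s11 * s22 * (1 - rho**2)))

    mvn = stats.multivariate_normal([m1, m2], [[s11, s12], [s12, s22]])
    print(f_biv(0.3, 0.1), mvn.pdf([0.3, 0.1]))   # the two values should match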

[Figure A.1: Two-dimensional normal density. Left: density function; Right: elliptic level curves at levels 0.01, 0.02, 0.05, 0.1, 0.5.]

Remark A.2. Formula (A.1) implies that every covariance matrix $\Sigma$ is positive definite or positive semi-definite, i.e., $\sum_{j,k} a_j a_k \sigma_{jk} \ge 0$ for all $a_1, \ldots, a_n$. Conversely, if $\Sigma$ is a symmetric, positive definite matrix of size $n \times n$, i.e., if

$$\sum_{j,k} a_j a_k \sigma_{jk} > 0 \quad \text{for all } (a_1, \ldots, a_n) \ne (0, \ldots, 0),$$

then (A.2) defines the density function for an $n$-dimensional normal distribution with expectations $m_k$ and covariances $\sigma_{jk}$. Every symmetric, positive definite matrix is a covariance matrix for a non-singular distribution. Furthermore, for every symmetric, positive semi-definite matrix $\Sigma$, i.e., such that

$$\sum_{j,k} a_j a_k \sigma_{jk} \ge 0 \quad \text{for all } a_1, \ldots, a_n,$$

with equality holding for some choice of $(a_1, \ldots, a_n) \ne (0, \ldots, 0)$, there exists an $n$-dimensional normal distribution that has $\Sigma$ as its covariance matrix.

For $n$-dimensional normal variables, uncorrelated and independent are equivalent.

Theorem A.1. If the random variables $X_1, \ldots, X_n$ are $n$-dimensional normal and uncorrelated, then they are independent.
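Before proving Theorem A.1, note that Remark A.2 suggests a simple computational test (a sketch of our own, not from the book): a symmetric matrix is a valid covariance matrix of a non-singular normal distribution exactly when all its eigenvalues are positive, while a zero eigenvalue signals the singular case of Remark A.1.

    # Classifying a symmetric matrix as in Remarks A.1 and A.2.
    import numpy as np

    def classify_covariance(S, tol=1e-10):
        """Decide whether S can be the covariance matrix of a normal vector."""
        if not np.allclose(S, S.T):
            return "not symmetric: not a covariance matrix"
        eig = np.linalg.eigvalsh(S)          # eigenvalues of a symmetric matrix
        if np.all(eig > tol):
            return "positive definite: non-singular normal distribution"
        if np.all(eig > -tol):
            return "positive semi-definite: singular normal distribution"
        return "indefinite: not a covariance matrix"

    print(classify_covariance(np.array([[1.0, 0.5], [0.5, 1.0]])))  # non-singular
    print(classify_covariance(np.array([[1.0, 1.0], [1.0, 1.0]])))  # singular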

Proof. We show the theorem only for non-singular variables with density function; it is true also for singular normal variables. If $X_1, \ldots, X_n$ are uncorrelated, $\sigma_{jk} = 0$ for $j \ne k$, then $\Sigma$, and also $\Sigma^{-1}$, are diagonal matrices, and (note that $\sigma_{jj} = V[X_j]$)

$$\det \Sigma = \prod_j \sigma_{jj}, \qquad \Sigma^{-1} = \begin{pmatrix} \sigma_{11}^{-1} & & 0 \\ & \ddots & \\ 0 & & \sigma_{nn}^{-1} \end{pmatrix}.$$

This means that $(x - \mu)' \Sigma^{-1} (x - \mu) = \sum_j (x_j - \mu_j)^2 / \sigma_{jj}$, and the density (A.2) is

$$\prod_j \frac{1}{\sqrt{2\pi\sigma_{jj}}} \exp\left\{ -\frac{(x_j - m_j)^2}{2\sigma_{jj}} \right\}.$$

Hence, the joint density function for $X_1, \ldots, X_n$ is a product of the marginal densities, which says that the variables are independent.

A.2.1 Conditional normal distribution

This section deals with partial observations in a multivariate normal distribution. It is a special property of this distribution that conditioning on observed values of a subset of the variables leads to a conditional distribution for the unobserved variables that is also normal. Furthermore, the expectation in the conditional distribution is linear in the observations, and the covariance matrix does not depend on the observed values. This property is particularly useful in prediction of Gaussian time series, as formulated by the Kalman filter.

Conditioning in the bivariate normal distribution

Let $X$ and $Y$ have a bivariate normal distribution with expectations $m_X$ and $m_Y$, variances $\sigma_X^2$ and $\sigma_Y^2$, respectively, and with correlation coefficient $\rho = C[X,Y]/(\sigma_X \sigma_Y)$. The simultaneous density function is given by (A.3). The conditional density function for $X$ given that $Y = y$ is

$$f_{X \mid Y=y}(x) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{\sigma_X \sqrt{1 - \rho^2}\, \sqrt{2\pi}} \exp\left\{ -\frac{\bigl( x - (m_X + \sigma_X \rho (y - m_Y)/\sigma_Y) \bigr)^2}{2\sigma_X^2 (1 - \rho^2)} \right\}.$$

Hence, the conditional distribution of $X$ given $Y = y$ is normal with expectation and variance

$$m_{X \mid Y=y} = m_X + \sigma_X \rho (y - m_Y)/\sigma_Y, \qquad \sigma^2_{X \mid Y=y} = \sigma_X^2 (1 - \rho^2).$$

Note: the conditional expectation depends linearly on the observed $y$-value, and the conditional variance is constant, independent of $Y = y$.
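These formulas can be checked by brute force (a simulation sketch with parameters chosen for illustration, not from the book): sample the bivariate normal, keep only outcomes with $Y$ near a fixed $y_0$, and compare the conditional sample mean and variance with the formulas above.

    # Crude empirical check of m_{X|Y=y} and sigma^2_{X|Y=y}.
    import numpy as np

    m_X, m_Y, s_X, s_Y, rho = 1.0, -2.0, 1.5, 0.5, 0.8
    rng = np.random.default_rng(3)
    z = rng.multivariate_normal(
        [m_X, m_Y],
        [[s_X**2, rho * s_X * s_Y], [rho * s_X * s_Y, s_Y**2]],
        size=2_000_000)
    x, y = z[:, 0], z[:, 1]

    y0 = -1.8
    sel = np.abs(y - y0) < 0.01          # approximate conditioning on Y = y0
    print(x[sel].mean(), m_X + s_X * rho * (y0 - m_Y) / s_Y)   # -> ~1.48
    print(x[sel].var(),  s_X**2 * (1 - rho**2))                # -> ~0.81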

Conditioning in the multivariate normal distribution

Let $X = (X_1, \ldots, X_n)'$ and $Y = (Y_1, \ldots, Y_m)'$ be two multivariate normal variables, of size $n$ and $m$, respectively, such that $Z = (X_1, \ldots, X_n, Y_1, \ldots, Y_m)'$ is $(n+m)$-dimensional normal. Denote the expectations $E[X] = m_X$, $E[Y] = m_Y$, and partition the covariance matrix for $Z$, with $\Sigma_{XY} = \Sigma_{YX}'$,

$$\Sigma = \mathrm{Cov}\left[ \begin{pmatrix} X \\ Y \end{pmatrix}; \begin{pmatrix} X \\ Y \end{pmatrix} \right] = \begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{pmatrix}. \tag{A.4}$$

If the covariance matrix $\Sigma$ is positive definite, the distribution of $(X, Y)$ has the density function

$$f_{X,Y}(x,y) = \frac{1}{(2\pi)^{(m+n)/2} \sqrt{\det \Sigma}} \exp\left\{ -\frac{1}{2} \begin{pmatrix} x - m_X \\ y - m_Y \end{pmatrix}' \Sigma^{-1} \begin{pmatrix} x - m_X \\ y - m_Y \end{pmatrix} \right\},$$

while the $m$-dimensional density of $Y$ is

$$f_Y(y) = \frac{1}{(2\pi)^{m/2} \sqrt{\det \Sigma_{YY}}} \exp\left\{ -\frac{1}{2} (y - m_Y)' \Sigma_{YY}^{-1} (y - m_Y) \right\}.$$

To find the conditional density of $X$ given that $Y = y$,

$$f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}, \tag{A.5}$$

we need the following matrix property.

Theorem A.2 (Matrix inversion lemma). Let $B$ be a $p \times p$ matrix ($p = n + m$),

$$B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix},$$

where the sub-matrices have dimensions $n \times n$, $n \times m$, etc. Suppose $B$, $B_{11}$, $B_{22}$ are non-singular, and partition the inverse in the same way as $B$,

$$A = B^{-1} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}.$$

Then

$$A = \begin{pmatrix} (B_{11} - B_{12} B_{22}^{-1} B_{21})^{-1} & -(B_{11} - B_{12} B_{22}^{-1} B_{21})^{-1} B_{12} B_{22}^{-1} \\ -(B_{22} - B_{21} B_{11}^{-1} B_{12})^{-1} B_{21} B_{11}^{-1} & (B_{22} - B_{21} B_{11}^{-1} B_{12})^{-1} \end{pmatrix}.$$

Proof. For the proof, see a matrix theory textbook, for example [22].
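The lemma is also easy to verify numerically; the following sketch (ours, not from the book) builds a random positive definite matrix, applies the block formula, and compares the result with a direct inverse.

    # Numerical check of the block-inverse formula of Theorem A.2.
    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 3, 2
    M = rng.standard_normal((n + m, n + m))
    B = M @ M.T + (n + m) * np.eye(n + m)   # symmetric positive definite
    B11, B12 = B[:n, :n], B[:n, n:]
    B21, B22 = B[n:, :n], B[n:, n:]

    S1 = np.linalg.inv(B11 - B12 @ np.linalg.inv(B22) @ B21)  # (B11 - B12 B22^-1 B21)^-1
    S2 = np.linalg.inv(B22 - B21 @ np.linalg.inv(B11) @ B12)  # (B22 - B21 B11^-1 B12)^-1
    A = np.block([[S1,                           -S1 @ B12 @ np.linalg.inv(B22)],
                  [-S2 @ B21 @ np.linalg.inv(B11), S2]])

    print(np.allclose(A, np.linalg.inv(B)))   # True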

Theorem A.3 (Conditional normal distribution). The conditional distribution for $X$, given that $Y = y$, is $n$-dimensional normal with expectation and covariance matrix

$$E[X \mid Y = y] = m_{X \mid Y=y} = m_X + \Sigma_{XY} \Sigma_{YY}^{-1} (y - m_Y), \tag{A.6}$$
$$C[X \mid Y = y] = \Sigma_{XX \mid Y} = \Sigma_{XX} - \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX}. \tag{A.7}$$

These formulas are easy to remember: the dimensions of the sub-matrices, for example in the covariance matrix $\Sigma_{XX \mid Y}$, are the only ones possible for the matrix multiplications in the right-hand side to be meaningful.
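Before the proof, note that formulas (A.6) and (A.7) translate directly into code; the sketch below (with a hypothetical partitioning and illustrative numbers of our own) wraps them in a small function.

    # Conditional mean and covariance of X given Y = y, per (A.6) and (A.7).
    import numpy as np

    def condition_on_y(m_X, m_Y, S_XX, S_XY, S_YY, y):
        """Return m_{X|Y=y} and Sigma_{XX|Y} for a jointly normal (X, Y)."""
        K = S_XY @ np.linalg.inv(S_YY)      # Sigma_XY Sigma_YY^{-1}
        m_cond = m_X + K @ (y - m_Y)        # linear in the observation y
        S_cond = S_XX - K @ S_XY.T          # independent of the observed y
        return m_cond, S_cond

    m_X = np.array([0.0, 1.0]); m_Y = np.array([2.0])
    S_XX = np.array([[1.0, 0.3], [0.3, 2.0]])
    S_XY = np.array([[0.5], [0.2]])         # Sigma_YX = S_XY.T
    S_YY = np.array([[1.5]])
    print(condition_on_y(m_X, m_Y, S_XX, S_XY, S_YY, np.array([2.5])))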

Proof. To simplify calculations, we start with $m_X = m_Y = 0$, and add the expectations afterwards. The conditional distribution of $X$ given that $Y = y$ is, according to (A.5), the ratio between two multivariate normal densities, and hence it is of the form

$$c \exp\left\{ -\frac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}' \Sigma^{-1} \begin{pmatrix} x \\ y \end{pmatrix} + \frac{1}{2} y' \Sigma_{YY}^{-1} y \right\} = c \exp\left\{ -\frac{1}{2} Q(x,y) \right\},$$

where $c$ is a normalization constant, independent of $x$ and $y$. The matrix $\Sigma$ can be partitioned as in (A.4), and if we use the matrix inversion lemma, we find that

$$\Sigma^{-1} = A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \tag{A.8}$$
$$= \begin{pmatrix} (\Sigma_{XX} - \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX})^{-1} & -(\Sigma_{XX} - \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX})^{-1} \Sigma_{XY} \Sigma_{YY}^{-1} \\ -(\Sigma_{YY} - \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY})^{-1} \Sigma_{YX} \Sigma_{XX}^{-1} & (\Sigma_{YY} - \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY})^{-1} \end{pmatrix}.$$

We also see that

$$Q(x,y) = x'A_{11}x + x'A_{12}y + y'A_{21}x + y'A_{22}y - y'\Sigma_{YY}^{-1}y = (x - Cy)' A_{11} (x - Cy) + \widetilde{Q}(y)$$
$$= x'A_{11}x - x'A_{11}Cy - y'C'A_{11}x + y'C'A_{11}Cy + \widetilde{Q}(y),$$

for some matrix $C$ and quadratic form $\widetilde{Q}(y)$ in $y$.

Here $A_{11} = \Sigma_{XX \mid Y}^{-1}$, according to (A.7) and (A.8), while we can find $C$ by solving

$$A_{11} C = -A_{12}, \quad \text{i.e.,} \quad C = -A_{11}^{-1} A_{12} = \Sigma_{XY} \Sigma_{YY}^{-1},$$

according to (A.8). This is precisely the matrix in (A.6). If we reinstate the deleted $m_X$ and $m_Y$, we get the conditional density for $X$ given $Y = y$ to be of the form

$$c \exp\left\{ -\frac{1}{2} Q(x,y) \right\} = c \exp\left\{ -\frac{1}{2} (x - m_{X \mid Y=y})' \Sigma_{XX \mid Y}^{-1} (x - m_{X \mid Y=y}) \right\},$$

which is the normal density we were looking for.

A.2.2 Complex normal variables

In most of the book, we have assumed all random variables to be real valued. In many applications, and also in the mathematical background, it is advantageous to consider complex variables, simply defined as $Z = X + iY$, where $X$ and $Y$ have a bivariate distribution. The mean value of a complex random variable is simply $E[Z] = E[\Re Z] + i\,E[\Im Z]$, while the variance and covariances are defined with complex conjugate on the second variable,

$$C[Z_1, Z_2] = E[Z_1 \overline{Z_2}] - m_{Z_1} \overline{m_{Z_2}}, \qquad V[Z] = C[Z, Z] = E[|Z|^2] - |m_Z|^2.$$

Note that, for a complex $Z = X + iY$ with $V[X] = V[Y] = \sigma^2$,

$$C[Z, Z] = V[X] + V[Y] = 2\sigma^2, \qquad C[Z, \overline{Z}] = V[X] - V[Y] + 2i\,C[X,Y] = 2i\,C[X,Y].$$

Hence, if the real and imaginary parts are uncorrelated with the same variance, then the complex variable $Z$ is uncorrelated with its own complex conjugate, $\overline{Z}$. Often, one uses the term orthogonal, instead of uncorrelated, for complex variables.
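The identities above can be illustrated by simulation (a sketch of our own, not from the book): for $Z$ with uncorrelated real and imaginary parts of common variance $\sigma^2$, the sample versions of $C[Z,Z]$ and $C[Z,\overline{Z}]$ should be close to $2\sigma^2$ and $0$, respectively.

    # Complex variance and pseudo-covariance for Z = X + iY.
    import numpy as np

    rng = np.random.default_rng(5)
    sigma = 1.3
    z = sigma * (rng.standard_normal(1_000_000)
                 + 1j * rng.standard_normal(1_000_000))

    var_z  = np.mean(z * np.conj(z)) - abs(z.mean())**2   # C[Z,Z]    -> 2 sigma^2
    pseudo = np.mean(z * z) - z.mean()**2                 # C[Z,Zbar] -> 0
    print(var_z.real, 2 * sigma**2, pseudo)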
