Multivariate Normal Distribution





Multivariate Normal Distribution
Lecture 4, July 21, 2011
Advanced Multivariate Statistical Methods
ICPSR Summer Session #2
Lecture #4 - 7/21/2011, Slide 1 of 41

Last Time: matrices and vectors; eigenvalues, eigenvectors, and determinants; basic descriptive statistics using matrices (mean vectors, covariance matrices, and correlation matrices).

Today's Lecture: putting our new knowledge to use with a useful statistical distribution, the multivariate normal distribution. This roughly maps onto Chapter 4 of Johnson and Wichern.

Multivariate Normal Distribution. The generalization of the univariate normal distribution to multiple variables is called the multivariate normal distribution (MVN). Many multivariate techniques rely on this distribution in some manner. Although real data may never come from a true MVN, the MVN provides a robust approximation and has many nice mathematical properties. Furthermore, because of the central limit theorem, many multivariate statistics converge to the MVN distribution as the sample size increases.

Univariate Normal Distribution. The univariate normal distribution function is:

f(x) = [1 / √(2πσ²)] exp( −[(x − µ)/σ]² / 2 )

The mean is µ, the variance is σ², and the standard deviation is σ. Standard notation for normal distributions is N(µ, σ²), which will be extended for the MVN distribution.
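As a quick numerical check of the density formula above, here is a minimal Python sketch (the function name is my own, not from the lecture):

```python
import math

def normal_pdf(x, mu=0.0, sigma2=1.0):
    """Density of N(mu, sigma2) evaluated at x."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

# At the mean with sigma2 = 1, the height is 1/sqrt(2*pi) ~ 0.399
peak = normal_pdf(0.0)
```

Evaluating the density at the mean always gives 1/√(2πσ²), the peak of the curve, which matches the heights seen in the plots that follow.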

Univariate Normal Distribution: N(0, 1). [Figure: density curve of N(0, 1), f(x) plotted over x from −6 to 6.]

Univariate Normal Distribution: N(0, 2). [Figure: density curve of N(0, 2), f(x) plotted over x from −6 to 6.]

Univariate Normal Distribution: N(1.75, 1). [Figure: density curve of N(1.75, 1), f(x) plotted over x from −6 to 6.]

UVN - Notes. The area under the curve for the univariate normal distribution is a function of the variance/standard deviation. In particular:

P(µ − σ ≤ X ≤ µ + σ) = 0.683
P(µ − 2σ ≤ X ≤ µ + 2σ) = 0.954

Also note the term in the exponent:

[(x − µ)/σ]² = (x − µ)(σ²)⁻¹(x − µ)

This is the square of the distance from x to µ in standard deviation units, and will be generalized for the MVN.
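The 0.683 and 0.954 coverage values above can be verified directly from the normal CDF, which for a standardized variable reduces to the error function. A minimal sketch:

```python
import math

def coverage(k):
    """P(mu - k*sigma <= X <= mu + k*sigma) for X ~ N(mu, sigma^2).

    For a standard normal Z, P(|Z| <= k) = erf(k / sqrt(2)).
    """
    return math.erf(k / math.sqrt(2.0))

one_sd = coverage(1)   # ~ 0.683
two_sd = coverage(2)   # ~ 0.954
```

Note the coverage depends only on k, the distance in standard deviation units, not on µ or σ themselves.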

The multivariate normal distribution function is:

f(x) = [1 / ((2π)^{p/2} |Σ|^{1/2})] exp( −(x − µ)'Σ⁻¹(x − µ) / 2 )

The mean vector is µ; the covariance matrix is Σ. Standard notation for multivariate normal distributions is N_p(µ, Σ). Visualizing the MVN is difficult for more than two dimensions, so I will demonstrate some plots with two variables - the bivariate normal distribution.
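For the bivariate case (p = 2), the MVN density above can be computed by hand, since a 2×2 matrix has a closed-form determinant and inverse. A minimal sketch (function name is my own):

```python
import math

def bvn_pdf(x, mu, Sigma):
    """Bivariate normal density; Sigma is a 2x2 list of lists."""
    (a, b), (c, d) = Sigma
    det = a * d - b * c
    # closed-form inverse of a 2x2 matrix
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    # quadratic form (x - mu)' Sigma^{-1} (x - mu)
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    # (2*pi)^{p/2} |Sigma|^{1/2} with p = 2 is 2*pi*sqrt(det)
    return math.exp(-q / 2.0) / (2.0 * math.pi * math.sqrt(det))
```

At the mean of a standard bivariate normal the density is 1/(2π) ≈ 0.159, which matches the peak height visible in Bivariate Normal Plot #1 below.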

Bivariate Normal Plot #1: µ = [0, 0]', Σ = [[1, 0], [0, 1]]. [Figure: 3-D surface plot of the bivariate normal density over both variables from −4 to 4; peak height about 0.16.]

Bivariate Normal Plot #1a: µ = [0, 0]', Σ = [[1, 0], [0, 1]]. [Figure: contour plot of the same density; circular contours centered at the origin.]

Bivariate Normal Plot #2: µ = [0, 0]', Σ = [[1, 0.5], [0.5, 1]]. [Figure: 3-D surface plot of the bivariate normal density over both variables from −4 to 4; peak height about 0.2.]

Bivariate Normal Plot #2: µ = [0, 0]', Σ = [[1, 0.5], [0.5, 1]]. [Figure: contour plot of the same density; elliptical contours tilted along the 45-degree line.]

Contours. The lines of a contour plot denote places of equal density for the MVN distribution: each line connects the points of both variables that lead to the same height on the z-axis (the height of the density surface). These contours can be constructed from the eigenvalues and eigenvectors of the covariance matrix: the directions of the ellipse axes are given by the eigenvectors, and the axis half-lengths are proportional to the square roots of the corresponding eigenvalues. Specifically,

(x − µ)'Σ⁻¹(x − µ) = c²

defines ellipsoids centered at µ with axes ±c √λ_i e_i.

Contours, Continued. Contours are useful because they provide confidence regions for data points from the MVN distribution. The multivariate analog of a confidence interval is an ellipsoid, where c² is taken from the chi-squared distribution with p degrees of freedom. Specifically,

(x − µ)'Σ⁻¹(x − µ) ≤ χ²_p(α)

provides the confidence region containing 1 − α of the probability mass of the distribution.

Contour Example. Imagine we had a bivariate normal distribution with:

µ = [0, 0]', Σ = [[1, 0.5], [0.5, 1]]

The covariance matrix has eigenvalues and eigenvectors:

λ₁ = 1.5, λ₂ = 0.5
e₁ = [0.707, 0.707]', e₂ = [0.707, −0.707]'

We want to find a contour where 95% of the probability will fall, corresponding to χ²₂(0.05) = 5.99.

Contour Example. This contour will be centered at µ.

Axis 1: µ ± √5.99 · √1.5 · [0.707, 0.707]' = [2.12, 2.12]' and [−2.12, −2.12]'

Axis 2: µ ± √5.99 · √0.5 · [0.707, −0.707]' = [1.22, −1.22]' and [−1.22, 1.22]'
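The axis endpoints above can be reproduced numerically. For a 2×2 matrix of this equicorrelation form, [[1, r], [r, 1]], the eigenpairs are known in closed form (λ = 1 ± r with eigenvectors along ±45 degrees), so no eigensolver is needed; a minimal sketch:

```python
import math

Sigma = [[1.0, 0.5], [0.5, 1.0]]
r = Sigma[0][1]

# closed-form eigenpairs for [[1, r], [r, 1]]
lam = [1.0 + r, 1.0 - r]                       # eigenvalues: 1.5 and 0.5
e = [[1 / math.sqrt(2), 1 / math.sqrt(2)],     # eigenvector for 1.5
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]    # eigenvector for 0.5

c2 = 5.99                                      # chi-square(2) 95th percentile, from the slide
half_axes = [math.sqrt(c2 * l) for l in lam]   # half-lengths c * sqrt(lambda_i)
endpoints = [[math.sqrt(c2 * l) * ev for ev in vec]
             for l, vec in zip(lam, e)]        # +/- endpoints of each axis
```

The first axis endpoint comes out near [2.12, 2.12]' and the second near [1.22, −1.22]', matching the hand computation.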

The MVN distribution has some convenient properties. If X has a multivariate normal distribution, then:
1. Linear combinations of X are normally distributed.
2. All subsets of the components of X have a MVN distribution.
3. Zero covariance implies that the corresponding components are independently distributed.
4. The conditional distributions of the components are multivariate normal.

Linear Combinations. If X ~ N_p(µ, Σ), then any set of q linear combinations given by a matrix A (q × p) is also normally distributed, as AX ~ N_q(Aµ, AΣA').

For example, let p = 3 and let Y be the difference between X₁ and X₂. The combination matrix would be A = [1 −1 0].

For X: µ = [µ₁, µ₂, µ₃]', Σ = [[σ₁₁, σ₁₂, σ₁₃], [σ₂₁, σ₂₂, σ₂₃], [σ₃₁, σ₃₂, σ₃₃]]

For Y = AX: µ_Y = [µ₁ − µ₂], Σ_Y = [σ₁₁ + σ₂₂ − 2σ₁₂]
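The Aµ and AΣA' result for this example can be checked numerically. A minimal sketch with hypothetical values for µ and Σ (they are illustrative only, not from the lecture):

```python
# Y = X1 - X2, so A = [1, -1, 0]
mu = [5.0, 3.0, 2.0]            # hypothetical mean vector
Sigma = [[4.0, 1.0, 0.5],
         [1.0, 9.0, 0.2],
         [0.5, 0.2, 1.0]]       # hypothetical covariance matrix
A = [1.0, -1.0, 0.0]

mu_Y = sum(a * m for a, m in zip(A, mu))            # A mu
var_Y = sum(A[i] * Sigma[i][j] * A[j]
            for i in range(3) for j in range(3))    # A Sigma A'
```

With these values, µ_Y = µ₁ − µ₂ = 2 and Σ_Y = σ₁₁ + σ₂₂ − 2σ₁₂ = 4 + 9 − 2 = 11, exactly as the closed-form expressions predict.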

The MVN distribution is characterized by two parameters: the mean vector µ and the covariance matrix Σ. The maximum likelihood estimates for these parameters are:

The mean vector: x̄ = (1/n) Σ_{i=1}^{n} x_i

The covariance matrix: S = (1/n) Σ_{i=1}^{n} (x_i − x̄)(x_i − x̄)'
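These two estimators are straightforward to compute directly from the data matrix. A minimal sketch (note the ML covariance estimate divides by n, not n − 1):

```python
def ml_estimates(X):
    """ML mean vector and covariance matrix (divisor n) for the rows of X."""
    n, p = len(X), len(X[0])
    xbar = [sum(row[j] for row in X) / n for j in range(p)]
    S = [[sum((row[j] - xbar[j]) * (row[k] - xbar[k]) for row in X) / n
          for k in range(p)]
         for j in range(p)]
    return xbar, S

xbar, S = ml_estimates([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```

For this toy data the mean vector is [3, 4]' and every entry of S is 8/3, since both variables move together perfectly.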

Distribution of x̄ and S. Recall from univariate statistics the Central Limit Theorem (CLT). It stated that, whether the set of n observations x₁, x₂, ..., xₙ is normal or not, the distribution of x̄ is approximately normal with mean µ and variance σ²/n. We were also told that (n − 1)s²/σ² has a chi-squared distribution with n − 1 degrees of freedom. Note: we used these pieces of information for hypothesis testing, such as t-tests and ANOVA.

Distribution of x̄ and S. We also have a multivariate Central Limit Theorem (CLT). It states that, whether the set of n observations x₁, x₂, ..., xₙ is multivariate normal or not, the distribution of x̄ is approximately multivariate normal with mean µ and variance/covariance matrix Σ/n. We are also told that (n − 1)S has a Wishart distribution, W_p(n − 1, Σ), with n − 1 degrees of freedom; this is the multivariate analog of the chi-squared distribution. Note: we will use some of this information for multivariate hypothesis testing.

Distribution of x̄ and S. Therefore, let x₁, x₂, ..., xₙ be independent observations from a population with mean µ and covariance Σ. The following are true:

√n (x̄ − µ) is approximately N_p(0, Σ)
n (x̄ − µ)'S⁻¹(x̄ − µ) is approximately χ²_p

Sufficient Statistics. The sample estimates x̄ and S are sufficient statistics. This means that all of the information contained in the data can be summarized by these two statistics alone. This is only true if the data follow a multivariate normal distribution; if they do not, other terms are needed (e.g., skewness array, kurtosis array, etc.). Some statistical methods use only one or both of these matrices in their analysis procedures, and not the actual data.

Density and Likelihood Functions. The MVN distribution is often the core statistical distribution for a uni- or multivariate statistical technique. Maximum likelihood estimates are preferable in statistics due to a set of desirable asymptotic properties, including:

Consistency: the estimator converges in probability to the value being estimated.
Asymptotic normality: the estimator has a normal distribution with a functionally known variance.
Efficiency: no asymptotically unbiased estimator has lower asymptotic mean squared error than the MLE.

The form of the ML function frequently appears in statistics, so we will briefly discuss MLE using normal distributions.

An Introduction to Maximum Likelihood. Maximum likelihood estimation seeks to find parameters of a statistical model (mapping onto the mean vector and/or covariance matrix) such that the statistical likelihood function is maximized. The method assumes the data follow a statistical distribution, in our case the MVN. More frequently, the log-likelihood function is used instead of the likelihood function: the logged and un-logged versions of the function have a maximum at the same point, and the logged version is easier mathematically.

Maximum Likelihood for the Univariate Normal. We will start with the univariate normal case and then generalize. Imagine we have a sample of data, X, which we will assume is normally distributed with an unknown mean but a known variance (say the variance is 1). We will build the maximum likelihood function for the mean. Our function rests on two assumptions: (1) all data follow a normal distribution; (2) all observations are independent. Put into statistical terms: X is independent and identically distributed (iid) as N₁(µ, 1).

Building the Likelihood Function. Each observation, then, follows a normal distribution with the same (unknown) mean and variance 1. The distribution function begins with the density, the function that provides the normal curve (with 1 in place of σ²):

f(x_i | µ) = [1 / √(2π(1))] exp( −(x_i − µ)² / (2(1)) )

The density provides the likelihood of observing an observation x_i for a given value of µ (and a known value of σ² = 1). The likelihood is the height of the normal curve.

The One-Observation Likelihood Function. [Figure: plot of f(x_i | µ = 1) for a range of x from −4 to 4. Vertical lines indicate f(x_i = 0 | µ = 1) = .241 and f(x_i = 1 | µ = 1) = .399.]

The Overall Likelihood Function. Because we have a sample of N observations, our likelihood function is taken across all observations, not just one. The joint likelihood function uses the assumption that observations are independent to be expressed as a product of likelihood functions across all observations:

L(x | µ) = f(x₁ | µ) · f(x₂ | µ) · ... · f(x_N | µ)
L(x | µ) = ∏_{i=1}^{N} f(x_i | µ) = [1 / (2π(1))]^{N/2} exp( −Σ_{i=1}^{N} (x_i − µ)² / (2(1)) )

The value of µ that maximizes L(x | µ) is the MLE (in this case, it is the sample mean). In more complicated models, the MLE does not have a closed form and therefore must be found using numeric methods.

The Overall Log-Likelihood Function. For an unknown mean µ and variance σ², the likelihood function is:

L(x | µ, σ²) = [1 / (2πσ²)]^{N/2} exp( −Σ_{i=1}^{N} (x_i − µ)² / (2σ²) )

More commonly, the log-likelihood function is used:

l(x | µ, σ²) = −(N/2) log(2πσ²) − Σ_{i=1}^{N} (x_i − µ)² / (2σ²)
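The claim that the log-likelihood (with known variance) peaks at the sample mean is easy to verify numerically. A minimal sketch, using a small hypothetical sample of my own:

```python
import math

def log_likelihood(x, mu, sigma2):
    """Normal log-likelihood of a sample x evaluated at (mu, sigma2)."""
    n = len(x)
    ss = sum((xi - mu) ** 2 for xi in x)
    return -(n / 2.0) * math.log(2.0 * math.pi * sigma2) - ss / (2.0 * sigma2)

x = [1.2, 0.7, 2.1, 1.4]        # hypothetical sample
xbar = sum(x) / len(x)

at_mean = log_likelihood(x, xbar, 1.0)
```

Evaluating at any µ other than x̄ gives a strictly smaller value, since the sum of squares Σ(x_i − µ)² is minimized at the sample mean.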

The Multivariate Normal Likelihood Function. For a set of N independent observations on p variables, X (N × p), the multivariate normal likelihood function is formed using a similar approach. For an unknown mean vector µ and covariance Σ, the joint likelihood is:

L(X | µ, Σ) = ∏_{i=1}^{N} [1 / ((2π)^{p/2} |Σ|^{1/2})] exp( −(x_i − µ)'Σ⁻¹(x_i − µ) / 2 )
= (2π)^{−Np/2} |Σ|^{−N/2} exp( −Σ_{i=1}^{N} (x_i − µ)'Σ⁻¹(x_i − µ) / 2 )

The Multivariate Normal Likelihood Function. Occasionally, a more intricate form of the likelihood function shows up. Although mathematically identical to the function on the last page, this version typically appears without explanation:

L(X | µ, Σ) = (2π)^{−Np/2} |Σ|^{−N/2} exp( −tr[ Σ⁻¹ ( Σ_{i=1}^{N} (x_i − x̄)(x_i − x̄)' + N(x̄ − µ)(x̄ − µ)' ) ] / 2 )

The Log-Likelihood Function. As with the univariate case, the likelihood function is typically converted into a log-likelihood function for simplicity. The log-likelihood function is given by:

l(X | µ, Σ) = −(Np/2) log(2π) − (N/2) log|Σ| − (1/2) Σ_{i=1}^{N} (x_i − µ)'Σ⁻¹(x_i − µ)
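For the bivariate case (p = 2) the MVN log-likelihood above can be coded directly, again using the closed-form 2×2 inverse and determinant. A minimal sketch (function name is my own):

```python
import math

def mvn_loglik(X, mu, Sigma):
    """Bivariate-normal (p = 2) log-likelihood for the rows of X."""
    (a, b), (c, d) = Sigma
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    n = len(X)
    quad = 0.0
    for row in X:
        dx = [row[0] - mu[0], row[1] - mu[1]]
        quad += (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
                 + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    # -(N*p/2) log(2*pi) with p = 2 is -n*log(2*pi)
    return -n * math.log(2.0 * math.pi) - (n / 2.0) * math.log(det) - quad / 2.0

val = mvn_loglik([[0.0, 0.0]], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

As a sanity check, for a single observation the log-likelihood equals the log of the density; at the mean of a standard bivariate normal that is log(1/(2π)).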

But...Why? The MVN distribution, likelihood, and log-likelihood functions show up frequently in statistical methods. Commonly used methods rely on versions of the MVN distribution, methods such as:

Linear models (ANOVA, regression)
Mixed models (i.e., hierarchical linear models, random effects models, multilevel models)
Path models/simultaneous equation models
Structural equation models (and confirmatory factor models)
Many versions of finite mixture models

Understanding the form of the MVN distribution will help to understand the commonalities between each of these models.

The MVN in Mixed Models. From the SAS manual for proc mixed: [Figure: excerpt from the SAS proc mixed documentation; not reproduced in this transcription.]

The MVN in Structural Equation Models. From the SAS manual for proc calis: this package uses only the covariance matrix, so the form of the likelihood function is phrased using only the Wishart distribution:

w(S | Σ) = |S|^{(n−p−2)/2} exp( −tr(SΣ⁻¹)/2 ) / [ 2^{p(n−1)/2} π^{p(p−1)/4} |Σ|^{(n−1)/2} ∏_{i=1}^{p} Γ((n − i)/2) ]

Recall from earlier that IF the data have a multivariate normal distribution, then all of the previously discussed properties will hold. A host of methods have been developed to assess multivariate normality; just look in Johnson & Wichern. Given the relative robustness of the MVN distribution, I will skip this topic, acknowledging that extreme deviations from normality will result in poorly performing statistics. More often than not, assessing multivariate normality is fraught with difficulty due to sample-estimated parameters of the distribution.

Transformations to Near Normality. Historically, people have gone on an expedition to find a transformation to near-normality upon learning their data may not be normal. Modern statistical methods, however, make that a very bad idea. More often than not, transformations end up changing the nature of the statistics you are interested in forming. Furthermore, not all data need to be normal (think conditional distributions).

Final Thoughts. The multivariate normal distribution is the multivariate analog of the univariate normal distribution. The MVN distribution will play a large role in the upcoming weeks. We can finally put the background material to rest and begin learning some statistical methods. Tomorrow: lab with SAS, the fun of proc iml. Up next week: inferences about mean vectors and multivariate ANOVA.