10-705/36-705 Intermediate Statistics



10-705/36-705 Intermediate Statistics
Larry Wasserman
http://www.stat.cmu.edu/~larry/=stat705/
Fall 2011

Syllabus

Week of      | Class I              | Class II             | Class III            | Class IV
August 29    | Review               | Review, Inequalities | Inequalities         |
September 5  | No Class             | $O_P$                | HW 1 [sol]           | VC Theory
September 12 | Convergence          | Convergence          | HW 2 [sol]           | Test I
September 19 | Convergence Addendum | Sufficiency          | HW 3 [sol]           | Sufficiency
September 26 | Likelihood           | Point Estimation     | HW 4 [sol]           | Minimax Theory
October 3    | Minimax Summary      | Asymptotics          | HW 5 [sol]           | Asymptotics
October 10   | Asymptotics          | Review               | Test II              |
October 17   | Testing              | Testing              | HW 6 [sol]           | Mid-semester Break
October 24   | Testing              | Confidence Intervals | HW 7 [sol]           | Confidence Intervals
October 31   | Nonparametric        | Nonparametric        | Review               |
November 7   | Test III             | No Class             | HW 8 [sol]           | The Bootstrap
November 14  | The Bootstrap        | Bayesian Inference   | HW 9 [sol]           | Bayesian Inference
November 21  | No Class             | No Class             | No Class             |
November 28  | Prediction           | Prediction           | HW 10 [sol]          | Model Selection
December 5   | Multiple Testing     | Causation            | Individual Sequences | Practice Final

10-705/36-705: Intermediate Statistics, Fall 2010

Professor: Larry Wasserman
Office: Baker Hall 8 A
Email: larry@stat.cmu.edu
Phone: 68-877
Office hours: Mondays, 1:30 - 2:30
Class Time: Mon-Wed-Fri 11:30 - 12:20
Location: GHC 4307
TAs: Wanjie Wang and Xiaolin Yang
Website: http://www.stat.cmu.edu/~larry/=stat705

Objective: This course will cover the fundamentals of theoretical statistics. Topics include: point and interval estimation, hypothesis testing, data reduction, convergence concepts, Bayesian inference, nonparametric statistics and bootstrap resampling. We will cover Chapters 5 - 10 from Casella and Berger plus some supplementary material. This course is excellent preparation for advanced work in Statistics and Machine Learning.

Textbook: Casella, G. and Berger, R. L. (2002). Statistical Inference, 2nd ed.

Background: I assume that you are familiar with the material in Chapters 1 - 4 of Casella and Berger.

Other Recommended Texts:
Wasserman, L. (2004). All of Statistics: A Concise Course in Statistical Inference.
Bickel, P. J. and Doksum, K. A. (1977). Mathematical Statistics.
Rice, J. A. (1977). Mathematical Statistics and Data Analysis, Second Edition.

Grading:
20%: Test I (Sept. 6) on the material of Chapters 1 - 4
20%: Test II (October 4)
20%: Test III (November 7)
30%: Final Exam (Date set by the University)
10%: Homework

Exams: All exams are closed book. Do NOT buy a plane ticket until the final exam has been scheduled.

Homework: Homework assignments will be posted on the web. Hand in homework to Mari Alice McShane, 8 Baker Hall, by 3 pm Thursday. No late homework.

Reading and Class Notes: Class notes will be posted on the web regularly. Bring a copy to class. The notes are not meant to be a substitute for the book and hence are generally quite terse. Read both the notes and the text before lecture. Sometimes I will cover topics from other sources.

Group Work: You are encouraged to work with others on the homework. But write up your final solutions on your own.

Course Outline:
1. Quick Review of Chapters 1 - 4
2. Inequalities
3. Vapnik-Chervonenkis Theory
4. Convergence
5. Sufficiency
6. Likelihood
7. Point Estimation
8. Minimax Theory
9. Asymptotics
10. Robustness
11. Hypothesis Testing
12. Confidence Intervals
13. Nonparametric Inference
14. Prediction and Classification
15. The Bootstrap
16. Bayesian Inference
17. Markov Chain Monte Carlo
18. Model Selection

Lecture Notes 1
Quick Review of Basic Probability (Casella and Berger Chapters 1-4)

1 Probability Review

Chapters 1-4 are a review. I will assume you have read and understood Chapters 1-4. Let us recall some of the key ideas.

1.1 Random Variables

A random variable is a map $X$ from a probability space $\Omega$ to $\mathbb{R}$. We write
$$P(X \in A) = P(\{\omega \in \Omega : X(\omega) \in A\})$$
and we write $X \sim P$ to mean that $X$ has distribution $P$. The cumulative distribution function (cdf) of $X$ is $F_X(x) = F(x) = P(X \le x)$. If $X$ is discrete, its probability mass function (pmf) is $p_X(x) = p(x) = P(X = x)$. If $X$ is continuous, then its probability density function (pdf) satisfies
$$P(X \in A) = \int_A p_X(x)\,dx = \int_A p(x)\,dx$$
and $p_X(x) = p(x) = F'(x)$. The following are all equivalent: $X \sim P$, $X \sim F$, $X \sim p$.

Suppose that $X \sim P$ and $Y \sim Q$. We say that $X$ and $Y$ have the same distribution if $P(X \in A) = Q(Y \in A)$ for all $A$. In other words, $P = Q$. In that case we say that $X$ and $Y$ are equal in distribution and we write $X \stackrel{d}{=} Y$. It can be shown that $X \stackrel{d}{=} Y$ if and only if $F_X(t) = F_Y(t)$ for all $t$.

1.2 Expected Values

The mean or expected value of $g(X)$ is
$$E(g(X)) = \int g(x)\,dF(x) = \int g(x)\,dP(x) = \begin{cases} \int g(x)\,p(x)\,dx & \text{if } X \text{ is continuous} \\ \sum_j g(x_j)\,p(x_j) & \text{if } X \text{ is discrete.} \end{cases}$$

Recall that:

1. $E(\sum_{j=1}^k c_j g_j(X)) = \sum_{j=1}^k c_j E(g_j(X))$.
2. If $X_1, \ldots, X_n$ are independent then $E\left(\prod_{i=1}^n X_i\right) = \prod_i E(X_i)$.
3. We often write $\mu = E(X)$.
4. $\sigma^2 = \mathrm{Var}(X) = E((X-\mu)^2)$ is the variance.
5. $\mathrm{Var}(X) = E(X^2) - \mu^2$.
6. If $X_1, \ldots, X_n$ are independent then $\mathrm{Var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_i a_i^2\,\mathrm{Var}(X_i)$.
7. The covariance is $\mathrm{Cov}(X,Y) = E((X-\mu_X)(Y-\mu_Y)) = E(XY) - \mu_X\mu_Y$ and the correlation is $\rho(X,Y) = \mathrm{Cov}(X,Y)/(\sigma_X\sigma_Y)$. Recall that $-1 \le \rho(X,Y) \le 1$.

The conditional expectation of $Y$ given $X$ is the random variable $E(Y|X)$ whose value, when $X = x$, is $E(Y|X=x) = \int y\,p(y|x)\,dy$ where $p(y|x) = p(x,y)/p(x)$.

The Law of Total Expectation, or Law of Iterated Expectation:
$$E(Y) = E[E(Y|X)] = \int E(Y|X=x)\,p_X(x)\,dx.$$
The Law of Total Variance is
$$\mathrm{Var}(Y) = \mathrm{Var}[E(Y|X)] + E[\mathrm{Var}(Y|X)].$$

The $n$th moment is $E(X^n)$ and the $n$th central moment is $E((X-\mu)^n)$. The moment generating function (mgf) is $M_X(t) = E(e^{tX})$. Then $M_X^{(n)}(t)\big|_{t=0} = E(X^n)$. If $M_X(t) = M_Y(t)$ for all $t$ in an interval around 0, then $X \stackrel{d}{=} Y$.

1.3 Exponential Families

A family of distributions $\{p(x;\theta) : \theta \in \Theta\}$ is called an exponential family if
$$p(x;\theta) = h(x)\,c(\theta)\,\exp\left\{\sum_{i=1}^k w_i(\theta)\,t_i(x)\right\}.$$

Example 1. $X \sim \mathrm{Poisson}(\lambda)$ is an exponential family since
$$p(x) = P(X = x) = \frac{e^{-\lambda}\lambda^x}{x!} = \frac{1}{x!}\,e^{-\lambda}\exp\{(\log\lambda)\,x\}.$$

Example 2. $X \sim U(0,\theta)$ is not an exponential family. The density is $p_X(x) = \frac{1}{\theta} I_{(0,\theta)}(x)$ where $I_A(x) = 1$ if $x \in A$ and 0 otherwise.

We can rewrite an exponential family in terms of a natural parameterization. For $k = 1$ we have
$$p(x;\eta) = h(x)\exp\{\eta\,t(x) - A(\eta)\}$$
where $A(\eta) = \log\int h(x)\exp\{\eta\,t(x)\}\,dx$. For example, a Poisson can be written as $p(x;\eta) = \exp\{\eta x - e^\eta\}/x!$ where the natural parameter is $\eta = \log\lambda$.

Let $X$ have an exponential family distribution. Then
$$E(t(X)) = A'(\eta), \qquad \mathrm{Var}(t(X)) = A''(\eta).$$
Practice Problem: Prove the above result.
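The mean and variance identities above are easy to check numerically. Here is a minimal sketch (not part of the original notes; the parameter value and the use of NumPy/SciPy are my own choices) for the Poisson family in natural form, where $A(\eta) = e^\eta$ and $t(x) = x$:

```python
# Check E[t(X)] = A'(eta) and Var[t(X)] = A''(eta) for the Poisson family
# p(x; eta) = exp{eta*x - e^eta}/x!, so A(eta) = e^eta.
import numpy as np
from scipy.stats import poisson

lam = 3.5
eta = np.log(lam)               # natural parameter
x = np.arange(0, 200)           # truncate the support far into the tail
p = poisson.pmf(x, lam)

mean = np.sum(x * p)            # E[t(X)]
var = np.sum((x - mean) ** 2 * p)
print(mean, np.exp(eta))        # both approximately 3.5 = A'(eta)
print(var, np.exp(eta))         # both approximately 3.5 = A''(eta)
```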

1.4 Transformations

Let $Y = g(X)$. Then
$$F_Y(y) = P(Y \le y) = P(g(X) \le y) = \int_{A_y} p_X(x)\,dx$$
where $A_y = \{x : g(x) \le y\}$. Then $p_Y(y) = F_Y'(y)$. If $g$ is monotonic, then
$$p_Y(y) = p_X(h(y))\left|\frac{dh(y)}{dy}\right|$$
where $h = g^{-1}$.

Example 3. Let $p_X(x) = e^{-x}$ for $x > 0$. Hence $F_X(x) = 1 - e^{-x}$. Let $Y = g(X) = \log X$. Then
$$F_Y(y) = P(Y \le y) = P(\log X \le y) = P(X \le e^y) = F_X(e^y) = 1 - e^{-e^y}$$
and $p_Y(y) = e^y e^{-e^y}$ for $y \in \mathbb{R}$.

Example 4. Practice problem. Let $X$ be uniform on $(-1, 1)$ and let $Y = X^2$. Find the density of $Y$.

Let $Z = g(X, Y)$. For example, $Z = X + Y$ or $Z = X/Y$. Then we find the pdf of $Z$ as follows:
1. For each $z$, find the set $A_z = \{(x,y) : g(x,y) \le z\}$.
2. Find the cdf
$$F_Z(z) = P(Z \le z) = P(g(X,Y) \le z) = \iint_{A_z} p_{X,Y}(x,y)\,dx\,dy.$$
3. The pdf is $p_Z(z) = F_Z'(z)$.

Example 5. Practice problem. Let $(X, Y)$ be uniform on the unit square. Let $Z = X/Y$. Find the density of $Z$.
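Example 3 can be checked by simulation. A minimal sketch (my addition, not from the notes; assumes NumPy, and the sample size and seed are arbitrary):

```python
# If X ~ exp(1) and Y = log X, the derived density is p_Y(y) = e^y * exp(-e^y).
import numpy as np

rng = np.random.default_rng(0)
y = np.log(rng.exponential(scale=1.0, size=200_000))

hist, edges = np.histogram(y, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(centers) * np.exp(-np.exp(centers))
print(np.max(np.abs(hist - pdf)))   # small: the histogram tracks the derived pdf
```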

1.5 Independence

Recall that $X$ and $Y$ are independent if and only if
$$P(X \in A, Y \in B) = P(X \in A)\,P(Y \in B)$$
for all $A$ and $B$.

Theorem 6. Let $(X, Y)$ be a bivariate random vector with density $p_{X,Y}(x,y)$. Then $X$ and $Y$ are independent iff $p_{X,Y}(x,y) = p_X(x)\,p_Y(y)$.

$X_1, \ldots, X_n$ are independent if and only if $P(X_1 \in A_1, \ldots, X_n \in A_n) = \prod_{i=1}^n P(X_i \in A_i)$. Thus, $p_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \prod_{i=1}^n p_{X_i}(x_i)$.

If $X_1, \ldots, X_n$ are independent and identically distributed we say they are iid (or that they are a random sample) and we write $X_1, \ldots, X_n \sim P$ or $X_1, \ldots, X_n \sim F$ or $X_1, \ldots, X_n \sim p$.

1.6 Important Distributions

$X \sim N(\mu, \sigma^2)$ if $p(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/(2\sigma^2)}$. If $X \in \mathbb{R}^d$ then $X \sim N(\mu, \Sigma)$ if
$$p(x) = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right).$$
$X \sim \chi^2_p$ if $X = \sum_{j=1}^p Z_j^2$ where $Z_1, \ldots, Z_p \sim N(0,1)$.
$X \sim \mathrm{Bernoulli}(\theta)$ if $P(X=1) = \theta$ and $P(X=0) = 1-\theta$, and hence $p(x) = \theta^x(1-\theta)^{1-x}$, $x = 0, 1$.
$X \sim \mathrm{Binomial}(n,\theta)$ if $p(x) = P(X=x) = \binom{n}{x}\theta^x(1-\theta)^{n-x}$, $x \in \{0,\ldots,n\}$.
$X \sim \mathrm{Uniform}(0,\theta)$ if $p(x) = I(0 \le x \le \theta)/\theta$.

1.7 Sample Mean and Variance

The sample mean is $\bar{X}_n = \frac{1}{n}\sum_i X_i$ and the sample variance is $S_n^2 = \frac{1}{n-1}\sum_i (X_i - \bar{X}_n)^2$. Let $X_1, \ldots, X_n$ be iid with $\mu = E(X_i)$ and $\sigma^2 = \mathrm{Var}(X_i)$. Then
$$E(\bar{X}_n) = \mu, \qquad \mathrm{Var}(\bar{X}_n) = \frac{\sigma^2}{n}, \qquad E(S_n^2) = \sigma^2.$$

Theorem 7. If $X_1, \ldots, X_n \sim N(\mu, \sigma^2)$ then
(a) $\bar{X}_n \sim N(\mu, \sigma^2/n)$
(b) $\frac{(n-1)S_n^2}{\sigma^2} \sim \chi^2_{n-1}$
(c) $\bar{X}_n$ and $S_n^2$ are independent.
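A quick simulation check of the three facts in Section 1.7 (my addition, not from the notes; assumes NumPy, with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, mu, sigma = 10, 100_000, 2.0, 3.0
x = rng.normal(mu, sigma, size=(reps, n))

xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)           # sample variance with the 1/(n-1) factor
print(xbar.mean(), mu)               # E(Xbar) = mu
print(xbar.var(), sigma**2 / n)      # Var(Xbar) = sigma^2/n
print(s2.mean(), sigma**2)           # E(S^2) = sigma^2 (unbiasedness)
```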

1.8 Delta Method

If $X \sim N(\mu, \sigma^2)$, $Y = g(X)$ and $\sigma$ is small, then
$$Y \approx N\left(g(\mu),\ \sigma^2(g'(\mu))^2\right).$$
To see this, note that
$$Y = g(X) = g(\mu) + (X-\mu)g'(\mu) + \frac{(X-\mu)^2}{2}g''(\xi)$$
for some $\xi$. Now $E((X-\mu)^2) = \sigma^2$, which we are assuming is small, and so
$$Y = g(X) \approx g(\mu) + (X-\mu)g'(\mu).$$
Thus $E(Y) \approx g(\mu)$ and $\mathrm{Var}(Y) \approx (g'(\mu))^2\sigma^2$. Hence
$$g(X) \approx N\left(g(\mu),\ (g'(\mu))^2\sigma^2\right).$$

Appendix: Useful Facts

Facts about sums:
- $\sum_{i=1}^n i = \frac{n(n+1)}{2}$.
- $\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6}$.
- Geometric series: $a + ar + ar^2 + \cdots = \frac{a}{1-r}$, for $0 < r < 1$.
- Partial geometric series: $a + ar + ar^2 + \cdots + ar^{n-1} = \frac{a(1-r^n)}{1-r}$.
- Binomial Theorem:
$$\sum_{x=0}^n \binom{n}{x}a^x = (1+a)^n, \qquad \sum_{x=0}^n \binom{n}{x}a^x b^{n-x} = (a+b)^n.$$
- Hypergeometric identity:
$$\sum_{x=0}^n \binom{a}{x}\binom{b}{n-x} = \binom{a+b}{n}.$$

Common Distributions

Discrete Uniform. $X \sim U(1, \ldots, N)$. $X$ takes values $x = 1, 2, \ldots, N$, with $P(X=x) = 1/N$.
$$E(X) = \sum_x x\,P(X=x) = \sum_x \frac{x}{N} = \frac{N+1}{2}, \qquad E(X^2) = \sum_x x^2\,P(X=x) = \sum_x \frac{x^2}{N} = \frac{(N+1)(2N+1)}{6}.$$

Binomial. $X \sim \mathrm{Bin}(n, p)$. $X$ takes values $x = 0, 1, \ldots, n$, with $P(X=x) = \binom{n}{x}p^x(1-p)^{n-x}$.

Hypergeometric. $X \sim \mathrm{Hypergeometric}(N, M, K)$.
$$P(X=x) = \frac{\binom{M}{x}\binom{N-M}{K-x}}{\binom{N}{K}}.$$

Geometric. $X \sim \mathrm{Geom}(p)$. $P(X=x) = (1-p)^{x-1}p$, $x = 1, 2, \ldots$.
$$E(X) = \sum_x x(1-p)^{x-1}p = -p\sum_x \frac{d}{dp}\left((1-p)^x\right) = -p\,\frac{d}{dp}\left(\frac{1-p}{p}\right) = \frac{1}{p}.$$

Poisson. $X \sim \mathrm{Poisson}(\lambda)$. $P(X=x) = \frac{e^{-\lambda}\lambda^x}{x!}$, $x = 0, 1, 2, \ldots$. $E(X) = \mathrm{Var}(X) = \lambda$.
$$M_X(t) = \sum_{x=0}^\infty e^{tx}\,\frac{e^{-\lambda}\lambda^x}{x!} = e^{-\lambda}\sum_{x=0}^\infty \frac{(\lambda e^t)^x}{x!} = e^{-\lambda}e^{\lambda e^t} = e^{\lambda(e^t - 1)}.$$
Then $E(X) = \lambda e^t e^{\lambda(e^t-1)}\big|_{t=0} = \lambda$. Use the mgf to show: if $X_1 \sim \mathrm{Poisson}(\lambda_1)$ and $X_2 \sim \mathrm{Poisson}(\lambda_2)$ are independent, then $Y = X_1 + X_2 \sim \mathrm{Poisson}(\lambda_1 + \lambda_2)$.
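The Poisson additivity fact can also be seen by simulation. A minimal sketch (my addition, not from the notes; assumes NumPy/SciPy, with arbitrary rates):

```python
# Check that X1 + X2 ~ Poisson(lam1 + lam2) for independent Poissons.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
lam1, lam2, reps = 1.5, 2.5, 200_000
y = rng.poisson(lam1, reps) + rng.poisson(lam2, reps)

print(y.mean(), y.var())                        # both near lam1 + lam2 = 4
for k in range(5):
    print(k, (y == k).mean(), poisson.pmf(k, lam1 + lam2))
```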

Continuous Distributions

Normal. $X \sim N(\mu, \sigma^2)$.
$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{1}{2\sigma^2}(x-\mu)^2\right\}, \quad x \in \mathbb{R}.$$
The mgf is $M_X(t) = \exp\{\mu t + \sigma^2 t^2/2\}$. $E(X) = \mu$, $\mathrm{Var}(X) = \sigma^2$.

E.g., if $Z \sim N(0,1)$ and $X = \mu + \sigma Z$, then $X \sim N(\mu, \sigma^2)$. Show this...

Proof.
$$M_X(t) = E(e^{tX}) = E(e^{t(\mu+\sigma Z)}) = e^{t\mu}E(e^{t\sigma Z}) = e^{t\mu}M_Z(t\sigma) = e^{t\mu}e^{(t\sigma)^2/2} = e^{t\mu + t^2\sigma^2/2}$$
which is the mgf of a $N(\mu, \sigma^2)$.

Alternative proof:
$$F_X(x) = P(X \le x) = P(\mu + \sigma Z \le x) = P\left(Z \le \frac{x-\mu}{\sigma}\right) = F_Z\left(\frac{x-\mu}{\sigma}\right),$$
so
$$p_X(x) = F_X'(x) = \frac{1}{\sigma}p_Z\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right\},$$
which is the pdf of a $N(\mu, \sigma^2)$.

Gamma. $X \sim \Gamma(\alpha, \beta)$.
$$p_X(x) = \frac{1}{\Gamma(\alpha)\beta^\alpha}\,x^{\alpha-1}e^{-x/\beta}, \quad x \text{ a positive real}, \qquad \Gamma(\alpha) = \frac{1}{\beta^\alpha}\int_0^\infty x^{\alpha-1}e^{-x/\beta}\,dx.$$
An important statistical distribution: $\chi^2_p = \Gamma\left(\frac{p}{2}, 2\right)$. Also $\chi^2_p = \sum_{i=1}^p X_i^2$, where the $X_i \sim N(0,1)$, iid.

Exponential. $X \sim \mathrm{exp}(\beta)$: $p_X(x) = \frac{1}{\beta}e^{-x/\beta}$, $x$ a positive real. $\mathrm{exp}(\beta) = \Gamma(1, \beta)$.

E.g., used to model the waiting time of a Poisson process. Suppose $N$ is the number of phone calls in one hour and $N \sim \mathrm{Poisson}(\lambda)$. Let $T$ be the time between consecutive phone calls; then $T \sim \mathrm{exp}(1/\lambda)$ and $E(T) = 1/\lambda$.

If $X_1, \ldots, X_n$ are iid $\mathrm{exp}(\beta)$, then $\sum_i X_i \sim \Gamma(n, \beta)$.

Memoryless Property: If $X \sim \mathrm{exp}(\beta)$, then $P(X > t + s \mid X > t) = P(X > s)$.

Linear Regression. Model the response ($Y$) as a linear function of the parameters and covariates ($x$) plus random error ($\epsilon$):
$$Y_i = \theta(x_i, \beta) + \epsilon_i$$
where $\theta(x, \beta) = X\beta = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k$.

Generalized Linear Model. Model the natural parameters as linear functions of the covariates. Example: Logistic Regression.
$$P(Y = 1 \mid X = x) = \frac{e^{\beta^T x}}{1 + e^{\beta^T x}}.$$
In other words, $Y \mid X = x \sim \mathrm{Bin}(1, p(x))$ and
$$\eta(x) = \log\left(\frac{p(x)}{1 - p(x)}\right)$$
where $\eta(x) = \beta^T x$. Logistic regression consists of modelling the natural parameter, which is called the log odds ratio, as a linear function of covariates.

Location and Scale Families (CB 3.5). Let $p(x)$ be a pdf.

Location family: $\{p(x \mid \mu) = p(x - \mu) : \mu \in \mathbb{R}\}$
Scale family: $\left\{p(x \mid \sigma) = \frac{1}{\sigma}f\left(\frac{x}{\sigma}\right) : \sigma > 0\right\}$
Location-Scale family: $\left\{p(x \mid \mu, \sigma) = \frac{1}{\sigma}f\left(\frac{x-\mu}{\sigma}\right) : \mu \in \mathbb{R}, \sigma > 0\right\}$

(1) Location family: shifts the pdf. E.g., uniform with $p(x) = 1$ on $(0,1)$ and $p(x-\theta) = 1$ on $(\theta, \theta+1)$. E.g., normal with standard pdf the density of a $N(0,1)$ and location family pdf $N(\theta, 1)$.
(2) Scale family: stretches the pdf. E.g., normal with standard pdf the density of a $N(0,1)$ and scale family pdf $N(0, \sigma^2)$.
(3) Location-Scale family: stretches and shifts the pdf. E.g., normal with standard pdf the density of a $N(0,1)$ and location-scale family pdf $N(\theta, \sigma^2)$, i.e., $\frac{1}{\sigma}p\left(\frac{x-\mu}{\sigma}\right)$.

Multinomial Distribution. The multivariate version of a Binomial is called a Multinomial. Consider drawing a ball from an urn which has balls with $k$ different colors labeled color 1, color 2, ..., color $k$. Let $p = (p_1, p_2, \ldots, p_k)$ where $\sum_j p_j = 1$ and $p_j$ is the probability of drawing color $j$. Draw $n$ balls from the urn (independently and with replacement) and let $X = (X_1, X_2, \ldots, X_k)$ be the count of the number of balls of each color drawn. We say that $X$ has a Multinomial$(n, p)$ distribution. The pdf is
$$p(x) = \binom{n}{x_1,\ldots,x_k}p_1^{x_1}\cdots p_k^{x_k}.$$

Multivariate Normal Distribution. We now define the multivariate normal distribution and derive its basic properties. We want to allow the possibility of multivariate normal distributions whose covariance matrix is not necessarily positive definite. Therefore, we cannot define the distribution by its density function. Instead we define the distribution by its moment generating function. (The reader may wonder how a random vector can have a moment generating function if it has no density function. However, the moment generating function can be defined using more general types of integration. In this book, we assume that such a definition is possible but find the moment generating function by elementary means.) We find the density function for the case of a positive definite covariance matrix in Theorem 12.

Lemma 8.
(a) Let $X = AY + b$. Then $M_X(t) = \exp(b^T t)\,M_Y(A^T t)$.
(b) Let $c$ be a constant and let $Z = cY$. Then $M_Z(t) = M_Y(ct)$.
(c) Let $Y = \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}$, $t = \begin{pmatrix} t_1 \\ t_2 \end{pmatrix}$. Then
$$M_{Y_1}(t_1) = M_Y\begin{pmatrix} t_1 \\ 0 \end{pmatrix}.$$
(d) $Y_1$ and $Y_2$ are independent if and only if
$$M_Y\begin{pmatrix} t_1 \\ t_2 \end{pmatrix} = M_Y\begin{pmatrix} t_1 \\ 0 \end{pmatrix}M_Y\begin{pmatrix} 0 \\ t_2 \end{pmatrix}.$$

We start with $Z_1, \ldots, Z_n$ independent random variables such that $Z_i \sim N(0,1)$. Let $Z = (Z_1, \ldots, Z_n)^T$. Then
$$E(Z) = 0, \quad \mathrm{cov}(Z) = I, \quad M_Z(t) = \exp\left\{\frac{1}{2}\sum_i t_i^2\right\} = \exp\left\{\frac{1}{2}t^T t\right\}. \tag{1}$$
Let $\mu$ be a vector and $A$ a matrix, and let $Y = AZ + \mu$. Then
$$E(Y) = \mu, \quad \mathrm{cov}(Y) = AA^T. \tag{2}$$
Let $\Sigma = AA^T$. We now show that the distribution of $Y$ depends only on $\mu$ and $\Sigma$. The moment generating function $M_Y(t)$ is given by
$$M_Y(t) = \exp(\mu^T t)\,M_Z(A^T t) = \exp\left(\mu^T t + \frac{1}{2}t^T(AA^T)t\right) = \exp\left(\mu^T t + \frac{1}{2}t^T\Sigma t\right).$$
With this motivation in mind, let $\mu$ be a vector and let $\Sigma$ be a nonnegative definite matrix. Then we say that the $n$-dimensional random vector $Y$ has an $n$-dimensional normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$, written $Y \sim N_n(\mu, \Sigma)$, if $Y$ has moment generating function
$$M_Y(t) = \exp\left(\mu^T t + \frac{1}{2}t^T\Sigma t\right). \tag{3}$$
The following theorem summarizes some elementary facts about multivariate normal distributions.

Theorem 9.
(a) If $Y \sim N_n(\mu, \Sigma)$, then $E(Y) = \mu$, $\mathrm{cov}(Y) = \Sigma$.
(b) If $Y \sim N_n(\mu, \Sigma)$ and $c$ is a scalar, then $cY \sim N_n(c\mu, c^2\Sigma)$.
(c) Let $Y \sim N_n(\mu, \Sigma)$. If $A$ is $p \times n$ and $b$ is $p \times 1$, then $AY + b \sim N_p(A\mu + b, A\Sigma A^T)$.
(d) Let $\mu$ be any vector and let $\Sigma$ be any nonnegative definite matrix. Then there exists $Y$ such that $Y \sim N_n(\mu, \Sigma)$.

Proof. (a) This follows directly from (1) above. (b) and (c): homework. (d) Let $Z_1, \ldots, Z_n$ be independent, $Z_i \sim N(0,1)$, and let $Z = (Z_1, \ldots, Z_n)^T$. It is easily verified that $Z \sim N_n(0, I)$. Let $Y = \Sigma^{1/2}Z + \mu$. By part (c) above, $Y \sim N_n(\Sigma^{1/2}0 + \mu, \Sigma) = N_n(\mu, \Sigma)$.

We have now shown that the family of normal distributions is preserved under linear operations on the random vectors. We now show that it is preserved under taking marginal and conditional distributions.
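The construction in the proof of part (d) is exactly how multivariate normal samples are generated in practice. A minimal sketch (my addition, not from the notes; assumes NumPy, and Cholesky is one convenient choice of square root $A$ with $AA^T = \Sigma$):

```python
# Sample Y = A Z + mu with A A^T = Sigma, then check the mean and covariance.
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

A = np.linalg.cholesky(Sigma)        # any A with A @ A.T == Sigma works
Z = rng.standard_normal((100_000, 2))
Y = Z @ A.T + mu                     # each row is one draw of Y = A Z + mu

print(Y.mean(axis=0))                # approximately mu
print(np.cov(Y.T))                   # approximately Sigma
```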

Theorem 10. Suppose that $Y \sim N_n(\mu, \Sigma)$. Let
$$Y = \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}, \quad \mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}$$
where $Y_1$ and $\mu_1$ are $p \times 1$ and $\Sigma_{11}$ is $p \times p$.
(a) $Y_1 \sim N_p(\mu_1, \Sigma_{11})$, $Y_2 \sim N_{n-p}(\mu_2, \Sigma_{22})$.
(b) $Y_1$ and $Y_2$ are independent if and only if $\Sigma_{12} = 0$.
(c) If $\Sigma_{22} > 0$, then the conditional distribution of $Y_1$ given $Y_2$ is
$$Y_1 \mid Y_2 \sim N_p\left(\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(Y_2 - \mu_2),\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right).$$

Proof. (a) Let $t = (t_1^T, t_2^T)^T$ where $t_1$ is $p \times 1$. The joint moment generating function of $Y_1$ and $Y_2$ is
$$M_Y(t) = \exp\left(\mu_1^T t_1 + \mu_2^T t_2 + \frac{1}{2}\left(t_1^T\Sigma_{11}t_1 + t_1^T\Sigma_{12}t_2 + t_2^T\Sigma_{21}t_1 + t_2^T\Sigma_{22}t_2\right)\right).$$
Therefore,
$$M_Y\begin{pmatrix} t_1 \\ 0 \end{pmatrix} = \exp\left(\mu_1^T t_1 + \frac{1}{2}t_1^T\Sigma_{11}t_1\right), \qquad M_Y\begin{pmatrix} 0 \\ t_2 \end{pmatrix} = \exp\left(\mu_2^T t_2 + \frac{1}{2}t_2^T\Sigma_{22}t_2\right).$$
By Lemma 8(c), we see that $Y_1 \sim N_p(\mu_1, \Sigma_{11})$ and $Y_2 \sim N_{n-p}(\mu_2, \Sigma_{22})$.
(b) We note that
$$M_Y(t) = M_Y\begin{pmatrix} t_1 \\ 0 \end{pmatrix}M_Y\begin{pmatrix} 0 \\ t_2 \end{pmatrix}$$
if and only if $t_1^T\Sigma_{12}t_2 + t_2^T\Sigma_{21}t_1 = 0$. Since $\Sigma$ is symmetric and $t_1^T\Sigma_{12}t_2$ is a scalar, we see that $t_1^T\Sigma_{12}t_2 = t_2^T\Sigma_{21}t_1$. Finally, $t_1^T\Sigma_{12}t_2 = 0$ for all $t_1 \in \mathbb{R}^p$, $t_2 \in \mathbb{R}^{n-p}$ if and only if $\Sigma_{12} = 0$, and the result follows from Lemma 8(d).
(c) We first find the joint distribution of $X = Y_1 - \Sigma_{12}\Sigma_{22}^{-1}Y_2$ and $Y_2$:
$$\begin{pmatrix} X \\ Y_2 \end{pmatrix} = \begin{pmatrix} I & -\Sigma_{12}\Sigma_{22}^{-1} \\ 0 & I \end{pmatrix}\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}.$$
Therefore, by Theorem 9(c), the joint distribution of $X$ and $Y_2$ is
$$\begin{pmatrix} X \\ Y_2 \end{pmatrix} \sim N\left(\begin{pmatrix} \mu_1 - \Sigma_{12}\Sigma_{22}^{-1}\mu_2 \\ \mu_2 \end{pmatrix},\ \begin{pmatrix} \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} & 0 \\ 0 & \Sigma_{22} \end{pmatrix}\right)$$
and hence $X$ and $Y_2$ are independent. Therefore, the conditional distribution of $X$ given $Y_2$ is the same as the marginal distribution of $X$:
$$X \mid Y_2 \sim N_p\left(\mu_1 - \Sigma_{12}\Sigma_{22}^{-1}\mu_2,\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right).$$

Since $Y_2$ is just a constant in the conditional distribution of $X$ given $Y_2$, we have, by Theorem 9(c), that the conditional distribution of $Y_1 = X + \Sigma_{12}\Sigma_{22}^{-1}Y_2$ given $Y_2$ is
$$Y_1 \mid Y_2 \sim N_p\left(\mu_1 - \Sigma_{12}\Sigma_{22}^{-1}\mu_2 + \Sigma_{12}\Sigma_{22}^{-1}Y_2,\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right).$$
Note that we need $\Sigma_{22} > 0$ in part (c) so that $\Sigma_{22}^{-1}$ exists.

Lemma 11. Let $Y \sim N_n(\mu, \sigma^2 I)$, where $Y = (Y_1, \ldots, Y_n)^T$, $\mu = (\mu_1, \ldots, \mu_n)^T$ and $\sigma^2 > 0$ is a scalar. Then the $Y_i$ are independent, $Y_i \sim N(\mu_i, \sigma^2)$ and
$$\frac{Y^T Y}{\sigma^2} = \frac{\|Y\|^2}{\sigma^2} \sim \chi^2_n\left(\frac{\mu^T\mu}{\sigma^2}\right).$$

Proof. Let $Y_i$ be independent, $Y_i \sim N(\mu_i, \sigma^2)$. The joint moment generating function of the $Y_i$ is
$$M_Y(t) = \prod_{i=1}^n \exp\left(\mu_i t_i + \frac{1}{2}\sigma^2 t_i^2\right) = \exp\left(\mu^T t + \frac{1}{2}\sigma^2 t^T t\right)$$
which is the moment generating function of a random vector that is normally distributed with mean vector $\mu$ and covariance matrix $\sigma^2 I$. Finally, $Y^T Y = \sum_i Y_i^2$, $\mu^T\mu = \sum_i \mu_i^2$ and $Y_i/\sigma \sim N(\mu_i/\sigma, 1)$. Therefore $Y^T Y/\sigma^2 \sim \chi^2_n(\mu^T\mu/\sigma^2)$ by the definition of the noncentral $\chi^2$ distribution.

We are now ready to derive the nonsingular normal density function.

Theorem 12. Let $Y \sim N_n(\mu, \Sigma)$, with $\Sigma > 0$. Then $Y$ has density function
$$p_Y(y) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(y-\mu)^T\Sigma^{-1}(y-\mu)\right).$$

Proof. We could derive this by finding the moment generating function of this density and showing that it satisfies (3). We would also have to show that this function is a density function. We can avoid all that by starting with a random vector whose distribution we know. Let $Z \sim N_n(0, I)$, $Z = (Z_1, \ldots, Z_n)^T$. Then the $Z_i$ are independent and $Z_i \sim N(0,1)$, by Lemma 11. Therefore, the joint density of the $Z_i$ is
$$p_Z(z) = \frac{1}{(2\pi)^{n/2}}\exp\left(-\frac{1}{2}\sum_{i=1}^n z_i^2\right) = \frac{1}{(2\pi)^{n/2}}\exp\left(-\frac{1}{2}z^T z\right).$$
Let $Y = \Sigma^{1/2}Z + \mu$. By Theorem 9(c), $Y \sim N_n(\mu, \Sigma)$. Also $Z = \Sigma^{-1/2}(Y - \mu)$, and the transformation from $Z$ to $Y$ is therefore invertible. Furthermore, the Jacobian of this inverse transformation is just $|\Sigma^{-1/2}| = |\Sigma|^{-1/2}$. Hence the density of $Y$ is
$$p_Y(y) = p_Z(\Sigma^{-1/2}(y-\mu))\,|\Sigma|^{-1/2} = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(y-\mu)^T\Sigma^{-1}(y-\mu)\right).$$

We now prove a result that is useful later in the book and is also the basis for Pearson's $\chi^2$ tests.

Theorem 13. Let $Y \sim N_n(\mu, \Sigma)$, $\Sigma > 0$. Then
(a) $Y^T\Sigma^{-1}Y \sim \chi^2_n(\mu^T\Sigma^{-1}\mu)$.
(b) $(Y-\mu)^T\Sigma^{-1}(Y-\mu) \sim \chi^2_n(0)$.

Proof. (a) Let $Z = \Sigma^{-1/2}Y \sim N_n(\Sigma^{-1/2}\mu, I)$. By Lemma 11, we see that
$$Z^T Z = Y^T\Sigma^{-1}Y \sim \chi^2_n(\mu^T\Sigma^{-1}\mu).$$
(b) Follows fairly directly.

The Spherical Normal. For the first part of this book, the most important class of multivariate normal distribution is the class in which $Y \sim N_n(\mu, \sigma^2 I)$. We now show that this distribution is spherically symmetric about $\mu$. A rotation about $\mu$ is given by $X = \Gamma(Y - \mu) + \mu$, where $\Gamma$ is an orthogonal matrix (i.e., $\Gamma\Gamma^T = I$). By Theorem 9(c), $X \sim N_n(\mu, \sigma^2 I)$, so that the distribution is unchanged under rotations about $\mu$. We therefore call this normal distribution the spherical normal distribution. If $\sigma^2 = 0$, then $P(Y = \mu) = 1$. Otherwise its density function (by Theorem 12) is
$$p_Y(y) = \frac{1}{(2\pi)^{n/2}\sigma^n}\exp\left(-\frac{\|y-\mu\|^2}{2\sigma^2}\right).$$
By Lemma 11, we note that the components of $Y$ are independently normally distributed with common variance $\sigma^2$. In fact, the spherical normal distribution is the only multivariate distribution with independent components that is spherically symmetric.

Lecture Notes 2

1 Probability Inequalities

Inequalities are useful for bounding quantities that might otherwise be hard to compute. They will also be used in the theory of convergence.

Theorem 1 (The Gaussian Tail Inequality). Let $X \sim N(0,1)$. Then
$$P(|X| > \epsilon) \le \sqrt{\frac{2}{\pi}}\,\frac{e^{-\epsilon^2/2}}{\epsilon}.$$
If $X_1, \ldots, X_n \sim N(0,1)$ then
$$P(|\bar{X}_n| > \epsilon) \le \sqrt{\frac{2}{\pi}}\,\frac{1}{\sqrt{n}\,\epsilon}\,e^{-n\epsilon^2/2}.$$

Proof. The density of $X$ is $\phi(x) = (2\pi)^{-1/2}e^{-x^2/2}$. Hence
$$P(X > \epsilon) = \int_\epsilon^\infty \phi(s)\,ds \le \frac{1}{\epsilon}\int_\epsilon^\infty s\,\phi(s)\,ds = -\frac{1}{\epsilon}\int_\epsilon^\infty \phi'(s)\,ds = \frac{\phi(\epsilon)}{\epsilon}.$$
By symmetry,
$$P(|X| > \epsilon) \le \frac{2\phi(\epsilon)}{\epsilon} = \sqrt{\frac{2}{\pi}}\,\frac{e^{-\epsilon^2/2}}{\epsilon}.$$
Now let $X_1, \ldots, X_n \sim N(0,1)$. Then $\bar{X}_n = n^{-1}\sum_{i=1}^n X_i \sim N(0, 1/n)$. Thus $\bar{X}_n \stackrel{d}{=} n^{-1/2}Z$ where $Z \sim N(0,1)$, and
$$P(|\bar{X}_n| > \epsilon) = P(n^{-1/2}|Z| > \epsilon) = P(|Z| > \sqrt{n}\,\epsilon) \le \sqrt{\frac{2}{\pi}}\,\frac{1}{\sqrt{n}\,\epsilon}\,e^{-n\epsilon^2/2}.$$
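A numerical comparison of the exact tail with this bound (my addition, not from the notes; assumes NumPy/SciPy):

```python
import numpy as np
from scipy.stats import norm

for eps in [0.5, 1.0, 2.0, 3.0]:
    exact = 2 * norm.sf(eps)                                 # P(|X| > eps)
    bound = np.sqrt(2 / np.pi) * np.exp(-eps**2 / 2) / eps
    print(eps, exact, bound)   # bound >= exact, and it tightens as eps grows
```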

Theorem 2 (Markov's inequality). Let $X$ be a non-negative random variable and suppose that $E(X)$ exists. For any $t > 0$,
$$P(X > t) \le \frac{E(X)}{t}. \tag{1}$$
Proof. Since $X \ge 0$,
$$E(X) = \int_0^\infty x\,p(x)\,dx = \int_0^t x\,p(x)\,dx + \int_t^\infty x\,p(x)\,dx \ge \int_t^\infty x\,p(x)\,dx \ge t\int_t^\infty p(x)\,dx = t\,P(X > t).$$

Theorem 3 (Chebyshev's inequality). Let $\mu = E(X)$ and $\sigma^2 = \mathrm{Var}(X)$. Then
$$P(|X - \mu| \ge t) \le \frac{\sigma^2}{t^2} \quad \text{and} \quad P(|Z| \ge k) \le \frac{1}{k^2} \tag{2}$$
where $Z = (X - \mu)/\sigma$. In particular, $P(|Z| > 2) \le 1/4$ and $P(|Z| > 3) \le 1/9$.
Proof. We use Markov's inequality to conclude that
$$P(|X - \mu| \ge t) = P(|X - \mu|^2 \ge t^2) \le \frac{E(X - \mu)^2}{t^2} = \frac{\sigma^2}{t^2}.$$
The second part follows by setting $t = k\sigma$.

If $X_1, \ldots, X_n \sim \mathrm{Bernoulli}(p)$ and $\bar{X}_n = n^{-1}\sum_{i=1}^n X_i$, then $\mathrm{Var}(\bar{X}_n) = \mathrm{Var}(X_1)/n = p(1-p)/n$, and since $p(1-p) \le \frac{1}{4}$ for all $p$,
$$P(|\bar{X}_n - p| > \epsilon) \le \frac{\mathrm{Var}(\bar{X}_n)}{\epsilon^2} = \frac{p(1-p)}{n\epsilon^2} \le \frac{1}{4n\epsilon^2}.$$

2 Hoeffding's Inequality

Hoeffding's inequality is similar in spirit to Markov's inequality but it is a sharper inequality. We begin with the following important result.

Lemma 4. Suppose that $E(X) = 0$ and that $a \le X \le b$. Then
$$E(e^{tX}) \le e^{t^2(b-a)^2/8}.$$

Recall that a function $g$ is convex if for each $x, y$ and each $\alpha \in [0,1]$, $g(\alpha x + (1-\alpha)y) \le \alpha g(x) + (1-\alpha)g(y)$.

Proof. Since $a \le X \le b$, we can write $X$ as a convex combination of $a$ and $b$, namely, $X = \alpha b + (1-\alpha)a$ where $\alpha = (X-a)/(b-a)$. By the convexity of the function $y \mapsto e^{ty}$ we have
$$e^{tX} \le \alpha e^{tb} + (1-\alpha)e^{ta} = \frac{X-a}{b-a}e^{tb} + \frac{b-X}{b-a}e^{ta}.$$
Take expectations of both sides and use the fact that $E(X) = 0$ to get
$$E e^{tX} \le \frac{-a}{b-a}e^{tb} + \frac{b}{b-a}e^{ta} = e^{g(u)} \tag{3}$$
where $u = t(b-a)$, $g(u) = -\gamma u + \log(1 - \gamma + \gamma e^u)$ and $\gamma = -a/(b-a)$. Note that $g(0) = g'(0) = 0$. Also, $g''(u) \le 1/4$ for all $u > 0$. By Taylor's theorem, there is a $\xi \in (0, u)$ such that
$$g(u) = g(0) + u\,g'(0) + \frac{u^2}{2}g''(\xi) = \frac{u^2}{2}g''(\xi) \le \frac{u^2}{8} = \frac{t^2(b-a)^2}{8}.$$
Hence $E e^{tX} \le e^{g(u)} \le e^{t^2(b-a)^2/8}$.

Next, we need to use Chernoff's method.

Lemma 5. Let $X$ be a random variable. Then
$$P(X > \epsilon) \le \inf_{t \ge 0} e^{-t\epsilon}E(e^{tX}).$$
Proof. For any $t > 0$,
$$P(X > \epsilon) = P(e^X > e^\epsilon) = P(e^{tX} > e^{t\epsilon}) \le e^{-t\epsilon}E(e^{tX}).$$
Since this is true for every $t \ge 0$, the result follows.

Theorem 6 (Hoeffding's Inequality). Let $Y_1, \ldots, Y_n$ be iid observations such that $E(Y_i) = \mu$ and $a \le Y_i \le b$ where $a < 0 < b$. Then, for any $\epsilon > 0$,
$$P(|\bar{Y}_n - \mu| \ge \epsilon) \le 2e^{-2n\epsilon^2/(b-a)^2}. \tag{4}$$
Proof. Without loss of generality, we assume that $\mu = 0$. First we have
$$P(|\bar{Y}_n| \ge \epsilon) = P(\bar{Y}_n \ge \epsilon) + P(\bar{Y}_n \le -\epsilon) = P(\bar{Y}_n \ge \epsilon) + P(-\bar{Y}_n \ge \epsilon).$$

Next we use Chernoff's method. For any $t > 0$, we have, from Markov's inequality, that
$$P(\bar{Y}_n \ge \epsilon) = P\left(\sum_{i=1}^n Y_i \ge n\epsilon\right) = P\left(e^{t\sum_{i=1}^n Y_i} \ge e^{tn\epsilon}\right) \le e^{-tn\epsilon}E\left(e^{t\sum_{i=1}^n Y_i}\right) = e^{-tn\epsilon}\prod_i E(e^{tY_i}) = e^{-tn\epsilon}\left(E(e^{tY_1})\right)^n.$$
From Lemma 4, $E(e^{tY_i}) \le e^{t^2(b-a)^2/8}$. So
$$P(\bar{Y}_n \ge \epsilon) \le e^{-tn\epsilon}e^{nt^2(b-a)^2/8}.$$
This is minimized by setting $t = 4\epsilon/(b-a)^2$, giving
$$P(\bar{Y}_n \ge \epsilon) \le e^{-2n\epsilon^2/(b-a)^2}.$$
Applying the same argument to $P(-\bar{Y}_n \ge \epsilon)$ yields the result.

Example 7. Let $X_1, \ldots, X_n \sim \mathrm{Bernoulli}(p)$. Chebyshev's inequality yields
$$P(|\bar{X}_n - p| > \epsilon) \le \frac{1}{4n\epsilon^2}.$$
According to Hoeffding's inequality,
$$P(|\bar{X}_n - p| > \epsilon) \le 2e^{-2n\epsilon^2}$$
which decreases much faster.

Corollary 8. If $X_1, X_2, \ldots, X_n$ are independent with $P(a \le X_i \le b) = 1$ and common mean $\mu$, then, with probability at least $1 - \delta$,
$$|\bar{X}_n - \mu| \le \sqrt{\frac{c}{2n}\log\left(\frac{2}{\delta}\right)} \tag{5}$$
where $c = (b-a)^2$.
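A minimal sketch comparing the two bounds of Example 7 with an empirical frequency (my addition, not from the notes; assumes NumPy, and $n$, $\epsilon$, $p$ are arbitrary choices):

```python
import numpy as np

n, eps = 1000, 0.05
print(1 / (4 * n * eps**2))              # Chebyshev: 0.1
print(2 * np.exp(-2 * n * eps**2))       # Hoeffding: about 0.013

rng = np.random.default_rng(4)
p_hat = rng.binomial(n, 0.5, size=100_000) / n
print((np.abs(p_hat - 0.5) > eps).mean())   # empirical: far below both bounds
```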

3 The Bounded Difference Inequality

So far we have focused on sums of random variables. The following result extends Hoeffding's inequality to more general functions $g(x_1, \ldots, x_n)$. Here we consider McDiarmid's inequality, also known as the Bounded Difference inequality.

Theorem 9 (McDiarmid). Let $X_1, \ldots, X_n$ be independent random variables. Suppose that
$$\sup_{x_1,\ldots,x_n,\,x_i'}\left|g(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n) - g(x_1,\ldots,x_{i-1},x_i',x_{i+1},\ldots,x_n)\right| \le c_i \tag{6}$$
for $i = 1, \ldots, n$. Then
$$P\left(g(X_1,\ldots,X_n) - E(g(X_1,\ldots,X_n)) \ge \epsilon\right) \le \exp\left\{-\frac{2\epsilon^2}{\sum_{i=1}^n c_i^2}\right\}. \tag{7}$$

Proof. Let $V_i = E(g \mid X_1, \ldots, X_i) - E(g \mid X_1, \ldots, X_{i-1})$. Then
$$g(X_1,\ldots,X_n) - E(g(X_1,\ldots,X_n)) = \sum_{i=1}^n V_i$$
and $E(V_i \mid X_1, \ldots, X_{i-1}) = 0$. Using a similar argument as in Hoeffding's Lemma we have
$$E(e^{tV_i} \mid X_1, \ldots, X_{i-1}) \le e^{t^2 c_i^2/8}. \tag{8}$$
Now, for any $t > 0$,
$$P\left(g(X_1,\ldots,X_n) - E(g(X_1,\ldots,X_n)) \ge \epsilon\right) = P\left(\sum_{i=1}^n V_i \ge \epsilon\right) = P\left(e^{t\sum_{i=1}^n V_i} \ge e^{t\epsilon}\right) \le e^{-t\epsilon}E\left(e^{t\sum_{i=1}^n V_i}\right)$$
$$= e^{-t\epsilon}E\left(e^{t\sum_{i=1}^{n-1}V_i}\,E\left(e^{tV_n} \mid X_1,\ldots,X_{n-1}\right)\right) \le e^{-t\epsilon}e^{t^2c_n^2/8}E\left(e^{t\sum_{i=1}^{n-1}V_i}\right) \le \cdots \le e^{-t\epsilon}e^{t^2\sum_{i=1}^n c_i^2/8}.$$
The result follows by taking $t = 4\epsilon/\sum_{i=1}^n c_i^2$.

Example 10. If we take $g(x_1, \ldots, x_n) = n^{-1}\sum_{i=1}^n x_i$ then we get back Hoeffding's inequality.

Example 11. Suppose we throw $m$ balls into $n$ bins. What fraction of bins are empty? Let $Z$ be the number of empty bins and let $\bar{F} = Z/n$ be the fraction of empty bins. We can write $Z = \sum_{i=1}^n Z_i$ where $Z_i = 1$ if bin $i$ is empty and $Z_i = 0$ otherwise. Then
$$\mu = E(Z) = \sum_{i=1}^n E(Z_i) = n(1 - 1/n)^m = n\,e^{m\log(1-1/n)} \approx n\,e^{-m/n}$$
and $\theta = E(\bar{F}) = \mu/n \approx e^{-m/n}$. How close is $Z$ to $\mu$? Note that the $Z_i$'s are not independent so we cannot just apply Hoeffding. Instead, we proceed as follows.

Define variables $X_1, \ldots, X_m$ where $X_s = i$ if ball $s$ falls into bin $i$. Then $Z = g(X_1, \ldots, X_m)$. If we move one ball into a different bin, then $Z$ can change by at most 1. Hence, (6) holds with $c_i = 1$ and so
$$P(|Z - \mu| > t) \le 2e^{-2t^2/m}.$$
Recall that the fraction of empty bins is $\bar{F} = Z/n$ with mean $\theta = \mu/n$. We have
$$P(|\bar{F} - \theta| > t) = P(|Z - \mu| > nt) \le 2e^{-2n^2t^2/m}.$$
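Example 11 is easy to simulate. A minimal sketch (my addition, not from the notes; assumes NumPy, with arbitrary values of $n$ and $m$):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, reps = 1000, 2000, 2000            # n bins, m balls
z = np.empty(reps)
for r in range(reps):
    bins = rng.integers(0, n, size=m)    # X_s = bin of ball s
    z[r] = n - np.unique(bins).size      # Z = number of empty bins

print(z.mean(), n * (1 - 1 / n) ** m)    # E(Z) matches the formula
print(z.std())   # fluctuations are well within the O(sqrt(m)) scale McDiarmid allows
```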

4 Bounds on Expected Values

Theorem 12 (Cauchy-Schwartz inequality). If $X$ and $Y$ have finite variances then
$$E|XY| \le \sqrt{E(X^2)E(Y^2)}. \tag{9}$$
The Cauchy-Schwartz inequality can be written as $\mathrm{Cov}^2(X, Y) \le \sigma_X^2\sigma_Y^2$.

Recall that a function $g$ is convex if for each $x, y$ and each $\alpha \in [0,1]$, $g(\alpha x + (1-\alpha)y) \le \alpha g(x) + (1-\alpha)g(y)$. If $g$ is twice differentiable and $g''(x) \ge 0$ for all $x$, then $g$ is convex. It can be shown that if $g$ is convex, then $g$ lies above any line that touches $g$ at some point, called a tangent line. A function $g$ is concave if $-g$ is convex. Examples of convex functions are $g(x) = x^2$ and $g(x) = e^x$. Examples of concave functions are $g(x) = -x^2$ and $g(x) = \log x$.

Theorem 13 (Jensen's inequality). If $g$ is convex, then
$$E g(X) \ge g(E X). \tag{10}$$
If $g$ is concave, then
$$E g(X) \le g(E X). \tag{11}$$
Proof. Let $L(x) = a + bx$ be a line, tangent to $g(x)$ at the point $E(X)$. Since $g$ is convex, it lies above the line $L(x)$. So
$$E g(X) \ge E L(X) = E(a + bX) = a + b\,E(X) = L(E(X)) = g(E X).$$

Example 14. From Jensen's inequality we see that $E(X^2) \ge (E X)^2$.

Example 15 (Kullback-Leibler Distance). Define the Kullback-Leibler distance between two densities $p$ and $q$ by
$$D(p, q) = \int p(x)\log\left(\frac{p(x)}{q(x)}\right)dx.$$
Note that $D(p, p) = 0$. We will use Jensen to show that $D(p, q) \ge 0$. Let $X \sim p$. Then
$$-D(p, q) = E\left(\log\frac{q(X)}{p(X)}\right) \le \log E\left(\frac{q(X)}{p(X)}\right) = \log\int\frac{q(x)}{p(x)}p(x)\,dx = \log\int q(x)\,dx = \log(1) = 0.$$
So $-D(p, q) \le 0$ and hence $D(p, q) \ge 0$.

Example 16. It follows from Jensen's inequality that 3 types of means can be ordered. Assume that $a_1, \ldots, a_n$ are positive numbers and define the arithmetic, geometric and harmonic means as
$$a_A = \frac{1}{n}(a_1 + \cdots + a_n), \qquad a_G = (a_1\cdots a_n)^{1/n}, \qquad a_H = \frac{n}{\frac{1}{a_1} + \cdots + \frac{1}{a_n}}.$$
Then $a_H \le a_G \le a_A$.

Suppose we have an exponential bound on $P(X_n > \epsilon)$. In that case we can bound $E(X_n)$ as follows.

Theorem 17. Suppose that $X_n \ge 0$ and that for every $\epsilon > 0$,
$$P(X_n > \epsilon) \le c_1 e^{-c_2 n\epsilon^2} \tag{12}$$
for some $c_1 > 1/e$ and $c_2 > 0$. Then
$$E(X_n) \le \sqrt{\frac{C}{n}} \tag{13}$$
where $C = (1 + \log(c_1))/c_2$.

Proof. Recall that for any nonnegative random variable $Y$, $E(Y) = \int_0^\infty P(Y \ge t)\,dt$. Hence, for any $a > 0$,
$$E(X_n^2) = \int_0^\infty P(X_n^2 \ge t)\,dt = \int_0^a P(X_n^2 \ge t)\,dt + \int_a^\infty P(X_n^2 \ge t)\,dt \le a + \int_a^\infty P(X_n^2 \ge t)\,dt.$$
Equation (12) implies that $P(X_n^2 > t) \le c_1 e^{-c_2 nt}$. Hence,
$$E(X_n^2) \le a + c_1\int_a^\infty e^{-c_2 nt}\,dt = a + \frac{c_1 e^{-c_2 na}}{c_2 n}.$$

Set $a = \log(c_1)/(c_2 n)$ and conclude that
$$E(X_n^2) \le \frac{\log(c_1)}{c_2 n} + \frac{1}{c_2 n} = \frac{1 + \log(c_1)}{c_2 n}.$$
Finally, we have
$$E(X_n) \le \sqrt{E(X_n^2)} \le \sqrt{\frac{1 + \log(c_1)}{c_2 n}}.$$

Now we consider bounding the maximum of a set of random variables.

Theorem 18. Let $X_1, \ldots, X_n$ be random variables. Suppose there exists $\sigma > 0$ such that $E(e^{tX_i}) \le e^{t^2\sigma^2/2}$ for all $t > 0$. Then
$$E\left(\max_{1\le i\le n} X_i\right) \le \sigma\sqrt{2\log n}. \tag{14}$$
Proof. By Jensen's inequality,
$$\exp\left\{t\,E\left(\max_i X_i\right)\right\} \le E\left(\exp\left\{t\max_i X_i\right\}\right) = E\left(\max_i \exp\{tX_i\}\right) \le \sum_{i=1}^n E(\exp\{tX_i\}) \le n\,e^{t^2\sigma^2/2}.$$
Thus,
$$E\left(\max_i X_i\right) \le \frac{\log n}{t} + \frac{t\sigma^2}{2}.$$
The result follows by setting $t = \sqrt{2\log n}/\sigma$.

5 $O_P$ and $o_P$

In statistics, probability and machine learning, we make use of $o_P$ and $O_P$ notation. Recall first that $a_n = o(1)$ means that $a_n \to 0$ as $n \to \infty$; $a_n = o(b_n)$ means that $a_n/b_n = o(1)$; $a_n = O(1)$ means that $a_n$ is eventually bounded, that is, for all large $n$, $|a_n| \le C$ for some $C > 0$; and $a_n = O(b_n)$ means that $a_n/b_n = O(1)$. We write $a_n \asymp b_n$ if both $a_n/b_n$ and $b_n/a_n$ are eventually bounded. In computer science this is written as $a_n = \Theta(b_n)$, but we prefer using $a_n \asymp b_n$ since, in statistics, $\Theta$ often denotes a parameter space.

Now we move on to the probabilistic versions. Say that $Y_n = o_P(1)$ if, for every $\epsilon > 0$, $P(|Y_n| > \epsilon) \to 0$. Say that $Y_n = o_P(a_n)$ if $Y_n/a_n = o_P(1)$. Say that $Y_n = O_P(1)$ if, for every $\epsilon > 0$, there is a $C > 0$ such that $P(|Y_n| > C) \le \epsilon$. Say that $Y_n = O_P(a_n)$ if $Y_n/a_n = O_P(1)$.

Let's use Hoeffding's inequality to show that sample proportions are $O_P(1/\sqrt{n})$ within the true mean. Let $Y_1, \ldots, Y_n$ be coin flips, i.e. $Y_i \in \{0,1\}$. Let $p = P(Y_i = 1)$ and let
$$\hat{p}_n = \frac{1}{n}\sum_{i=1}^n Y_i.$$
We will show that $\hat{p}_n - p = o_P(1)$ and $\hat{p}_n - p = O_P(1/\sqrt{n})$.

We have that $P(|\hat{p}_n - p| > \epsilon) \le 2e^{-2n\epsilon^2} \to 0$ and so $\hat{p}_n - p = o_P(1)$. Also,
$$P\left(\sqrt{n}\,|\hat{p}_n - p| > C\right) = P\left(|\hat{p}_n - p| > \frac{C}{\sqrt{n}}\right) \le 2e^{-2C^2} < \delta$$
if we pick $C$ large enough. Hence $\sqrt{n}(\hat{p}_n - p) = O_P(1)$ and so
$$\hat{p}_n - p = O_P\left(\frac{1}{\sqrt{n}}\right).$$
Now consider $m$ coins with probabilities $p_1, \ldots, p_m$. Then
$$P\left(\max_{1\le j\le m}|\hat{p}_j - p_j| > \epsilon\right) \le \sum_{j=1}^m P(|\hat{p}_j - p_j| > \epsilon) \quad \text{(union bound)} \le \sum_{j=1}^m 2e^{-2n\epsilon^2} \quad \text{(Hoeffding)} = 2me^{-2n\epsilon^2}.$$
Suppose that $m \le e^{n\gamma}$ where $0 \le \gamma < 2\epsilon^2$. Then
$$P\left(\max_j |\hat{p}_j - p_j| > \epsilon\right) \le 2\exp\left\{-n(2\epsilon^2 - \gamma)\right\} \to 0.$$
Hence, $\max_j |\hat{p}_j - p_j| = o_P(1)$.
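What $\hat{p}_n - p = O_P(1/\sqrt{n})$ looks like in simulation (my addition, not from the notes; assumes NumPy): the upper quantiles of $\sqrt{n}\,|\hat{p}_n - p|$ stay bounded as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(6)
p, reps = 0.3, 100_000
for n in [100, 1000, 10_000]:
    p_hat = rng.binomial(n, p, reps) / n
    r = np.sqrt(n) * np.abs(p_hat - p)
    print(n, np.quantile(r, 0.99))   # stays near 1.2 for every n
```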

Say that Y = o P (a ) if, Y /a = o P (). Say that Y = O P () if, for every ɛ > 0, there is a C > 0 such that P( Y > C) ɛ. Say that Y = O P (a ) if Y /a = O P (). Let s use Hoeffdig s iequality to show that sample proportios are O P (/ ) withi the the true mea. Let Y,..., Y be coi flips i.e. Y i {0, }. Let p = P(Y i = ). Let p = Y i. i= We will show that: p p = o P () ad p p = O P (/ ). We have that P( p p > ɛ) e ɛ 0 ad so p p = o P (). Also, P( p p > C) = P ( p p > C ) e C < δ if we pick C large eough. Hece, ( p p) = O P () ad so ( ) p p = O P. Now cosider m cois with probabilities p,..., p m. The P(max p j p j > ɛ) j m P( p j p j > ɛ) uio boud j= m j= e ɛ Hoeffdig = me ɛ = exp { (ɛ log m) }. Supose that m e γ where 0 γ <. The P(max p j p j > ɛ) exp { (ɛ γ ) } 0. j Hece, max p j p j = o P (). j 9

Uiform Bouds Lecture Notes 3 Recall that, if X,..., X Beroulli(p) ad p = i= X i the, from Hoeffdig s iequality, P( p p > ɛ) e ɛ. Sometimes we wat to say more tha this. Example Suppose that X,..., X have cdf F. Let F (t) = I(X i t). i= We call F the empirical cdf. How close is F to F? That is, how big is F (t) F (t)? From Hoeffdig s iequality, P( F (t) F (t) > ɛ) e ɛ. But that is oly for oe poit t. How big is sup t F (t) F (t)? We would like a boud of the form ( ) P sup F (t) F (t) > ɛ t somethig small. Example Suppose that X,..., X P. Let P (A) = I(X i A). i= How close is P (A) to P (A)? That is, how big is P (A) P (A)? From Hoeffdig s iequality, P( P (A) P (A) > ɛ) e ɛ. But that is oly for oe set A. How big is sup A A P (A) P (A) for a class of sets A? We would like a boud of the form ( ) P sup P (A) P (A) > ɛ A A somethig small. Example 3 (Classificatio.) Suppose we observe data (X, Y ),..., (X, Y ) where Y i {0, }. Let (X, Y ) be a ew pair. Suppose we observe X. Now we wat to predict Y. A classifier h is a fuctio h(x) which takes values i {0, }. Whe we observe X we predict Y with h(x). The classificatio error, or risk, is the probability of a error: R(h) = P(Y h(x)).

The traiig error is the fractio of errors o the observed data (X, Y ),..., (X, Y ): R(h) = I(Y i h(x i )). By Hoeffdig s iequality, i= P( R(h) R(h) > ɛ) e ɛ. How do we choose a classifier? Oe way is to start with a set of classifiers H. The we defie ĥ to be the member of H that miimizes the traiig error. Thus ĥ = argmi h H R(h). A example is the set of liear classifiers. Suppose that x R d. A liear classifier has the form h(x) = of β T x 0 ad h(x) = 0 of β T x < 0 where β = (β,..., β d ) T is a set of parameters. Although ĥ miimizes R(h), it does ot miimize R(h). Let h miimize the true error R(h). A fudametal questio is: how close is R(ĥ) to R(h )? We will see later tha R(ĥ) is close to R(h ) if sup h R(h) R(h) is small. So we wat ( ) P sup R(h) R(h) > ɛ h somethig small. More geerally, we ca state out goal as follows. For ay fuctio f defie P (f) = f(x)dp (x), P (f) = f(x i ). Let F be a set of fuctios. I our first example, each f was of the form f t (x) = I(x t) ad F = {f t : t R}. We wat to boud ) ( P sup P (f) P (f) > ɛ f F We will see that the bouds we obtai have the form ( P sup P (f) P (f) > ɛ f F i= ) c κ(f)e c ɛ where c ad c are positive costats ad κ(f) is a measure of the size (or complexity) of the class F. Similarly, if A is a class of sets the we wat a boud of the form ( ) P sup P (A) P (A) > ɛ c κ(a)e c ɛ A A where P (A) = i= I(X i A). Bouds like these are called uiform bods sice they hold uiformly over a class of fuctios or over a class of sets..

Fiite Classes Let F = {f,..., f N }. Suppose that max sup f j (x) B. j N We will make use of the uio boud. Recall that ) P (A AN x N P(A j ). Let A j be the evet that P (f j ) P (f) > ɛ. From Hoeffdig s iequality, P(A j ) e ɛ /(B ). The ( ) P sup P (f) P (f) > ɛ = P(A AN ) f F N N P(A j ) e ɛ /(B ) = Ne ɛ /(B ). Thus we have show that ( ) P sup P (f) P (f) > ɛ f F κe ɛ /(B ) j= where κ = F. The same idea applies to classes of sets. Let A = {A,..., A N } be a fiite collectio of sets. By the same reasoig we have ( ) P sup P (A) P (A) > ɛ A A κe ɛ /(B ) where κ = F ad P (A) = i= I(X i A). To exted these ideas to ifiite classes like F = {f t : t R} we eed to itroduce a few more cocepts. j= j= 3 Shatterig Let A be a class of sets. Some examples are:. A = {(, t] : t R}.. A = {(a, b) : a b}. 3. A = {(a, b) (c, d) : a b c d}. 3

4. A = all discs i R d. 5. A = all rectagles i R d. 6. A = all half-spaces i R d = {x : β T x 0}. 7. A = all covex sets i R d. Let F = {x,..., x } be a fiite set. Let G be a subset of F. Say that A picks out G if A F = G for some A A. For example, let A = {(a, b) : a b}. Suppose that F = {,, 7, 8, 9} ad G = {, 7}. The A picks out G sice A F = G if we choose A = (.5, 7.5) for example. Let S(A, F ) be the umber of these subsets picked out by A. Of course S(A, F ). Example 4 Let A = {(a, b) : a b} ad F = {,, 3}. The A ca pick out:, {}, {}, {3}, {, }, {, 3}, {,, 3}. So s(a, F ) = 7. Note that 7 < 8 = 3. If F = {, 6} the A ca pick out: I this case s(a, F ) = 4 =., {}, {6}, {, 6}. We say that F is shattered if s(a, F ) = where is the umber of poits i F. Let F deote all fiite sets with elemets. Defie the shatter coefficiet Note that s (A). s (A) = sup F F s(a, F ). The followig theorem is due to Vapik ad Chervoeis. The proof is beyod the scope of the course. (If you take 0-70/36-70 you will lear the proof.) 4

Class A VC dimesio V A A = {A,..., A N } log N Itervals [a, b] o the real lie Discs i R 3 Closed balls i R d d + Rectagles i R d d Half-spaces i R d d + Covex polygos i R Covex polygos with d vertices d + Table : The VC dimesio of some classes A. Theorem 5 Let A be a class of sets. The ( ) P sup P (A) P (A) > ɛ A A 8 s (A) e ɛ /3. () This partly solves oe of our problems. But, how big ca s (A) be? Sometimes s (A) = for all. For example, let A be all polygos i the plae. The s (A) = for all. But, i may cases, we will see that s (A) = for all up to some iteger d ad the s (A) < for all > d. The Vapik-Chervoekis (VC) dimesio is d = d(a) = largest such that s (A) =. I other words, d is the size of the largest set that ca be shattered. Thus, s (A) = for all d ad s (A) < for all > d. The VC dimesios of some commo examples are summarized i Table. Now here is a iterestig questio: for > d how does s (A) behave? It is less tha but how much less? Theorem 6 (Sauer s Theorem) Suppose that A has fiite VC dimesio d. The, for all d, s(a, ) ( + ) d. () 5

We coclude that: Theorem 7 Let A be a class of sets with VC dimesio d <. The ( ) P sup P (A) P (A) > ɛ A A 8 ( + ) d e ɛ /3. (3) Example 8 Let s retur to our first example. Suppose that X,..., X have cdf F. Let F (t) = I(X i t). i= We would like to boud P(sup t F (t) F (t) > ɛ). Notice that F (t) = P (A) where A = (, t]. Let A = {(, t] : t R}. This has VC dimesio d =. So ( ) P(sup F (t) F (t) > ɛ) = P t sup P (A) P (A) > ɛ A A 8 ( + ) e ɛ /3. I fact, there is a tighter boud i this case called the DKW (Dvoretsky-Kiefer-Wolfowitz) iequality: P(sup F (t) F (t) > ɛ) e ɛ. t 4 Boudig Expectatios Eearlier we saw that we ca use expoetial bouds o probabilities to get bouds o expectatios. Let us recall how that works. Cosider a fiite collectio A = {A,..., A N }. Let We kow that Z = max j N P (A j ) P (A j ). P(Z > ɛ) me ɛ. (4) But ow we wat to boud ( ) E(Z ) = max P (A j ) P (A j ). j N We ca rewrite (4) as or, i other words, Recall that, i geeral, if Y 0 the P(Z > ɛ ) Ne ɛ. P(Z > t) Ne t. E(Y ) = 0 6 P(Y > t)dt.

Hece, for ay s, E(Z) = = 0 s 0 s + P(Z > t)dt P(Z > t)dt + s s P(Z > t)dt s + N e t dt s ( ) e s = s + N P(Z > t)dt = s + N e s. Let s = log(n)/(). The E(Z) s + N e s = log N + = log N +. Fially, we use Cauchy-Schwartz: E(Z ) ( ) log N + log N E(Z) = O. I summary: ( ) E max P (A j ) P (A j ) = O j N ( ) log N. For a sigle set A we would have E P (A) P (A) O(/ ). The boud oly icreases logarithmically with N. 7

Lecture Notes 4

1 Random Samples

Let $X_1, \ldots, X_n \sim F$. A statistic is any function $T = g(X_1, \ldots, X_n)$. Recall that the sample mean is
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$$
and the sample variance is
$$S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2.$$
Let $\mu = E(X_i)$ and $\sigma^2 = \mathrm{Var}(X_i)$. Recall that
$$E(\bar{X}_n) = \mu, \qquad \mathrm{Var}(\bar{X}_n) = \frac{\sigma^2}{n}, \qquad E(S_n^2) = \sigma^2.$$

Theorem 1. If $X_1, \ldots, X_n \sim N(\mu, \sigma^2)$ then $\bar{X}_n \sim N(\mu, \sigma^2/n)$.
Proof. We know that $M_{X_i}(s) = e^{\mu s + \sigma^2 s^2/2}$. So,
$$M_{\bar{X}_n}(t) = E(e^{t\bar{X}_n}) = E\left(e^{(t/n)\sum_{i=1}^n X_i}\right) = \prod_i E(e^{tX_i/n}) = (M_{X_1}(t/n))^n = \left(e^{(\mu t/n) + \sigma^2 t^2/(2n^2)}\right)^n = \exp\left\{\mu t + \frac{\sigma^2 t^2}{2n}\right\}$$
which is the mgf of a $N(\mu, \sigma^2/n)$.

Example 2 (Example 5.2.10). Let $Z_1, \ldots, Z_n \sim \mathrm{Cauchy}(0,1)$. Then $\bar{Z}_n \sim \mathrm{Cauchy}(0,1)$.

Lemma 3. If $X_1, \ldots, X_n \sim N(\mu, \sigma^2)$ then
$$T = \frac{\bar{X}_n - \mu}{S_n/\sqrt{n}} \sim t_{n-1} \approx N(0,1).$$

Let $X_{(1)}, \ldots, X_{(n)}$ denote the ordered values: $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$. Then $X_{(1)}, \ldots, X_{(n)}$ are called the order statistics.

Covergece Let X, X,... be a sequece of radom variables ad let X be aother radom variable. Let F deote the cdf of X ad let F deote the cdf of X.. X coverges almost surely to X, writte X a.s. X, if, for every ɛ > 0, P( lim X X < ɛ) =. (). X coverges to X i probability, writte X P X, if, for every ɛ > 0, as. I other words, X X = o P (). P( X X > ɛ) 0 () 3. X coverges to X i quadratic mea (also called covergece i L ), writte X qm X, if E(X X) 0 (3) as. 4. X coverges to X i distributio, writte X X, if at all t for which F is cotiuous. lim F (t) = F (t) (4) Covergece to a Costat. A radom variable X has a poit mass distributio if there exists a costat c such that P(X = c) =. The distributio for X is deoted by δ c ad we write X δ c. If X P δ c the we also write X P c. Similarly for the other types of covergece. Theorem 4 X as X if ad oly if, for every ɛ > 0, lim P(sup X m X ɛ) =. m Example 5 (Example 5.5.8). This example shows that covergece i probability does ot imply almost sure covergece. Let S = [0, ]. Let P be uiform o [0, ]. We draw S P. Let X(s) = s ad let X = s + I [0,] (s), X = s + I [0,/] (s), X 3 = s + I [/,] (s) X 4 = s + I [0,/3] (s), X 5 = s + I [/3,/3] (s), X 6 = s + I [/3,] (s) etc. The X P X. But, for each s, X (s) does ot coverge to X(s). Hece, X does ot coverge almost surely to X.

Example 6. Let $X_n \sim N(0, 1/n)$. Intuitively, $X_n$ is concentrating at 0 so we would like to say that $X_n$ converges to 0. Let's see if this is true. Let $F$ be the distribution function for a point mass at 0. Note that $\sqrt{n}X_n \sim N(0,1)$. Let $Z$ denote a standard normal random variable. For $t < 0$,
$$F_n(t) = P(X_n < t) = P(\sqrt{n}X_n < \sqrt{n}t) = P(Z < \sqrt{n}t) \to 0$$
since $\sqrt{n}t \to -\infty$. For $t > 0$,
$$F_n(t) = P(X_n < t) = P(\sqrt{n}X_n < \sqrt{n}t) = P(Z < \sqrt{n}t) \to 1$$
since $\sqrt{n}t \to \infty$. Hence, $F_n(t) \to F(t)$ for all $t \ne 0$, and so $X_n \rightsquigarrow 0$. Notice that $F_n(0) = 1/2 \ne F(0) = 1$, so convergence fails at $t = 0$. That doesn't matter because $t = 0$ is not a continuity point of $F$ and the definition of convergence in distribution only requires convergence at continuity points.

Now consider convergence in probability. For any $\epsilon > 0$, using Markov's inequality,
$$P(|X_n| > \epsilon) = P(X_n^2 > \epsilon^2) \le \frac{E(X_n^2)}{\epsilon^2} = \frac{1}{n\epsilon^2} \to 0$$
as $n \to \infty$. Hence, $X_n \stackrel{P}{\to} 0$.

The next theorem gives the relationship between the types of convergence.

Theorem 7. The following relationships hold:
(a) $X_n \stackrel{qm}{\to} X$ implies that $X_n \stackrel{P}{\to} X$.
(b) $X_n \stackrel{P}{\to} X$ implies that $X_n \rightsquigarrow X$.
(c) If $X_n \rightsquigarrow X$ and if $P(X = c) = 1$ for some real number $c$, then $X_n \stackrel{P}{\to} X$.
(d) $X_n \stackrel{a.s.}{\to} X$ implies $X_n \stackrel{P}{\to} X$.
In general, none of the reverse implications hold except the special case in (c).

Proof. We start by proving (a). Suppose that $X_n \stackrel{qm}{\to} X$. Fix $\epsilon > 0$. Then, using Markov's inequality,
$$P(|X_n - X| > \epsilon) = P(|X_n - X|^2 > \epsilon^2) \le \frac{E|X_n - X|^2}{\epsilon^2} \to 0.$$
Proof of (b). Fix $\epsilon > 0$ and let $x$ be a continuity point of $F$. Then
$$F_n(x) = P(X_n \le x) = P(X_n \le x, X \le x+\epsilon) + P(X_n \le x, X > x+\epsilon) \le P(X \le x+\epsilon) + P(|X_n - X| > \epsilon) = F(x+\epsilon) + P(|X_n - X| > \epsilon).$$

Also,
$$F(x-\epsilon) = P(X \le x-\epsilon) = P(X \le x-\epsilon, X_n \le x) + P(X \le x-\epsilon, X_n > x) \le F_n(x) + P(|X_n - X| > \epsilon).$$
Hence,
$$F(x-\epsilon) - P(|X_n - X| > \epsilon) \le F_n(x) \le F(x+\epsilon) + P(|X_n - X| > \epsilon).$$
Take the limit as $n \to \infty$ to conclude that
$$F(x-\epsilon) \le \liminf_n F_n(x) \le \limsup_n F_n(x) \le F(x+\epsilon).$$
This holds for all $\epsilon > 0$. Take the limit as $\epsilon \to 0$ and use the fact that $F$ is continuous at $x$ to conclude that $\lim_n F_n(x) = F(x)$.

Proof of (c). Fix $\epsilon > 0$. Then
$$P(|X_n - c| > \epsilon) = P(X_n < c-\epsilon) + P(X_n > c+\epsilon) \le P(X_n \le c-\epsilon) + P(X_n > c+\epsilon) = F_n(c-\epsilon) + 1 - F_n(c+\epsilon) \to F(c-\epsilon) + 1 - F(c+\epsilon) = 0 + 1 - 1 = 0.$$
Proof of (d). This follows from Theorem 4.

Let us now show that the reverse implications do not hold.

Convergence in probability does not imply convergence in quadratic mean. Let $U \sim \mathrm{Unif}(0,1)$ and let $X_n = \sqrt{n}\,I_{(0,1/n)}(U)$. Then
$$P(|X_n| > \epsilon) = P(\sqrt{n}\,I_{(0,1/n)}(U) > \epsilon) = P(0 \le U < 1/n) = 1/n \to 0.$$
Hence, $X_n \stackrel{P}{\to} 0$. But $E(X_n^2) = n\int_0^{1/n}du = 1$ for all $n$, so $X_n$ does not converge to 0 in quadratic mean.

Convergence in distribution does not imply convergence in probability. Let $X \sim N(0,1)$. Let $X_n = -X$ for $n = 1, 2, 3, \ldots$; hence $X_n \sim N(0,1)$. $X_n$ has the same distribution function as $X$ for all $n$ so, trivially, $\lim_n F_n(x) = F(x)$ for all $x$. Therefore, $X_n \rightsquigarrow X$. But
$$P(|X_n - X| > \epsilon) = P(|2X| > \epsilon) = P(|X| > \epsilon/2) \ne 0.$$
So $X_n$ does not converge to $X$ in probability.

The relationships between the types of convergence can be summarized as follows:
$$\text{q.m.} \Rightarrow \text{prob} \Rightarrow \text{distribution}, \qquad \text{a.s.} \Rightarrow \text{prob}.$$

Example 8. One might conjecture that if $X_n \stackrel{P}{\to} b$, then $E(X_n) \to b$. This is not true. Let $X_n$ be a random variable defined by $P(X_n = n^2) = 1/n$ and $P(X_n = 0) = 1 - 1/n$. Now, $P(|X_n| < \epsilon) = P(X_n = 0) = 1 - 1/n \to 1$. Hence, $X_n \stackrel{P}{\to} 0$. However,
$$E(X_n) = n^2\cdot\frac{1}{n} + 0\cdot\left(1 - \frac{1}{n}\right) = n.$$
Thus, $E(X_n) \to \infty$.

Example 9. Let $X_1, \ldots, X_n \sim \mathrm{Uniform}(0,1)$. Let $X_{(n)} = \max_i X_i$. First we claim that $X_{(n)} \stackrel{P}{\to} 1$. This follows since
$$P(|X_{(n)} - 1| > \epsilon) = P(X_{(n)} \le 1-\epsilon) = \prod_i P(X_i \le 1-\epsilon) = (1-\epsilon)^n \to 0.$$
Also,
$$P\left(n(1 - X_{(n)}) \le t\right) = P\left(X_{(n)} \ge 1 - \frac{t}{n}\right) = 1 - \left(1 - \frac{t}{n}\right)^n \to 1 - e^{-t}.$$
So $n(1 - X_{(n)}) \rightsquigarrow \mathrm{Exp}(1)$.

Some convergence properties are preserved under transformations.

Theorem 10. Let $X_n, X, Y_n, Y$ be random variables. Let $g$ be a continuous function.
(a) If $X_n \stackrel{P}{\to} X$ and $Y_n \stackrel{P}{\to} Y$, then $X_n + Y_n \stackrel{P}{\to} X + Y$.
(b) If $X_n \stackrel{qm}{\to} X$ and $Y_n \stackrel{qm}{\to} Y$, then $X_n + Y_n \stackrel{qm}{\to} X + Y$.
(c) If $X_n \rightsquigarrow X$ and $Y_n \rightsquigarrow c$, then $X_n + Y_n \rightsquigarrow X + c$.
(d) If $X_n \stackrel{P}{\to} X$ and $Y_n \stackrel{P}{\to} Y$, then $X_n Y_n \stackrel{P}{\to} XY$.
(e) If $X_n \rightsquigarrow X$ and $Y_n \rightsquigarrow c$, then $X_n Y_n \rightsquigarrow cX$.
(f) If $X_n \stackrel{P}{\to} X$, then $g(X_n) \stackrel{P}{\to} g(X)$.
(g) If $X_n \rightsquigarrow X$, then $g(X_n) \rightsquigarrow g(X)$.
Parts (c) and (e) are known as Slutzky's theorem. Parts (f) and (g) are known as the Continuous Mapping Theorem. It is worth noting that $X_n \rightsquigarrow X$ and $Y_n \rightsquigarrow Y$ does not in general imply that $X_n + Y_n \rightsquigarrow X + Y$.
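Example 9 can be checked by simulation, using the fact that $X_{(n)}$ has cdf $t^n$ on $[0,1]$, so $X_{(n)} \stackrel{d}{=} U^{1/n}$ (my addition, not from the notes; assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
xmax = rng.uniform(size=100_000) ** (1.0 / n)   # draws of X_(n)
w = n * (1 - xmax)

for t in [0.5, 1.0, 2.0]:
    print(t, (w <= t).mean(), 1 - np.exp(-t))   # empirical cdf vs 1 - e^{-t}
```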

3 The Law of Large Numbers

The law of large numbers (LLN) says that the mean of a large sample is close to the mean of the distribution. For example, the proportion of heads of a large number of tosses of a fair coin is expected to be close to 1/2. We now make this more precise. Let $X_1, X_2, \ldots$ be an iid sample, let $\mu = E(X_1)$ and $\sigma^2 = \mathrm{Var}(X_1)$. Recall that the sample mean is defined as $\bar{X}_n = n^{-1}\sum_{i=1}^n X_i$ and that $E(\bar{X}_n) = \mu$ and $\mathrm{Var}(\bar{X}_n) = \sigma^2/n$.

Theorem 11 (The Weak Law of Large Numbers (WLLN)). If $X_1, \ldots, X_n$ are iid, then $\bar{X}_n \stackrel{P}{\to} \mu$. Thus, $\bar{X}_n - \mu = o_P(1)$.

Interpretation of the WLLN: The distribution of $\bar{X}_n$ becomes more concentrated around $\mu$ as $n$ gets large.

Proof. Assume that $\sigma < \infty$. This is not necessary but it simplifies the proof. Using Chebyshev's inequality,
$$P(|\bar{X}_n - \mu| > \epsilon) \le \frac{\mathrm{Var}(\bar{X}_n)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2}$$
which tends to 0 as $n \to \infty$.

Theorem 12 (The Strong Law of Large Numbers). Let $X_1, \ldots, X_n$ be iid with mean $\mu$. Then $\bar{X}_n \stackrel{a.s.}{\to} \mu$. The proof is beyond the scope of this course.

4 The Central Limit Theorem

The law of large numbers says that the distribution of $\bar{X}_n$ piles up near $\mu$. This isn't enough to help us approximate probability statements about $\bar{X}_n$. For this we need the central limit theorem. Suppose that $X_1, \ldots, X_n$ are iid with mean $\mu$ and variance $\sigma^2$. The central limit theorem (CLT) says that $\bar{X}_n = n^{-1}\sum_i X_i$ has a distribution which is approximately Normal with mean $\mu$ and variance $\sigma^2/n$. This is remarkable since nothing is assumed about the distribution of $X_i$, except the existence of the mean and variance.

Theorem 13 (The Central Limit Theorem (CLT)). Let $X_1, \ldots, X_n$ be iid with mean $\mu$ and variance $\sigma^2$. Let $\bar{X}_n = n^{-1}\sum_{i=1}^n X_i$. Then
$$Z_n \equiv \frac{\bar{X}_n - \mu}{\sqrt{\mathrm{Var}(\bar{X}_n)}} = \frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \rightsquigarrow Z$$
where $Z \sim N(0,1)$. In other words,
$$\lim_{n\to\infty} P(Z_n \le z) = \Phi(z) = \int_{-\infty}^z \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx.$$
Interpretation: Probability statements about $\bar{X}_n$ can be approximated using a Normal distribution. It's the probability statements that we are approximating, not the random variable itself.

A consequence of the CLT is that
$$\bar{X}_n - \mu = O_P\left(\frac{1}{\sqrt{n}}\right).$$
In addition to $Z_n \rightsquigarrow N(0,1)$, there are several forms of notation to denote the fact that the distribution of $Z_n$ is converging to a Normal. They all mean the same thing. Here they are:
$$Z_n \approx N(0,1), \quad \bar{X}_n \approx N\left(\mu, \frac{\sigma^2}{n}\right), \quad \bar{X}_n - \mu \approx N\left(0, \frac{\sigma^2}{n}\right), \quad \sqrt{n}(\bar{X}_n - \mu) \approx N(0, \sigma^2), \quad \frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \approx N(0,1).$$
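A simulation sketch of the CLT (my addition, not from the notes; assumes NumPy/SciPy; the exponential distribution is chosen because it is skewed, with $\mu = \sigma = 1$):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n, reps = 50, 100_000
z = np.sqrt(n) * (rng.exponential(size=(reps, n)).mean(axis=1) - 1.0)

for t in [-1.0, 0.0, 1.0]:
    print(t, (z <= t).mean(), norm.cdf(t))   # already close at n = 50
```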

Recall that if $X$ is a random variable, its moment generating function (mgf) is $\psi_X(t) = E e^{tX}$. Assume in what follows that the mgf is finite in a neighborhood around $t = 0$.

Lemma 14. Let $Z_1, Z_2, \ldots$ be a sequence of random variables. Let $\psi_n$ be the mgf of $Z_n$. Let $Z$ be another random variable and denote its mgf by $\psi$. If $\psi_n(t) \to \psi(t)$ for all $t$ in some open interval around 0, then $Z_n \rightsquigarrow Z$.

Proof of the central limit theorem. Let $Y_i = (X_i - \mu)/\sigma$. Then $Z_n = n^{-1/2}\sum_i Y_i$. Let $\psi(t)$ be the mgf of $Y_i$. The mgf of $\sum_i Y_i$ is $(\psi(t))^n$ and the mgf of $Z_n$ is $[\psi(t/\sqrt{n})]^n \equiv \xi_n(t)$. Now $\psi'(0) = E(Y_1) = 0$ and $\psi''(0) = E(Y_1^2) = \mathrm{Var}(Y_1) = 1$. So,
$$\psi(t) = \psi(0) + t\psi'(0) + \frac{t^2}{2!}\psi''(0) + \frac{t^3}{3!}\psi'''(0) + \cdots = 1 + 0 + \frac{t^2}{2} + \frac{t^3}{3!}\psi'''(0) + \cdots$$
Now,
$$\xi_n(t) = \left[\psi\left(\frac{t}{\sqrt{n}}\right)\right]^n = \left[1 + \frac{t^2}{2n} + \frac{t^3}{3!\,n^{3/2}}\psi'''(0) + \cdots\right]^n = \left[1 + \frac{\frac{t^2}{2} + \frac{t^3}{3!\,\sqrt{n}}\psi'''(0) + \cdots}{n}\right]^n \to e^{t^2/2}$$
which is the mgf of a $N(0,1)$. The result follows from Lemma 14. In the last step we used the fact that if $a_n \to a$ then $\left(1 + \frac{a_n}{n}\right)^n \to e^a$.

The central limit theorem tells us that $Z_n = \sqrt{n}(\bar{X}_n - \mu)/\sigma$ is approximately $N(0,1)$. However, we rarely know $\sigma$. We can estimate $\sigma^2$ from $X_1, \ldots, X_n$ by
$$S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2.$$
This raises the following question: if we replace $\sigma$ with $S_n$, is the central limit theorem still true? The answer is yes.

Theorem 15. Assume the same conditions as the CLT. Then
$$T_n = \frac{\sqrt{n}(\bar{X}_n - \mu)}{S_n} \rightsquigarrow N(0,1).$$
Proof. We have that $T_n = Z_n W_n$ where
$$Z_n = \frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma}, \qquad W_n = \frac{\sigma}{S_n}.$$
Now $Z_n \rightsquigarrow N(0,1)$ and $W_n \stackrel{P}{\to} 1$. The result follows from Slutzky's theorem.

There is also a multivariate version of the central limit theorem. Recall that $X = (X_1, \ldots, X_k)^T$ has a multivariate Normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$ if
$$f(x) = \frac{1}{(2\pi)^{k/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right).$$
In this case we write $X \sim N(\mu, \Sigma)$.

Theorem 16 (Multivariate central limit theorem). Let $X_1, \ldots, X_n$ be iid random vectors where $X_i = (X_{1i}, \ldots, X_{ki})^T$ with mean $\mu = (\mu_1, \ldots, \mu_k)^T$ and covariance matrix $\Sigma$. Let $\bar{X} = (\bar{X}_1, \ldots, \bar{X}_k)^T$ where $\bar{X}_j = n^{-1}\sum_{i=1}^n X_{ji}$. Then
$$\sqrt{n}(\bar{X} - \mu) \rightsquigarrow N(0, \Sigma).$$

5 The Delta Method

If $Y_n$ has a limiting Normal distribution then the delta method allows us to find the limiting distribution of $g(Y_n)$ where $g$ is any smooth function.

Theorem 17 (The Delta Method). Suppose that
$$\frac{\sqrt{n}(Y_n - \mu)}{\sigma} \rightsquigarrow N(0,1)$$
and that $g$ is a differentiable function such that $g'(\mu) \ne 0$. Then
$$\frac{\sqrt{n}(g(Y_n) - g(\mu))}{|g'(\mu)|\,\sigma} \rightsquigarrow N(0,1).$$
In other words,
$$Y_n \approx N\left(\mu, \frac{\sigma^2}{n}\right) \quad \text{implies that} \quad g(Y_n) \approx N\left(g(\mu),\ (g'(\mu))^2\frac{\sigma^2}{n}\right).$$

Example 18. Let $X_1, \ldots, X_n$ be iid with finite mean $\mu$ and finite variance $\sigma^2$. By the central limit theorem, $\sqrt{n}(\bar{X}_n - \mu)/\sigma \rightsquigarrow N(0,1)$. Let $W_n = e^{\bar{X}_n}$. Thus, $W_n = g(\bar{X}_n)$ where $g(s) = e^s$. Since $g'(s) = e^s$, the delta method implies that
$$W_n \approx N\left(e^\mu,\ e^{2\mu}\frac{\sigma^2}{n}\right).$$
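A simulation check of Example 18 (my addition, not from the notes; assumes NumPy, with normal data so that $\mu$ and $\sigma$ are exact):

```python
import numpy as np

rng = np.random.default_rng(10)
n, reps, mu, sigma = 100, 50_000, 0.5, 1.0
w = np.exp(rng.normal(mu, sigma, size=(reps, n)).mean(axis=1))

print(w.mean(), np.exp(mu))                     # centered near g(mu) = e^mu
print(w.var(), np.exp(2 * mu) * sigma**2 / n)   # variance near (g'(mu))^2 sigma^2/n
```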

There is also a multivariate version of the delta method.

Theorem 19 (The Multivariate Delta Method). Suppose that $Y_n = (Y_{n1}, \ldots, Y_{nk})$ is a sequence of random vectors such that
$$\sqrt{n}(Y_n - \mu) \rightsquigarrow N(0, \Sigma).$$
Let $g : \mathbb{R}^k \to \mathbb{R}$ and let
$$\nabla g(y) = \begin{pmatrix} \frac{\partial g}{\partial y_1} \\ \vdots \\ \frac{\partial g}{\partial y_k} \end{pmatrix}.$$
Let $\nabla_\mu$ denote $\nabla g(y)$ evaluated at $y = \mu$ and assume that the elements of $\nabla_\mu$ are nonzero. Then
$$\sqrt{n}(g(Y_n) - g(\mu)) \rightsquigarrow N\left(0,\ \nabla_\mu^T\,\Sigma\,\nabla_\mu\right).$$

Example 20. Let
$$\begin{pmatrix} X_{11} \\ X_{21} \end{pmatrix}, \begin{pmatrix} X_{12} \\ X_{22} \end{pmatrix}, \ldots, \begin{pmatrix} X_{1n} \\ X_{2n} \end{pmatrix}$$
be iid random vectors with mean $\mu = (\mu_1, \mu_2)^T$ and variance $\Sigma$. Let
$$\bar{X}_1 = \frac{1}{n}\sum_{i=1}^n X_{1i}, \qquad \bar{X}_2 = \frac{1}{n}\sum_{i=1}^n X_{2i}$$
and define $Y_n = \bar{X}_1\bar{X}_2$. Thus, $Y_n = g(\bar{X}_1, \bar{X}_2)$ where $g(s_1, s_2) = s_1 s_2$. By the central limit theorem,
$$\sqrt{n}\begin{pmatrix} \bar{X}_1 - \mu_1 \\ \bar{X}_2 - \mu_2 \end{pmatrix} \rightsquigarrow N(0, \Sigma).$$
Now
$$\nabla g(s) = \begin{pmatrix} \frac{\partial g}{\partial s_1} \\ \frac{\partial g}{\partial s_2} \end{pmatrix} = \begin{pmatrix} s_2 \\ s_1 \end{pmatrix}$$
and so
$$\nabla_\mu^T\,\Sigma\,\nabla_\mu = \begin{pmatrix} \mu_2 & \mu_1 \end{pmatrix}\begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{12} & \sigma_{22} \end{pmatrix}\begin{pmatrix} \mu_2 \\ \mu_1 \end{pmatrix} = \mu_2^2\sigma_{11} + 2\mu_1\mu_2\sigma_{12} + \mu_1^2\sigma_{22}.$$
Therefore,
$$\sqrt{n}\left(\bar{X}_1\bar{X}_2 - \mu_1\mu_2\right) \rightsquigarrow N\left(0,\ \mu_2^2\sigma_{11} + 2\mu_1\mu_2\sigma_{12} + \mu_1^2\sigma_{22}\right).$$

Addendum to Lecture Notes 4

Here is the proof that
$$T_n = \frac{\sqrt{n}(\bar{X}_n - \mu)}{S_n} \rightsquigarrow N(0,1)$$
where
$$S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2.$$

Step 1. We first show that $R_n^2 \stackrel{P}{\to} \sigma^2$, where
$$R_n^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X}_n)^2.$$
Note that
$$R_n^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 - \left(\frac{1}{n}\sum_{i=1}^n X_i\right)^2.$$
Define $Y_i = X_i^2$. Then, using the LLN (law of large numbers),
$$\frac{1}{n}\sum_{i=1}^n X_i^2 = \frac{1}{n}\sum_{i=1}^n Y_i \stackrel{P}{\to} E(Y_i) = E(X_i^2) = \mu^2 + \sigma^2.$$
Next, by the LLN, $\frac{1}{n}\sum_{i=1}^n X_i \stackrel{P}{\to} \mu$. Since $g(t) = t^2$ is continuous, the continuous mapping theorem implies that
$$\left(\frac{1}{n}\sum_{i=1}^n X_i\right)^2 \stackrel{P}{\to} \mu^2.$$
Thus $R_n^2 \stackrel{P}{\to} (\mu^2 + \sigma^2) - \mu^2 = \sigma^2$.

Step 2. Note that
$$S_n^2 = \left(\frac{n}{n-1}\right)R_n^2.$$
Since $R_n^2 \stackrel{P}{\to} \sigma^2$ and $n/(n-1) \to 1$, we have that $S_n^2 \stackrel{P}{\to} \sigma^2$.

Step 3. Since $g(t) = \sqrt{t}$ is continuous (for $t \ge 0$), the continuous mapping theorem implies that $S_n \stackrel{P}{\to} \sigma$.

Step 4. Since $g(t) = t/\sigma$ is continuous, the continuous mapping theorem implies that $S_n/\sigma \stackrel{P}{\to} 1$.

Step 5. Since $g(t) = 1/t$ is continuous (for $t > 0$), the continuous mapping theorem implies that $\sigma/S_n \stackrel{P}{\to} 1$. Since convergence in probability implies convergence in distribution, $\sigma/S_n \rightsquigarrow 1$.

Step 6. Note that
$$T_n = \left(\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma}\right)\left(\frac{\sigma}{S_n}\right) \equiv V_n W_n.$$
Now $V_n \rightsquigarrow Z$ where $Z \sim N(0,1)$ by the CLT, and we showed that $W_n \stackrel{P}{\to} 1$. By Slutzky's theorem,
$$T_n = V_n W_n \rightsquigarrow 1\cdot Z = Z.$$

Lecture Notes 5

1 Statistical Models

A statistical model $\mathcal{P}$ is a collection of probability distributions (or a collection of densities). An example of a nonparametric model is
$$\mathcal{P} = \left\{p : \int (p''(x))^2\,dx < \infty\right\}.$$
A parametric model has the form
$$\mathcal{P} = \left\{p(x;\theta) : \theta \in \Theta\right\}$$
where $\Theta \subset \mathbb{R}^d$. An example is the set of Normal densities $\{p(x;\theta) = (2\pi)^{-1/2}e^{-(x-\theta)^2/2}\}$.

For now, we focus on parametric models. The model comes from assumptions. Some examples:
- Time until something fails is often modeled by an exponential distribution.
- Number of rare events is often modeled by a Poisson distribution.
- Lengths and weights are often modeled by a Normal distribution.

These models are not correct. But they might be useful. Later we consider nonparametric methods that do not assume a parametric model.

2 Statistics

Let $X_1, \ldots, X_n \sim p(x;\theta)$. Let $X^n \equiv (X_1, \ldots, X_n)$. Any function $T = T(X_1, \ldots, X_n)$ is itself a random variable which we will call a statistic. Some examples are:
- order statistics: $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$

- sample mean: $\bar{X} = \frac{1}{n}\sum_i X_i$,
- sample variance: $S^2 = \frac{1}{n-1}\sum_i (X_i - \bar{X})^2$,
- sample median: middle value of the ordered statistics,
- sample minimum: $X_{(1)}$
- sample maximum: $X_{(n)}$
- sample range: $X_{(n)} - X_{(1)}$
- sample interquartile range: $X_{(.75n)} - X_{(.25n)}$

Example 1. If $X_1, \ldots, X_n \sim \Gamma(\alpha, \beta)$, then $\bar{X}_n \sim \Gamma(n\alpha, \beta/n)$.
Proof:
$$M_{\bar{X}}(t) = E[e^{t\bar{X}}] = E[e^{(t/n)\sum_i X_i}] = \prod_i E[e^{X_i(t/n)}] = [M_{X_1}(t/n)]^n = \left[\left(\frac{1}{1 - \beta t/n}\right)^\alpha\right]^n = \left[\frac{1}{1 - \beta t/n}\right]^{n\alpha}.$$
This is the mgf of $\Gamma(n\alpha, \beta/n)$.

Example 2. If $X_1, \ldots, X_n \sim N(\mu, \sigma^2)$ then $\bar{X}_n \sim N(\mu, \sigma^2/n)$.

Example 3. If $X_1, \ldots, X_n$ are iid Cauchy(0,1), with density
$$p(x) = \frac{1}{\pi(1 + x^2)}, \quad x \in \mathbb{R},$$
then $\bar{X}_n \sim$ Cauchy(0,1).

Example 4. If $X_1, \ldots, X_n \sim N(\mu, \sigma^2)$ then
$$\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}.$$
The proof is based on the mgf.

Example 5. Let $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ be the order statistics, which means that the sample $X_1, X_2, \ldots, X_n$ has been ordered from smallest to largest: $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$. Now,
$$F_{X_{(k)}}(x) = P(X_{(k)} \le x) = P(\text{at least } k \text{ of the } X_1, \ldots, X_n \text{ are} \le x) = \sum_{j=k}^n P(\text{exactly } j \text{ of the } X_1, \ldots, X_n \text{ are} \le x) = \sum_{j=k}^n \binom{n}{j}[F_X(x)]^j[1 - F_X(x)]^{n-j}.$$
Differentiate to find the pdf (see CB, Section 5.4):
$$p_{X_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!}\,[F_X(x)]^{k-1}\,p(x)\,[1 - F_X(x)]^{n-k}.$$
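The cdf formula is a binomial tail probability, which makes it easy to check by simulation (my addition, not from the notes; assumes NumPy/SciPy, and $F$ is taken to be the Uniform(0,1) cdf):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(11)
n, k, x0, reps = 10, 3, 0.4, 200_000
xk = np.sort(rng.uniform(size=(reps, n)), axis=1)[:, k - 1]   # draws of X_(k)

empirical = (xk <= x0).mean()        # P(X_(k) <= x0)
formula = binom.sf(k - 1, n, x0)     # sum_{j=k}^n C(n,j) x0^j (1-x0)^{n-j}
print(empirical, formula)
```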

3 Sufficiency (Ch 6 CB)

We continue with parametric inference. In this section we discuss data reduction as a formal concept. Sample $X^n = (X_1, \ldots, X_n) \sim F$. Assume $F$ belongs to a family of distributions (e.g. $F$ is Normal), indexed by some parameter $\theta$. We want to learn about $\theta$ and try to summarize the data without throwing any information about $\theta$ away. If a statistic $T(X_1, \ldots, X_n)$ contains all the information about $\theta$ in the sample, we say $T$ is sufficient.

3.1 Sufficient Statistics

Definition: $T$ is sufficient for $\theta$ if the conditional distribution of $X^n \mid T$ does not depend on $\theta$. Thus,
$$f(x_1, \ldots, x_n \mid t; \theta) = f(x_1, \ldots, x_n \mid t).$$

Example 6. $X_1, \ldots, X_n \sim \mathrm{Poisson}(\theta)$. Let $T = \sum_{i=1}^n X_i$. Then
$$p_{X|T}(x^n \mid t) = P(X^n = x^n \mid T(X^n) = t) = \frac{P(X^n = x^n \text{ and } T = t)}{P(T = t)}.$$
But
$$P(X^n = x^n \text{ and } T = t) = \begin{cases} 0 & \text{if } T(x^n) \ne t \\ P(X^n = x^n) & \text{if } T(x^n) = t. \end{cases}$$
Now,
$$P(X^n = x^n) = \prod_{i=1}^n \frac{e^{-\theta}\theta^{x_i}}{x_i!} = \frac{e^{-n\theta}\,\theta^{\sum_i x_i}}{\prod_i(x_i!)} = \frac{e^{-n\theta}\,\theta^t}{\prod_i(x_i!)}.$$
Now, $T(x^n) = \sum_i x_i = t$ and so
$$P(T = t) = \frac{e^{-n\theta}(n\theta)^t}{t!}$$
since $T \sim \mathrm{Poisson}(n\theta)$. Thus,
$$\frac{P(X^n = x^n)}{P(T = t)} = \frac{t!}{x_1!\cdots x_n!}\left(\frac{1}{n}\right)^t$$
which does not depend on $\theta$. So $T = \sum_i X_i$ is a sufficient statistic for $\theta$. Other sufficient statistics are: $T = 3.7\sum_i X_i$, $T = \left(\sum_i X_i,\ X_4\right)$, and $T(X_1, \ldots, X_n) = (X_1, \ldots, X_n)$.
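Sufficiency here can be seen in simulation: conditional on $T = t$, the sample behaves like a Multinomial$(t, (1/n, \ldots, 1/n))$, so in particular $X_1 \mid T = t \sim \mathrm{Binomial}(t, 1/n)$, regardless of $\theta$. A minimal sketch (my addition, not from the notes; assumes NumPy, with arbitrary choices of $\theta$):

```python
import numpy as np

rng = np.random.default_rng(12)
n, t = 3, 4
for theta in [0.5, 2.0]:
    x = rng.poisson(theta, size=(500_000, n))
    keep = x[x.sum(axis=1) == t]                 # condition on T = t
    probs = [(keep[:, 0] == j).mean() for j in range(t + 1)]
    print(theta, np.round(probs, 3))   # same conditional law for every theta
```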

3.2 Sufficient Partitions

It is better to describe sufficiency in terms of partitions of the sample space.

Example 7. Let $X_1, X_2, X_3 \sim \mathrm{Bernoulli}(\theta)$. Let $T = \sum X_i$.

x         | t     | p(x|t)
(0, 0, 0) | t = 0 | 1
(0, 0, 1) | t = 1 | 1/3
(0, 1, 0) | t = 1 | 1/3
(1, 0, 0) | t = 1 | 1/3
(0, 1, 1) | t = 2 | 1/3
(1, 0, 1) | t = 2 | 1/3
(1, 1, 0) | t = 2 | 1/3
(1, 1, 1) | t = 3 | 1
8 elements | 4 elements |

1. A partition $B_1, \ldots, B_k$ is sufficient if $f(x \mid X \in B)$ does not depend on $\theta$.
2. A statistic $T$ induces a partition. For each $t$, $\{x : T(x) = t\}$ is one element of the partition. $T$ is sufficient if and only if the partition is sufficient.
3. Two statistics can generate the same partition: example, $\sum_i X_i$ and $3\sum_i X_i$.
4. If we split any element $B_i$ of a sufficient partition into smaller pieces, we get another sufficient partition.

Example 8. Let $X_1, X_2, X_3 \sim \mathrm{Bernoulli}(\theta)$. Then $T = X_1$ is not sufficient. Look at its partition: