THE CENTRAL LIMIT THEOREM

Daniel Rüdt
University of Toronto
March 2010
Contents

1 Introduction
2 Mathematical Background
3 The Central Limit Theorem
4 Examples
  4.1 Roulette
  4.2 Cauchy Distribution
5 Historical Background
6 Proof
  6.1 Outline of the Proof
  6.2 Lévy Continuity Theorem
  6.3 Lemmas
  6.4 Proof of the Central Limit Theorem
7 References
1 Introduction

The Central Limit Theorem (CLT) is one of the most remarkable results in probability theory because it is not only very easy to phrase, but also has very useful applications. The CLT can tell us about the distribution of large sums of random variables even if the distribution of the individual random variables is almost unknown. With this result we are able to approximate how likely it is that the arithmetic mean deviates from its expected value. I give such an example in Section 4. The CLT also provides answers to many statistical problems: using it we can test hypotheses and make statistical decisions, because we are able to determine the asymptotic distribution of certain test statistics.

As a warm-up, we try to understand what happens to the distribution of random variables if we sum them. Suppose $X$ and $Y$ are continuous and independent random variables with densities $f_X$ and $f_Y$. If we define $\mathbf{1}_A(x)$ to be the indicator function of a set $A$, then we recall that for independent random variables
$$P(X \in A, \, Y \in B) = \int_{\mathbb{R}} \int_{\mathbb{R}} \mathbf{1}_A(x) \mathbf{1}_B(y) f_X(x) f_Y(y) \, dx \, dy.$$
From this we can see that the density of $X + Y$ is given by the convolution of $f_X$ and $f_Y$:
$$P(X + Y \le z) = \int_{\mathbb{R}} \int_{\mathbb{R}} \mathbf{1}_{[x+y \le z]}(x, y) f_X(x) f_Y(y) \, dx \, dy = \int_{\mathbb{R}} \left[ \int_{-\infty}^{z-y} f_X(x) \, dx \right] f_Y(y) \, dy$$
$$= \int_{\mathbb{R}} \left[ \int_{-\infty}^{z} f_X(x - y) \, dx \right] f_Y(y) \, dy \overset{\text{Fubini}}{=} \int_{-\infty}^{z} \int_{\mathbb{R}} f_X(x - y) f_Y(y) \, dy \, dx = \int_{-\infty}^{z} (f_X * f_Y)(x) \, dx.$$
In order to visualize this result I did some calculations: I determined the density of the sum of independent, uniformly distributed random variables. The following pictures show the density of $\sum_{i=1}^n X_i$ for $X_i \sim U[-0.5, 0.5]$.
[Figures: densities of $\sum_{i=1}^n X_i$ for $n = 1$, $n = 2$, $n = 3$ and $n = 10$, followed by the density of a standard normally distributed random variable.]

If we compare these graphs to the density of a standard normally distributed random variable, we can see remarkable similarities even for small $n$. This leads us to suspect that sums of random variables somehow behave normally. The CLT makes this observation precise.
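The pictures above can be reproduced numerically. The following is a minimal sketch (all function and variable names are my own, not from this paper): it discretizes the convolution integral derived above, convolves the $U[-0.5, 0.5]$ density with itself $n-1$ times, and compares the peak of the resulting density with the peak $\frac{1}{\sigma\sqrt{2\pi}}$ of a normal density with the same variance $n/12$.

```python
import numpy as np

# A minimal sketch (names mine): numerically convolve the density of
# U[-0.5, 0.5] with itself n-1 times and compare the result's peak with
# the peak of a normal density with matching variance n/12.
def density_of_sum(n, grid_step=0.001, half_width=0.5):
    x = np.arange(-half_width, half_width + grid_step, grid_step)
    f = np.ones_like(x)  # density of U[-0.5, 0.5] is 1 on its support
    g = f.copy()
    for _ in range(n - 1):
        g = np.convolve(g, f) * grid_step  # discretized convolution integral
    return g

for n in [1, 2, 3, 10]:
    peak = density_of_sum(n).max()          # density of the sum at its mode
    sigma = np.sqrt(n / 12)                 # Var(sum) = n * Var(U[-0.5, 0.5])
    normal_peak = 1 / (sigma * np.sqrt(2 * np.pi))
    print(n, round(peak, 3), round(normal_peak, 3))
```

Already for $n = 10$ the two peaks agree to about two decimal places, matching the visual impression from the figures.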
2 Mathematical Background

In this section I just want to recall some of the basic definitions and theorems used in this paper. Elementary definitions of probability theory are assumed to be well known. To keep things simple, we just consider the sample space $\Omega = \mathbb{R}$.

Definition. A random variable $X$ is called normally distributed with parameters $\mu$ and $\sigma$ ($X \sim N(\mu, \sigma)$) if the density of the random variable is given by
$$\phi(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}.$$

Definition. If a random variable $X$ with probability measure $P_X$ has density $f$, then we define the distribution function $F_X$ as follows:
$$F_X(x) = P(X \le x) = P_X((-\infty, x]) = \int_{-\infty}^{x} dP_X = \int_{-\infty}^{x} f(t) \, dt.$$

Definition. A sequence of random variables $X_1, X_2, \ldots$ converges in distribution to a random variable $X$ if
$$\lim_{n \to \infty} F_{X_n}(x) = F_X(x)$$
for every $x \in \mathbb{R}$ at which $F_X$ is continuous. We write this as $X_n \xrightarrow{d} X$.

The following theorem can also be used to define convergence in distribution and will come in handy in proving the Central Limit Theorem:

Theorem 2.1. Suppose $X_1, X_2, \ldots$ is a sequence of random variables. Then $X_n \xrightarrow{d} X$ if and only if
$$\lim_{n \to \infty} E[f(X_n)] = E[f(X)]$$
for every bounded and continuous function $f$.

Definition. The characteristic function of a random variable $X$ is defined by
$$\varphi(t) = E\left[e^{itX}\right] = \int_{\mathbb{R}} e^{itx} \, dP_X(x).$$
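To make the last definition concrete, here is a small numerical sketch (all names are my own): the characteristic function of a standard normal random variable has the known closed form $e^{-t^2/2}$, and we can recover it approximately as the sample mean of $e^{itX}$ over simulated draws, exactly as the expectation in the definition suggests.

```python
import numpy as np

# A small numerical sketch (names mine): estimate phi(t) = E[exp(itX)]
# for X ~ N(0, 1) by averaging exp(itX) over simulated draws, and compare
# with the known closed form exp(-t^2 / 2).
rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)

for t in [0.5, 1.0, 2.0]:
    estimate = np.exp(1j * t * x).mean()   # empirical characteristic function
    exact = np.exp(-t**2 / 2)              # characteristic function of N(0, 1)
    print(t, abs(estimate - exact))        # should be small for every t
```

With 200,000 draws the Monte Carlo error is of order $10^{-3}$, so the estimate and the closed form agree closely at every $t$.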
Proposition. Every characteristic function $\varphi$ is continuous, $\varphi(0) = 1$ and $|\varphi(t)| \le 1$.

Theorem 2.2. If $X, Y$ are random variables and $\varphi_X(t) = \varphi_Y(t)$ for all $t \in \mathbb{R}$, then $X \overset{d}{=} Y$, i.e. $X$ and $Y$ have the same distribution.

Theorem 2.3. Suppose $X$ and $Y$ are independent random variables. Then
$$\varphi_{X+Y}(t) = \varphi_X(t) \cdot \varphi_Y(t) \quad \forall t \in \mathbb{R}.$$

3 The Central Limit Theorem

The Central Limit Theorem. If $\{X_n\}$ is a sequence of independent and identically distributed random variables, each having finite expectation $\mu$ and finite positive variance $\sigma^2$, then
$$\frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma \sqrt{n}} \xrightarrow{d} N(0, 1),$$
i.e. a centered and normalized sum of independent and identically distributed (i.i.d.) random variables becomes standard normally distributed as $n$ goes to infinity.

4 Examples

4.1 Roulette

It's nothing new that on average you should lose when playing roulette. Despite this it's still interesting to examine the chances of winning. The CLT gives an answer to this question:
A roulette wheel has 37 numbers in total: 18 are black, 18 are red and 1 is green. Players are allowed to bet on black or red. Assume a player is always betting \$1 on black. We define $X_i$ to be the winnings of the $i$th spin. $X_1, X_2, \ldots$ are clearly independent and
$$P(X_i = 1) = \frac{18}{37}, \quad P(X_i = -1) = \frac{19}{37}, \quad E(X_i) = -\frac{1}{37}, \quad Var(X_i) = E(X_i^2) - [E(X_i)]^2 \approx 1.$$
We want to approximate the probability that $S_n = X_1 + \cdots + X_n$ is bigger than $0$:
$$P(S_n > 0) = P\left(\frac{S_n - n\mu}{\sqrt{n}\,\sigma} > \frac{-n\mu}{\sqrt{n}\,\sigma}\right).$$
Let's say we want to play $n = 100$ times; then
$$\frac{-n\mu}{\sqrt{n}\,\sigma} = \frac{100 \cdot (1/37)}{10} = \frac{10}{37}.$$
Now the CLT states that
$$P(S_n > 0) \approx P\left(X > \frac{10}{37}\right)$$
for a standard normally distributed random variable $X$. Since $P(X > 10/37) \approx 0.39$, the chance to win money by playing roulette 100 times is about 39%.

4.2 Cauchy Distribution

The Cauchy distribution shows that the conditions of finite variance and finite expectation cannot be dropped.

Definition. A random variable $X$ is called Cauchy distributed if the density of $X$ is given by
$$f(x) = \frac{1}{\pi(1 + x^2)}.$$

Proposition. If $X$ is a Cauchy distributed random variable, then $E[X]$ and $Var[X]$ do not exist: the defining integrals diverge.
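A quick simulation illustrates what the missing moments mean in practice (all names below are my own). For a standard Cauchy random variable, $P(|X| > 1) = 1 - \frac{2}{\pi}\arctan(1) = \frac{1}{2}$, and averaging many draws does not tame the heavy tails: the sample mean keeps the same tail probability instead of concentrating around a mean value, in stark contrast to the CLT behaviour above.

```python
import numpy as np

# A simulation sketch (names mine) of why the missing moments matter:
# for a standard Cauchy, P(|X| > 1) = 1 - (2/pi)*arctan(1) = 1/2, and the
# sample mean of n draws shows the same heavy tail no matter how large n
# is, instead of concentrating around a mean value.
rng = np.random.default_rng(2)

for n in [1, 10, 500]:
    samples = rng.standard_cauchy(size=(20_000, n))
    means = samples.mean(axis=1)
    print(n, np.mean(np.abs(means) > 1))  # stays near 0.5 for every n
```

The proposition proved next explains this output exactly: the sample mean of $n$ standard Cauchy variables is itself standard Cauchy.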
Lemma. If $X$ is Cauchy distributed, then $\varphi_X(t) = e^{-|t|}$.

Proposition. If $\{X_n\}$ is a sequence of independent Cauchy distributed random variables, then $Y_n = \frac{1}{n} \sum_{i=1}^{n} X_i$ is again Cauchy distributed.

Proof. To prove this statement we compute the characteristic function of $Y_n$ and compare it to the characteristic function of a Cauchy distributed random variable. If they are the same, the claim follows by Theorem 2.2.
$$\varphi_{Y_n}(t) = \prod_{i=1}^{n} \varphi_{X_i/n}(t) = \prod_{i=1}^{n} \varphi_{X_i}\!\left(\frac{t}{n}\right) = \left( \varphi_{X_1}\!\left(\frac{t}{n}\right) \right)^n = \left( e^{-\left|\frac{t}{n}\right|} \right)^n = e^{-|t|}.$$
The first step is true because of the theorem about characteristic functions of sums of independent random variables (Theorem 2.3). The third step holds since all the random variables are identically distributed and therefore have the same characteristic function. $\square$

So, as we can see, the arithmetic mean of Cauchy distributed random variables is always Cauchy distributed, and therefore the CLT does not hold for it.

5 Historical Background

The CLT has a long and vivid history. It developed over time and there are many different versions and proofs of the CLT.

1st Chapter (1776-1829)

In 1776 Laplace published a paper about the inclination angles of meteors. In this paper he tried to calculate the probability that the actual data collected differed from the theoretical mean he had calculated. This was the first attempt to study summed random variables, and it makes clear that the CLT was motivated by statistics. His work was continued by Poisson,
who published two papers in 1824 and 1829. In these papers he tried to generalize the work of Laplace and also to make it more rigorous. At this time probability theory was still not considered a branch of real mathematics. For most mathematicians it was sufficient that the CLT worked in practice, so they did not make much effort to give real proofs.

2nd Chapter (1870-1913)

This mindset changed during the 19th century. Bessel, Dirichlet and especially Cauchy turned probability theory into a respected branch of pure mathematics. They succeeded in giving rigorous proofs, but there were still some issues: they had problems dealing with distributions with infinite support and with quantifying the rate of convergence. Moreover, the conditions for the CLT were not satisfying. Between 1870 and 1913 the famous Russian mathematicians Markov, Chebyshev and Lyapunov did a lot of research on the CLT and are considered to be its most important contributors. To prove the CLT they worked in two different directions: Markov and Chebyshev attempted to prove the CLT using the method of moments, whereas Lyapunov used characteristic functions.

3rd Chapter (1920-1937)

During that period Lindeberg, Feller and Lévy studied the CLT. Lindeberg was able to apply the CLT to random vectors and he quantified the rate of convergence. His proof was a big step, since he was able to give sufficient conditions for the CLT. Later, Feller and Lévy also succeeded in giving necessary conditions, which could be proven by the work of Cramér. The CLT, as it is known today, was born.

The CLT today

People have continued to improve the CLT. There has been much research on related theorems for dependent random variables, but the basic principles of Lindeberg, Feller and Lévy are still up to date.
6 Proof

6.1 Outline of the Proof

The idea of the proof is to use nice properties of characteristic functions. The Lévy continuity theorem states that the distribution of the limit of a sequence of random variables is uniquely determined by the limit of the corresponding characteristic functions. So all we have to do is understand the limit of the characteristic functions of our summed random variables. We will see that the characteristic functions of summed i.i.d. random variables behave very well.

The first step is to understand the Lévy continuity theorem. The second step will deal with the evaluation of the characteristic function of summed i.i.d. random variables. The final step will put everything together in a very short and smooth proof.

6.2 Lévy Continuity Theorem

The actual proof of the CLT is straightforward; the difficulty is to understand all the contributing theorems and lemmas. Since the most important one is the Lévy continuity theorem, I want to have a close look at this result.

Lévy Continuity Theorem. Suppose $X_1, X_2, \ldots$ and $X$ are random variables and $\varphi_1(t), \varphi_2(t), \ldots$ and $\varphi_X(t)$ are the corresponding characteristic functions. Then
$$X_n \xrightarrow{d} X \iff \lim_{n \to \infty} \varphi_n(t) = \varphi_X(t) \quad \forall t \in \mathbb{R}.$$

To understand how the proof works we need some more tools:

Bounded Convergence Theorem. If $X, X_1, X_2, \ldots$ are random variables, $X_n \xrightarrow{d} X$, and there is a $C \in \mathbb{R}$ with $|X_n| \le C$ for all $n \in \mathbb{N}$, then
$$\lim_{n \to \infty} E[X_n] = E[X].$$

Definition. A sequence of random variables $X_n$ is called tight if for every $\epsilon > 0$ there exists an $M \in \mathbb{R}$ s.t. $P(|X_n| > M) \le \epsilon$ for all $n \in \mathbb{N}$.

Lemma 6.2.1. If $X_n$ is tight, then there exist a subsequence $X_{n_k}$ and a random variable $X$ s.t. $X_{n_k} \xrightarrow{d} X$.
Lemma 6.2.2. If $X_n$ is tight and each subsequence of $X_n$ that converges at all converges to the same random variable $Y$, then $X_n \xrightarrow{d} Y$.

Proof of the Lévy Continuity Theorem.

"$\Rightarrow$": Since $\cos(tX_n)$ and $\sin(tX_n)$ are continuous and bounded functions, we see by Theorem 2.1 that
$$\varphi_n(t) = E[e^{itX_n}] = E[\cos(tX_n)] + iE[\sin(tX_n)] \xrightarrow{n \to \infty} E[\cos(tX)] + iE[\sin(tX)] = \varphi_X(t).$$

"$\Leftarrow$": This part of the proof proceeds in two steps. First we show that pointwise convergence of the characteristic functions implies tightness. After this, we are able to use the nice properties of tight sequences to prove the claim.

We will show tightness by estimating the following term, which will turn out to be a nice upper bound for the probability that $|X_n|$ is big. For arbitrary $\delta > 0$,
$$\delta^{-1} \int_{-\delta}^{\delta} (1 - \varphi_n(t)) \, dt = \delta^{-1} \int_{-\delta}^{\delta} \left( 1 - E[e^{itX_n}] \right) dt = \delta^{-1} \int_{-\delta}^{\delta} E[1 - e^{itX_n}] \, dt$$
$$= \delta^{-1} \int_{-\delta}^{\delta} \int_{\mathbb{R}} (1 - e^{itx}) \, dP_n(x) \, dt \overset{\text{Fubini}}{=} \delta^{-1} \int_{\mathbb{R}} \left[ \int_{-\delta}^{\delta} (1 - e^{itx}) \, dt \right] dP_n(x)$$
$$= \delta^{-1} \int_{\mathbb{R}} \left[ 2\delta - \int_{-\delta}^{\delta} (\cos(tx) + i\sin(tx)) \, dt \right] dP_n(x) = \delta^{-1} \int_{\mathbb{R}} \left[ 2\delta - \frac{2\sin(\delta x)}{x} \right] dP_n(x)$$
$$= 2 \int_{\mathbb{R}} \left( 1 - \frac{\sin(\delta x)}{\delta x} \right) dP_n(x).$$
Now that this term has a nice shape, we want to find a lower bound for it. We know that $1 - \frac{\sin(ux)}{ux} \ge 0$; this is true because $\sin x = \int_0^x \cos(y) \, dy \le x$ for $x \ge 0$, and $\frac{\sin(ux)}{ux}$ is an even function. So the integral only gets smaller if we discard an interval:
$$2 \int_{\mathbb{R}} \left( 1 - \frac{\sin(\delta x)}{\delta x} \right) dP_n(x) \ge 2 \int_{|x| \ge 2/\delta} \left( 1 - \frac{\sin(\delta x)}{\delta x} \right) dP_n(x) \ge 2 \int_{|x| \ge 2/\delta} \underbrace{\left( 1 - \frac{1}{|\delta x|} \right)}_{\ge 1/2} dP_n(x)$$
$$\ge P_n(\{x : |x| \ge 2/\delta\}) = P(|X_n| \ge 2/\delta).$$
Pick $\epsilon > 0$. Because $\varphi$ is continuous in $0$ and $\varphi(0) = 1$, we can find a $\delta > 0$ s.t.
$$|1 - \varphi(t)| \le \frac{\epsilon}{4} \quad \forall \, |t| \le \delta.$$
We can use this to estimate the following term:
$$\delta^{-1} \int_{-\delta}^{\delta} |1 - \varphi(t)| \, dt \le \delta^{-1} \cdot 2\delta \cdot \frac{\epsilon}{4} = \frac{\epsilon}{2}.$$
Since $|\varphi_n(t)| \le 1$, the bounded convergence theorem implies
$$\int_{-\delta}^{\delta} (1 - \varphi_n(t)) \, dt \xrightarrow{n \to \infty} \int_{-\delta}^{\delta} (1 - \varphi(t)) \, dt.$$
Because of that there exists an $N \in \mathbb{N}$ s.t. for all $n > N$
$$\left| \delta^{-1} \int_{-\delta}^{\delta} (1 - \varphi_n(t)) \, dt - \delta^{-1} \int_{-\delta}^{\delta} (1 - \varphi(t)) \, dt \right| \le \frac{\epsilon}{2}.$$
If we put the three bounds together, we get for all $n > N$
$$P(|X_n| \ge 2/\delta) \le \delta^{-1} \int_{-\delta}^{\delta} (1 - \varphi_n(t)) \, dt \le \left| \delta^{-1} \int_{-\delta}^{\delta} (1 - \varphi_n(t)) \, dt - \delta^{-1} \int_{-\delta}^{\delta} (1 - \varphi(t)) \, dt \right| + \delta^{-1} \int_{-\delta}^{\delta} |1 - \varphi(t)| \, dt \le \epsilon.$$
We have just proved that the probability of $|X_n|$ exceeding the value $2/\delta$ is small for all but finitely many $n$. To show tightness we just need to find a bound for the finitely many remaining cases. Because $P_n([-m, m]) \to 1$ as $m \to \infty$, we know that for each $n \in \{1, \ldots, N\}$ there exists an $m_n \in \mathbb{R}$ such that
$$P(|X_n| \ge m_n) \le \epsilon.$$
Now define $M = \max\{m_1, \ldots, m_N, 2/\delta\}$. Because of the monotonicity of distribution functions we have just proved that
$$P(|X_n| > M) \le \epsilon \quad \text{for all } n \in \mathbb{N},$$
i.e. $X_n$ is tight.

Lemma 6.2.1 tells us that $X_n$ has a convergent subsequence. Because of Lemma 6.2.2 we just need to show that every converging subsequence converges to $X$. So suppose $X_{n_k}$ converges to some random variable $Y$ in distribution. Then we know, because of "$\Rightarrow$", that $Y$ has the characteristic function $\varphi_X(t)$, and hence $Y \overset{d}{=} X$ by Theorem 2.2. Since this holds for any converging subsequence, we have shown $X_n \xrightarrow{d} X$. $\square$

6.3 Lemmas

To apply the Lévy continuity theorem to the characteristic function of summed i.i.d. random variables we need two more lemmas.

Lemma 6.3.1. Suppose $X$ is a random variable and $E[X^2] < \infty$. Then $\varphi_X(t)$ can be written as the following Taylor expansion:
$$\varphi_X(t) = 1 + itE[X] - \frac{t^2}{2} E[X^2] + o(t^2).$$
Recall that $o(t^2)$ means that $\frac{o(t^2)}{t^2} \to 0$ as $t \to 0$. The lemma can be proven by using the estimate
$$\left| e^{itX} - \left( 1 + itX - \frac{(tX)^2}{2} \right) \right| \le \min\left( \frac{|tX|^3}{3!}, \frac{2|tX|^2}{2!} \right)$$
for the error term.

Lemma 6.3.2. Suppose $c_n$ is a sequence of complex numbers with $c_n \to c$ as $n \to \infty$. Then
$$\lim_{n \to \infty} \left( 1 + \frac{c_n}{n} \right)^n = e^c.$$
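Lemma 6.3.2 is easy to check numerically. A minimal sketch (the particular sequence and all names are my own choices): take $c_n = c + \frac{1}{n}$, which converges to $c$, and watch $(1 + c_n/n)^n$ approach $e^c$.

```python
import cmath

# A minimal numerical sketch of Lemma 6.3.2 (names mine): the complex
# sequence c_n = c + 1/n converges to c, and (1 + c_n/n)^n approaches
# e^c as n grows.
c = complex(-0.5, 1.0)  # an arbitrary complex limit
for n in [10, 1000, 100_000]:
    c_n = c + 1 / n
    value = (1 + c_n / n) ** n
    print(n, abs(value - cmath.exp(c)))  # error shrinks toward 0
```

Note that this is exactly the shape of the expression appearing in the proof below, where $c_n = -t^2/2 + n \cdot o(1/n)$.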
6.4 Proof of the Central Limit Theorem

Without loss of generality we can assume that $\mu = 0$ and $\sigma^2 = 1$, because $E\left[\frac{X_n - \mu}{\sigma}\right] = 0$ and $Var\left[\frac{X_n - \mu}{\sigma}\right] = 1$. Set $S_n = X_1 + X_2 + \cdots + X_n$.

The Lévy continuity theorem tells us that it is sufficient to show that the characteristic function of our normalized sum converges to the characteristic function of a standard normally distributed random variable, i.e.
$$\varphi_{\frac{S_n}{\sqrt{n}}}(t) = \varphi_{S_n}\!\left(\frac{t}{\sqrt{n}}\right) \to e^{-t^2/2} \quad \text{as } n \to \infty.$$
Now we want to use independence by applying Theorem 2.3 to the sum. Fix $t \in \mathbb{R}$:
$$\varphi_{S_n}\!\left(\frac{t}{\sqrt{n}}\right) = \prod_{k=1}^{n} \varphi_{X_k}\!\left(\frac{t}{\sqrt{n}}\right) = \left( \varphi_{X_1}\!\left(\frac{t}{\sqrt{n}}\right) \right)^n.$$
The second equality is true because all the random variables are identically distributed and therefore have the same characteristic function. Lemma 6.3.1 and the basic fact that $Var[Y] = E[Y^2] - E[Y]^2$ for any random variable $Y$ yield
$$\left( \varphi_{X_1}\!\left(\frac{t}{\sqrt{n}}\right) \right)^n = \left( 1 + i\frac{t}{\sqrt{n}} \underbrace{E[X_1]}_{=0} - \frac{t^2}{2n} \underbrace{E[X_1^2]}_{=1} + o\!\left(\frac{t^2}{n}\right) \right)^n = \left( 1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right) \right)^n$$
$$= \left( 1 + \frac{-t^2/2 + n \cdot o(1/n)}{n} \right)^n.$$
By using Lemma 6.3.2, and because $n \cdot o(1/n) = \frac{o(1/n)}{1/n} \to 0$ as $n \to \infty$, we have identified the limit:
$$\lim_{n \to \infty} \varphi_{S_n}\!\left(\frac{t}{\sqrt{n}}\right) = e^{-t^2/2}. \qquad \square$$
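The limit just proved can be watched happening numerically. A sketch (the choice of distribution and all names are mine): for $X_i \sim U[-0.5, 0.5]$ we have $\mu = 0$, $\sigma^2 = 1/12$, and the characteristic function $\varphi(t) = \frac{\sin(t/2)}{t/2}$, so the normalized sum $\frac{S_n}{\sigma\sqrt{n}}$ has characteristic function $\varphi\!\left(\frac{t}{\sigma\sqrt{n}}\right)^n$, which should approach $e^{-t^2/2}$.

```python
import math

# A numerical sketch of the limit just proved (names mine): for
# X_i ~ U[-0.5, 0.5] (mu = 0, sigma^2 = 1/12) the characteristic function
# is phi(t) = sin(t/2) / (t/2), so the normalized sum S_n/(sigma*sqrt(n))
# has characteristic function phi(t/(sigma*sqrt(n)))^n, which should
# approach exp(-t^2/2) as n grows.
def phi_uniform(t):
    return math.sin(t / 2) / (t / 2) if t != 0 else 1.0

sigma = math.sqrt(1 / 12)
t = 1.5
for n in [1, 10, 100, 10_000]:
    value = phi_uniform(t / (sigma * math.sqrt(n))) ** n
    print(n, value, math.exp(-t ** 2 / 2))  # second column approaches third
```

This ties the proof back to the pictures of Section 1: the same sums of uniforms whose densities looked normal have characteristic functions converging to $e^{-t^2/2}$.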
7 References

P. Billingsley (1986). Probability and Measure. John Wiley and Sons, New York.
R. Durrett (2010). Probability: Theory and Examples, 4th edition. Cambridge University Press.
M. Mether (2003). The History of the Central Limit Theorem.