RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS

1. DISCRETE RANDOM VARIABLES

1.1. Definition of a Discrete Random Variable. A random variable X is said to be discrete if it can assume only a finite or countably infinite number of distinct values. A discrete random variable can be defined on either a countable or an uncountable sample space.

1.2. Probability for a discrete random variable. The probability that X takes on the value x, P(X = x), is defined as the sum of the probabilities of all sample points in Ω that are assigned the value x. We may denote P(X = x) by p(x). The expression p(x) is a function that assigns probabilities to each possible value x; thus it is often called the probability function for X.

1.3. Probability distribution for a discrete random variable. The probability distribution for a discrete random variable X can be represented by a formula, a table, or a graph which provides p(x) = P(X = x) for all x. The probability distribution for a discrete random variable assigns nonzero probabilities to only a countable number of distinct x values. Any value x not explicitly assigned a positive probability is understood to be such that P(X = x) = 0. The function f(x) = p(x) = P(X = x) for each x within the range of X is called the probability distribution of X. It is often called the probability mass function for the discrete random variable X.

1.4. Properties of the probability distribution for a discrete random variable. A function can serve as the probability distribution for a discrete random variable X if and only if its values, f(x), satisfy the conditions:
a: f(x) ≥ 0 for each value within its domain;
b: Σ_x f(x) = 1, where the summation extends over all the values x within its domain.

1.5. Examples of probability mass functions.

1.5.1. Example 1. Find a formula for the probability distribution of the total number of heads obtained in four tosses of a balanced coin. The sample space, probabilities and the value of the random variable are given in Table 1. From the table we can determine the probabilities as

P(X = 0) = 1/16,  P(X = 1) = 4/16,  P(X = 2) = 6/16,  P(X = 3) = 4/16,  P(X = 4) = 1/16

Notice that the denominators of the five fractions are the same and the numerators of the five fractions are 1, 4, 6, 4, 1. The numbers in the numerators are a set of binomial coefficients:

C(4, 0) = 1,  C(4, 1) = 4,  C(4, 2) = 6,  C(4, 3) = 4,  C(4, 4) = 1

We can then write the probability mass function as
TABLE 1. Tossing a Coin Four Times

Element of sample space | Probability | Value of random variable X (x)
HHHH | 1/16 | 4
HHHT | 1/16 | 3
HHTH | 1/16 | 3
HTHH | 1/16 | 3
THHH | 1/16 | 3
HHTT | 1/16 | 2
HTHT | 1/16 | 2
HTTH | 1/16 | 2
THHT | 1/16 | 2
THTH | 1/16 | 2
TTHH | 1/16 | 2
HTTT | 1/16 | 1
THTT | 1/16 | 1
TTHT | 1/16 | 1
TTTH | 1/16 | 1
TTTT | 1/16 | 0

f(x) = C(4, x)/16  for x = 0, 1, 2, 3, 4    (1)

Note that all the probabilities are positive and that they sum to one.

1.5.2. Example 2. Roll a red die and a green die. Let the random variable be the larger of the two numbers if they are different and the common value if they are the same. There are 36 points in the sample space. In Table 2 the outcomes are listed along with the value of the random variable associated with each outcome. The probability that X = 1 is P(X = 1) = P[(1, 1)] = 1/36. The probability that X = 2 is P(X = 2) = P[(1, 2), (2, 1), (2, 2)] = 3/36. Continuing we obtain

P(X = 1) = 1/36,  P(X = 2) = 3/36,  P(X = 3) = 5/36,
P(X = 4) = 7/36,  P(X = 5) = 9/36,  P(X = 6) = 11/36

We can then write the probability mass function as

f(x) = P(X = x) = (2x − 1)/36  for x = 1, 2, 3, 4, 5, 6    (2)

Note that all the probabilities are positive and that they sum to one.
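As a quick numerical check of equation 1, here is a minimal Python sketch (illustrative only, with helper names of our own choosing) that enumerates all 16 equally likely outcomes of four tosses and compares the counted probabilities with the binomial-coefficient formula:

```python
from itertools import product
from math import comb

# All 16 equally likely outcomes of four tosses of a balanced coin
outcomes = list(product("HT", repeat=4))

# pmf by direct counting: P(X = x) = (# outcomes with x heads) / 16
pmf_count = {x: sum(o.count("H") == x for o in outcomes) / 16 for x in range(5)}

# pmf by the formula f(x) = C(4, x) / 2^4
pmf_formula = {x: comb(4, x) / 2**4 for x in range(5)}

assert pmf_count == pmf_formula
print(pmf_formula)                # {0: 0.0625, 1: 0.25, 2: 0.375, 3: 0.25, 4: 0.0625}
print(sum(pmf_formula.values()))  # 1.0 -- the probabilities sum to one
```

The same enumeration idea verifies the dice example: over the 36 equally likely pairs, the larger of the two numbers equals x in exactly 2x − 1 cases.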
TABLE 2. Possible Outcomes of Rolling a Red Die and a Green Die. First Number in Pair is Number on Red Die. Each cell shows the outcome and, in parentheses, the value of X.

Red\Green | 1 | 2 | 3 | 4 | 5 | 6
1 | 11 (1) | 12 (2) | 13 (3) | 14 (4) | 15 (5) | 16 (6)
2 | 21 (2) | 22 (2) | 23 (3) | 24 (4) | 25 (5) | 26 (6)
3 | 31 (3) | 32 (3) | 33 (3) | 34 (4) | 35 (5) | 36 (6)
4 | 41 (4) | 42 (4) | 43 (4) | 44 (4) | 45 (5) | 46 (6)
5 | 51 (5) | 52 (5) | 53 (5) | 54 (5) | 55 (5) | 56 (6)
6 | 61 (6) | 62 (6) | 63 (6) | 64 (6) | 65 (6) | 66 (6)

1.6. Cumulative Distribution Functions.

1.6.1. Definition of a Cumulative Distribution Function. If X is a discrete random variable, the function given by

F(x) = P(X ≤ x) = Σ_{t ≤ x} f(t)  for −∞ < x < ∞    (3)

where f(t) is the value of the probability distribution of X at t, is called the cumulative distribution function of X. The function F(x) is also called the distribution function of X.

1.6.2. Properties of a Cumulative Distribution Function. The values F(x) of the distribution function of a discrete random variable X satisfy the conditions:
1: F(−∞) = 0 and F(∞) = 1;
2: if a < b, then F(a) ≤ F(b) for any real numbers a and b.

1.6.3. First example of a cumulative distribution function. Consider tossing a coin four times. The possible outcomes are contained in Table 1 and the values of f in equation 1. From this we can determine the cumulative distribution function as follows:

F(0) = f(0) = 1/16
F(1) = f(0) + f(1) = 1/16 + 4/16 = 5/16
F(2) = f(0) + f(1) + f(2) = 1/16 + 4/16 + 6/16 = 11/16
F(3) = f(0) + f(1) + f(2) + f(3) = 1/16 + 4/16 + 6/16 + 4/16 = 15/16
F(4) = f(0) + f(1) + f(2) + f(3) + f(4) = 1/16 + 4/16 + 6/16 + 4/16 + 1/16 = 1

We can write this in an alternative fashion as
F(x) = 0 for x < 0
     = 1/16 for 0 ≤ x < 1
     = 5/16 for 1 ≤ x < 2
     = 11/16 for 2 ≤ x < 3
     = 15/16 for 3 ≤ x < 4
     = 1 for x ≥ 4

1.6.4. Second example of a cumulative distribution function. Consider a group of N individuals, M of whom are female. Then N − M are male. Now pick n individuals from this population without replacement. Let x be the number of females chosen. There are C(M, x) ways of choosing x females from the M in the population and C(N − M, n − x) ways of choosing n − x of the N − M males. Therefore, there are C(M, x) C(N − M, n − x) ways of choosing x females and n − x males. Because there are C(N, n) ways of choosing n of the N elements in the set, and because we will assume that they all are equally likely, the probability of x females in a sample of size n is given by

f(x) = P(X = x) = C(M, x) C(N − M, n − x) / C(N, n)  for x = 0, 1, 2, 3, ..., n, with x ≤ M and n − x ≤ N − M.    (4)

For this discrete distribution we compute the cumulative distribution by adding up the appropriate terms of the probability mass function:

F(0) = f(0)
F(1) = f(0) + f(1)
F(2) = f(0) + f(1) + f(2)
F(3) = f(0) + f(1) + f(2) + f(3)
...
F(n) = f(0) + f(1) + f(2) + f(3) + ... + f(n)    (5)

Consider a population with four individuals, three of whom are female, denoted respectively by A, B, C, D, where A is a male and the others are females. Then consider drawing two from this population. Based on equation 4 there should be C(4, 2) = 6 elements in the sample space. The sample space is given by

TABLE 3. Drawing Two Individuals from a Population of Four where Order Does Not Matter (no replacement)

Element of sample space | Probability | Value of random variable X
AB | 1/6 | 1
AC | 1/6 | 1
AD | 1/6 | 1
BC | 1/6 | 2
BD | 1/6 | 2
CD | 1/6 | 2
We can see that the probability of two females is 1/2. We can also obtain this using the formula as follows:

f(2) = P(X = 2) = C(3, 2) C(1, 0) / C(4, 2) = (3)(1)/6 = 1/2    (6)

Similarly,

f(1) = P(X = 1) = C(3, 1) C(1, 1) / C(4, 2) = (3)(1)/6 = 1/2    (7)

We cannot use the formula to compute f(0), because x = 0 would require choosing n − x = 2 males while there is only N − M = 4 − 3 = 1; f(0) is then equal to 0. We can then compute the cumulative distribution function as

F(0) = f(0) = 0
F(1) = f(0) + f(1) = 1/2
F(2) = f(0) + f(1) + f(2) = 1    (8)

1.7. Expected value.

1.7.1. Definition of expected value. Let X be a discrete random variable with probability function p(x). Then the expected value of X, E(X), is defined to be

E(X) = Σ_x x p(x)    (9)

if it exists. The expected value exists if

Σ_x |x| p(x) < ∞    (10)

The expected value is a kind of weighted average. It is also sometimes referred to as the population mean of the random variable and denoted µ_X.

1.7.2. First example computing an expected value. Toss a die that has six sides. Observe the number that comes up. The probability mass or frequency function is given by

p(x) = P(X = x) = 1/6 for x = 1, 2, 3, 4, 5, 6, and 0 otherwise.    (11)

We compute the expected value as

E(X) = Σ_{x ∈ X} x p_X(x) = Σ_{i=1}^{6} i (1/6) = (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 3.5    (12)
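Returning to the sampling example of section 1.6.4, the formula in equation 4 is easy to check numerically. The sketch below (the helper name hypergeom_pmf is our own) guards the cases where the binomial coefficients would be undefined, which is exactly why f(0) = 0 in that example:

```python
from math import comb

def hypergeom_pmf(x, N, M, n):
    """P(X = x) females in a draw of n, without replacement,
    from N individuals of whom M are female (equation 4)."""
    if x < 0 or x > min(n, M) or (n - x) > (N - M):
        return 0.0          # impossible draws get probability zero
    return comb(M, x) * comb(N - M, n - x) / comb(N, n)

# Population of four (three females), draw two, as in Table 3
print(hypergeom_pmf(0, N=4, M=3, n=2))  # 0.0  (cannot draw two males)
print(hypergeom_pmf(1, N=4, M=3, n=2))  # 0.5
print(hypergeom_pmf(2, N=4, M=3, n=2))  # 0.5

# Cumulative distribution F(x) = f(0) + ... + f(x), as in equation 8
print([sum(hypergeom_pmf(j, 4, 3, 2) for j in range(x + 1)) for x in range(3)])
# [0.0, 0.5, 1.0]
```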
1.7.3. Second example computing an expected value. Consider a group of 12 television sets, two of which have white cords and ten of which have black cords. Suppose three of them are chosen at random and shipped to a care center. What are the probabilities that zero, one, or two of the sets with white cords are shipped? What is the expected number with white cords that will be shipped?

It is clear that x of the two sets with white cords and 3 − x of the ten sets with black cords can be chosen in C(2, x) C(10, 3 − x) ways. The three sets can be chosen in C(12, 3) ways. So we have a probability mass function as follows:

f(x) = P(X = x) = C(2, x) C(10, 3 − x) / C(12, 3)  for x = 0, 1, 2    (13)

For example,

f(0) = P(X = 0) = C(2, 0) C(10, 3) / C(12, 3) = 120/220 = 6/11    (14)

We collect this information as in Table 4.

TABLE 4. Probabilities for Television Problem

x | 0 | 1 | 2
f(x) | 6/11 | 9/22 | 1/22
F(x) | 6/11 | 21/22 | 1

We compute the expected value as

E(X) = Σ_{x ∈ X} x p_X(x) = (0)(6/11) + (1)(9/22) + (2)(1/22) = 11/22 = 1/2    (15)

1.8. Expected value of a function of a random variable.

Theorem 1. Let X be a discrete random variable with probability mass function p(x) and g(X) be a real-valued function of X. Then the expected value of g(X) is given by

E[g(X)] = Σ_x g(x) p(x).    (16)

Proof for case of finite values of X. Consider the case where the random variable X takes on a finite number of values x_1, x_2, x_3, ..., x_n. The function g(x) may not be one-to-one (different values of x_i may yield the same value of g(x_i)). Suppose that g(X) takes on m different values (m ≤ n). It follows that g(X) is also a random variable with possible values g_1, g_2, g_3, ..., g_m and probability distribution

P[g(X) = g_i] = Σ_{x_j such that g(x_j) = g_i} p(x_j) = p*(g_i)    (17)
for all i = 1, 2, ..., m. Here p*(g_i) is the probability that the experiment results in a value for the function g of the initial random variable equal to g_i. Using the definition of expected value in equation 9 we obtain

E[g(X)] = Σ_{i=1}^{m} g_i p*(g_i).    (18)

Now substitute in to obtain

E[g(X)] = Σ_{i=1}^{m} g_i p*(g_i)
        = Σ_{i=1}^{m} g_i [ Σ_{x_j : g(x_j) = g_i} p(x_j) ]
        = Σ_{i=1}^{m} Σ_{x_j : g(x_j) = g_i} g(x_j) p(x_j)
        = Σ_{j=1}^{n} g(x_j) p(x_j).    (19)

1.9. Properties of mathematical expectation.

1.9.1. Constants.

Theorem 2. Let X be a discrete random variable with probability function p(x) and c be a constant. Then E(c) = c.

Proof. Consider the function g(x) = c. Then by Theorem 1,

E[c] = Σ_x c p(x) = c Σ_x p(x)    (20)

But by property 1.4b we have Σ_x p(x) = 1, and hence

E(c) = c(1) = c.    (21)
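Theorem 1 says that E[g(X)] can be computed directly from the pmf of X, without first deriving the distribution of g(X). Here is a small Python sketch of that recipe using the fair-die pmf from section 1.7.2 (the helper name expect is our own illustrative choice):

```python
# pmf of a fair die: p(x) = 1/6 for x = 1, ..., 6
p = {x: 1/6 for x in range(1, 7)}

def expect(g):
    """E[g(X)] = sum over x of g(x) p(x), as in Theorem 1."""
    return sum(g(x) * px for x, px in p.items())

print(expect(lambda x: x))       # E[X]   = 3.5
print(expect(lambda x: x ** 2))  # E[X^2] = 91/6 ≈ 15.1667 (used again in section 1.10.4)
print(expect(lambda x: 7.0))     # E[c]   = 7.0, illustrating Theorem 2
```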
1.9.2. Constants multiplied by functions of random variables.

Theorem 3. Let X be a discrete random variable with probability function p(x), g(X) be a function of X, and let c be a constant. Then

E[c g(X)] = c E[g(X)]    (22)

Proof. By Theorem 1 we have

E[c g(X)] = Σ_x c g(x) p(x) = c Σ_x g(x) p(x) = c E[g(X)]    (23)

1.9.3. Sums of functions of random variables.

Theorem 4. Let X be a discrete random variable with probability function p(x), and let g_1(X), g_2(X), g_3(X), ..., g_k(X) be k functions of X. Then

E[g_1(X) + g_2(X) + g_3(X) + ... + g_k(X)] = E[g_1(X)] + E[g_2(X)] + ... + E[g_k(X)]    (24)

Proof for the case of k = 2. By Theorem 1 we have

E[g_1(X) + g_2(X)] = Σ_x [g_1(x) + g_2(x)] p(x)
                   = Σ_x g_1(x) p(x) + Σ_x g_2(x) p(x)
                   = E[g_1(X)] + E[g_2(X)].    (25)

1.10. Variance of a random variable.

1.10.1. Definition of variance. The variance of a random variable X is defined to be the expected value of (X − µ)². That is,

V(X) = E[(X − µ)²]    (26)

The standard deviation of X is the positive square root of V(X).

1.10.2. Example. Consider a random variable with the following probability distribution.

TABLE 5. Probability Distribution for X

x | p(x)
0 | 1/8
1 | 1/4
2 | 3/8
3 | 1/4
We can compute the expected value as

µ = E(X) = Σ_{x=0}^{3} x p_X(x) = (0)(1/8) + (1)(1/4) + (2)(3/8) + (3)(1/4) = 1.75    (27)

We compute the variance as

σ² = E[(X − µ)²] = Σ_{x=0}^{3} (x − µ)² p_X(x)
   = (0 − 1.75)²(1/8) + (1 − 1.75)²(1/4) + (2 − 1.75)²(3/8) + (3 − 1.75)²(1/4) = 0.9375

and the standard deviation as

σ = +√σ² = √0.9375 ≈ 0.97.

1.10.3. Alternative formula for the variance.

Theorem 5. Let X be a discrete random variable with probability function p_X(x); then

V(X) = σ² = E[(X − µ)²] = E(X²) − µ²    (28)

Proof. First write out the first part of equation 28 as follows:

V(X) = σ² = E[(X − µ)²] = E(X² − 2µX + µ²) = E(X²) − E(2µX) + E(µ²)    (29)

where the last step follows from Theorem 4. Note that µ is a constant; then apply Theorems 3 and 2 to the second and third terms in equation 29 to obtain

V(X) = σ² = E(X²) − 2µ E(X) + µ²    (30)

Then making the substitution that E(X) = µ, we obtain

V(X) = σ² = E(X²) − µ²    (31)

1.10.4. Example. Die toss. Toss a die that has six sides. Observe the number that comes up. The probability mass or frequency function is given by

p(x) = P(X = x) = 1/6 for x = 1, 2, 3, 4, 5, 6, and 0 otherwise.    (32)

We compute the expected value as
E(X) = Σ_{x ∈ X} x p_X(x) = Σ_{i=1}^{6} i (1/6) = (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 3.5    (33)

We compute the variance by first computing E(X²) as follows:

E(X²) = Σ_{x ∈ X} x² p_X(x) = Σ_{i=1}^{6} i² (1/6) = (1 + 4 + 9 + 16 + 25 + 36)/6 = 91/6    (34)

We can then compute the variance using the formula Var(X) = E(X²) − E²(X) and the fact that E(X) = 21/6 from equation 33:

Var(X) = E(X²) − E²(X) = 91/6 − (21/6)² = 546/36 − 441/36 = 105/36 = 35/12 ≈ 2.9167    (35)
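The shortcut in Theorem 5 is easy to confirm numerically. This sketch recomputes the Table 5 example of section 1.10.2 both from the definition and from E(X²) − µ², and then reproduces 35/12 for the die:

```python
# Distribution from Table 5
p = {0: 1/8, 1: 1/4, 2: 3/8, 3: 1/4}

mu = sum(x * px for x, px in p.items())                     # 1.75
var_def = sum((x - mu) ** 2 * px for x, px in p.items())    # E[(X - mu)^2]
var_alt = sum(x ** 2 * px for x, px in p.items()) - mu**2   # E[X^2] - mu^2 (Theorem 5)

print(mu, var_def, var_alt)   # 1.75 0.9375 0.9375

# Fair die: Var(X) = 91/6 - 3.5^2 = 35/12
die = {x: 1/6 for x in range(1, 7)}
print(sum(x**2 * px for x, px in die.items()) - 3.5**2)     # 2.9166...
```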
2. THE DISTRIBUTION OF RANDOM VARIABLES IN GENERAL

2.1. Cumulative distribution function. The cumulative distribution function (cdf) of a random variable X, denoted by F_X(·), is defined to be the function with domain the real line and range the interval [0, 1] which satisfies F_X(x) = P_X[X ≤ x] = P[{ω : X(ω) ≤ x}] for every real number x. F has the following properties:

F_X(−∞) = lim_{x → −∞} F_X(x) = 0,  F_X(+∞) = lim_{x → +∞} F_X(x) = 1,    (36a)
F_X(a) ≤ F_X(b) for a < b (F_X is a nondecreasing function of x),    (36b)
lim_{0 < h → 0} F_X(x + h) = F_X(x) (F_X is continuous from the right).    (36c)

2.2. Example of a cumulative distribution function. Consider the following function:

F_X(x) = 1/(1 + e^{−x})    (37)

Check condition 36a as follows:

lim_{x → −∞} F_X(x) = lim_{x → −∞} 1/(1 + e^{−x}) = 0,
lim_{x → +∞} F_X(x) = lim_{x → +∞} 1/(1 + e^{−x}) = 1.    (38)

To check condition 36b, differentiate the cdf as follows:

dF_X(x)/dx = d/dx [1/(1 + e^{−x})] = e^{−x}/(1 + e^{−x})² > 0    (39)

Condition 36c is satisfied because F_X(x) is a continuous function.

2.3. Discrete and continuous random variables.

2.3.1. Discrete random variable. A random variable X will be said to be discrete if the range of X is countable, that is, if it can assume only a finite or countably infinite number of values. Alternatively, a random variable is discrete if F_X(x) is a step function of x.

2.3.2. Continuous random variable. A random variable X is continuous if F_X(x) is a continuous function of x.

2.4. Frequency (probability mass) function of a discrete random variable.

2.4.1. Definition of a frequency (discrete density) function. If X is a discrete random variable with the distinct values x_1, x_2, ..., x_n, ..., then the function denoted by p(·) and defined by

p(x) = P[X = x_j] if x = x_j, j = 1, 2, ..., n, ..., and p(x) = 0 if x ≠ x_j    (40)

is defined to be the frequency, discrete density, or probability mass function of X. We will often write f(x) for p(x) to denote frequency as compared to probability. A discrete probability distribution on R^k is a probability measure P such that
Σ_{i=1}^{∞} P({x_i}) = 1    (41)

for some sequence of points x_1, x_2, ... in R^k, i.e. the sequence of points that occur as an outcome of the experiment. Given the definition of the frequency function in equation 40, we can also say that any non-negative function p on R^k that vanishes except on a sequence x_1, x_2, ..., x_n, ... of vectors and that satisfies Σ_i p(x_i) = 1 defines a unique probability distribution by the relation

P(A) = Σ_{x_i ∈ A} p(x_i)    (42)

2.4.2. Properties of discrete density functions. As defined in section 2.4, a probability mass function must satisfy

p(x_j) > 0 for j = 1, 2, ...,    (43a)
p(x) = 0 for x ≠ x_j; j = 1, 2, ...,    (43b)
Σ_j p(x_j) = 1.    (43c)

2.4.3. Example of a discrete density function. Consider a probability model where there are two possible outcomes to a single action (say heads and tails) and consider repeating this action several times until one of the outcomes occurs. Let the random variable be the number of actions required to obtain a particular outcome (say heads). Let p be the probability that an outcome is a head and (1 − p) the probability of a tail. Then to obtain the first head on the xth toss, we need to have a tail on each of the previous x − 1 tosses. So the probability of the first head occurring on the xth toss is given by

p(x) = P(X = x) = (1 − p)^{x−1} p for x = 1, 2, ..., and 0 otherwise.    (44)

For example, with p = 1/2, the probability that it takes 4 tosses to get a head is 1/16, while the probability it takes 2 tosses is 1/4.

2.4.4. Example of a discrete density function. Consider tossing a die. The sample space is {1, 2, 3, 4, 5, 6}. The elements are {1}, {2}, .... The frequency function is given by

p(x) = P(X = x) = 1/6 for x = 1, 2, 3, 4, 5, 6, and 0 otherwise.    (45)

The density function is represented in Figure 1.
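Before turning to continuous densities, here is a quick check of the geometric mass function in equation 44, with p = 1/2 as in the coin example above (the function name geom_pmf is our own):

```python
def geom_pmf(x, p):
    """P(X = x): first head on toss x, i.e. x - 1 tails then a head (equation 44)."""
    return (1 - p) ** (x - 1) * p

p = 0.5
print(geom_pmf(4, p))   # 0.0625 = 1/16
print(geom_pmf(2, p))   # 0.25   = 1/4

# The probabilities form a geometric series summing to one
print(sum(geom_pmf(x, p) for x in range(1, 100)))   # ≈ 1.0
```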
[FIGURE 1. Frequency Function for Tossing a Die]

2.5. Probability density function of a continuous random variable.

2.5.1. Alternative definition of continuous random variable. In section 2.3.2, we defined a random variable to be continuous if F_X(x) is a continuous function of x. We also say that a random variable X is continuous if there exists a function f(·) such that

F_X(x) = ∫_{−∞}^{x} f(u) du    (46)

for every real number x. The integral in equation 46 is a Riemann integral evaluated from −∞ to a real number x.

2.5.2. Definition of a probability density (frequency) function (pdf). The probability density function, f_X(x), of a continuous random variable X is the function f(·) that satisfies

F_X(x) = ∫_{−∞}^{x} f_X(u) du    (47)

2.5.3. Properties of continuous density functions.

f(x) ≥ 0 for all x,    (48a)
∫_{−∞}^{∞} f(x) dx = 1.    (48b)

Analogous to equation 42, we can write in the continuous case

P(X ∈ A) = ∫_A f_X(x) dx    (49)
where the integral is interpreted in the sense of Lebesgue.

Theorem 6. For a density function f(x) defined over the set of all real numbers, the following holds for any real constants a and b with a ≤ b:

P(a ≤ X ≤ b) = ∫_a^b f_X(x) dx    (50)

Also note that for a continuous random variable X the following are equivalent:

P(a ≤ X ≤ b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a < X < b)    (51)

Note that we can obtain the various probabilities by integrating the area under the density function, as seen in Figure 2.

[FIGURE 2. Area under the Density Function as Probability]

2.5.4. Example of a continuous density function. Consider the following function:

f(x) = k e^{−3x} for x > 0, and 0 elsewhere.    (52)

First we must find the value of k that makes this a valid density function. Given the condition in equation 48b, we must have

∫_{−∞}^{∞} f(x) dx = ∫_0^{∞} k e^{−3x} dx = 1    (53)
Integrate the second term to obtain

∫_0^{∞} k e^{−3x} dx = k lim_{t → ∞} [−e^{−3x}/3]_0^t = k/3    (54)

Given that this must be equal to one, we obtain

k/3 = 1  ⟹  k = 3    (55)

The density is then given by

f(x) = 3 e^{−3x} for x > 0, and 0 elsewhere.    (56)

Now find the probability that 1 ≤ X ≤ 2:

P(1 ≤ X ≤ 2) = ∫_1^2 3 e^{−3x} dx = [−e^{−3x}]_1^2 = −e^{−6} + e^{−3} = −0.00247875 + 0.0497871 = 0.0473083    (57)

2.5.5. Example of a continuous density function. Let X have p.d.f.

f(x) = x e^{−x} for 0 ≤ x < ∞, and 0 elsewhere.    (58)

This density function is shown in Figure 3. We can find the probability that 1 ≤ X ≤ 2 by integration:

P(1 ≤ X ≤ 2) = ∫_1^2 x e^{−x} dx    (59)

First integrate the expression on the right by parts, letting u = x and dv = e^{−x} dx. Then du = dx and v = −e^{−x}. We then have

P(1 ≤ X ≤ 2) = [−x e^{−x}]_1^2 + ∫_1^2 e^{−x} dx
             = −2e^{−2} + e^{−1} + [−e^{−x}]_1^2
             = −2e^{−2} + e^{−1} − e^{−2} + e^{−1}
             = 2e^{−1} − 3e^{−2} = 0.735759 − 0.406006 = 0.329753    (60)
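Both probabilities just computed can be confirmed by numerical integration. A minimal sketch using scipy.integrate.quad (any quadrature routine would do):

```python
from math import exp
from scipy.integrate import quad

# Density of section 2.5.4: f(x) = 3 e^{-3x} for x > 0
val1, _ = quad(lambda x: 3 * exp(-3 * x), 1, 2)
print(val1, exp(-3) - exp(-6))           # both ≈ 0.047308

# Density of section 2.5.5: f(x) = x e^{-x} for x >= 0
val2, _ = quad(lambda x: x * exp(-x), 1, 2)
print(val2, 2 * exp(-1) - 3 * exp(-2))   # both ≈ 0.329753
```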
[FIGURE 3. Graph of Density Function f(x) = x e^{−x}]

This is represented by the area between the lines in Figure 4. We can also find the distribution function in this case:

F(x) = ∫_0^x t e^{−t} dt    (61)

Make the u dv substitution as before to obtain

F(x) = [−t e^{−t}]_0^x + ∫_0^x e^{−t} dt
     = [−t e^{−t}]_0^x + [−e^{−t}]_0^x
     = −x e^{−x} − (e^{−x} − 1)
     = 1 − e^{−x}(1 + x)    (62)

The distribution function is shown in Figure 5. Now consider the probability that 1 ≤ X ≤ 2:
[FIGURE 4. P(1 ≤ X ≤ 2)]

P(1 ≤ X ≤ 2) = F(2) − F(1) = [1 − e^{−2}(2 + 1)] − [1 − e^{−1}(1 + 1)]
             = 2e^{−1} − 3e^{−2} = 0.735759 − 0.406006 = 0.329753    (63)

We can see this as the difference in the values of F(x) at 1 and at 2 in Figure 6.

2.5.6. Example 3 of a continuous density function. Consider the normal density function given by

f(x; µ, σ²) = (1/√(2πσ²)) e^{−(1/2)((x − µ)/σ)²}    (64)

where µ and σ² are parameters of the function. The shape and location of the density function depend on the parameters µ and σ². In Figure 7 the density is drawn for µ = 0 and two values of σ.

2.5.7. Example 4 of a continuous density function. Consider a random variable with density function given by

f(x) = (p + 1) x^p for 0 ≤ x ≤ 1, and 0 otherwise    (65)
where p is greater than −1. For example, if p = 0, then f(x) = 1; if p = 1, then f(x) = 2x; and so on. The density function with p = 1 is shown in Figure 8. The distribution function with p = 1 is shown in Figure 9.

[FIGURE 5. Graph of Distribution Function of Density Function x e^{−x}]

2.6. Expected value.

2.6.1. Expectation of a single random variable. Let X be a random variable with density f(x). The expected value of the random variable, denoted E(X), is defined to be

E(X) = ∫_{−∞}^{∞} x f(x) dx if X is continuous, and E(X) = Σ_{x ∈ X} x p_X(x) if X is discrete,    (66)

provided the sum or integral is defined. The expected value is a kind of weighted average. It is also sometimes referred to as the population mean of the random variable and denoted µ_X.

2.6.2. Expectation of a function of a single random variable. Let X be a random variable with density f(x). The expected value of a function g(·) of the random variable, denoted E(g(X)), is defined to be

E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx    (67)

if the integral is defined. The expectation of a random variable can also be defined using the Riemann–Stieltjes integral, where F is a monotonically increasing function of X. Specifically,
E(X) = ∫_{−∞}^{∞} x dF(x) = ∫ x dF    (68)

[FIGURE 6. P(1 ≤ X ≤ 2) using the Distribution Function]

2.7. Properties of expectation.

2.7.1. Constants.

E[a] = ∫_{−∞}^{∞} a f(x) dx = a ∫_{−∞}^{∞} f(x) dx = a    (69)

2.7.2. Constants multiplied by a random variable.

E[aX] = ∫_{−∞}^{∞} a x f(x) dx = a ∫_{−∞}^{∞} x f(x) dx = a E[X]    (70)
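In the continuous case, equations 66 and 67 are one-dimensional integrals, so a generic numerical version of E[g(X)] takes only a few lines. A sketch under the assumption that the density's support is an interval [a, b] (possibly infinite), using scipy's quadrature:

```python
from math import exp, inf
from scipy.integrate import quad

def expect(g, f, a, b):
    """E[g(X)] = integral of g(x) f(x) dx over the support [a, b] (equation 67)."""
    value, _ = quad(lambda x: g(x) * f(x), a, b)
    return value

# Exponential density f(x) = (1/lam) e^{-x/lam} on [0, inf) with lam = 2
lam = 2.0
f = lambda x: exp(-x / lam) / lam
print(expect(lambda x: x, f, 0, inf))        # E[X]   = lam      = 2.0
print(expect(lambda x: x ** 2, f, 0, inf))   # E[X^2] = 2 lam^2  = 8.0 (see section 2.10.3)
```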
[FIGURE 7. Normal Density Function]

2.7.3. Constants multiplied by a function of a random variable.

E[a g(X)] = ∫_{−∞}^{∞} a g(x) f(x) dx = a ∫_{−∞}^{∞} g(x) f(x) dx = a E[g(X)]    (71)

2.7.4. Sums of expected values. Let X be a continuous random variable with density function f(x), and let g_1(X), g_2(X), g_3(X), ..., g_k(X) be k functions of X. Also let c_1, c_2, c_3, ..., c_k be k constants. Then

E[c_1 g_1(X) + c_2 g_2(X) + ... + c_k g_k(X)] = E[c_1 g_1(X)] + E[c_2 g_2(X)] + ... + E[c_k g_k(X)]    (72)

2.8. Example. Consider the density function

f(x) = (p + 1) x^p for 0 ≤ x ≤ 1, and 0 otherwise    (73)

where p is greater than −1. We can compute E(X) as follows:
[FIGURE 8. Density Function (p + 1) x^p]

E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_0^1 x (p + 1) x^p dx = ∫_0^1 (p + 1) x^{p+1} dx
     = [(p + 1) x^{p+2}/(p + 2)]_0^1 = (p + 1)/(p + 2)    (74)

2.9. Example. Consider the exponential distribution, which has density function

f(x) = (1/λ) e^{−x/λ},  0 ≤ x < ∞,  λ > 0    (75)

We can compute E(X) as follows.
[FIGURE 9. Distribution Function for the Density (p + 1) x^p]

E(X) = ∫_0^{∞} (x/λ) e^{−x/λ} dx

Integrate by parts with u = x/λ, du = (1/λ) dx, dv = e^{−x/λ} dx, v = −λ e^{−x/λ}:

E(X) = [−x e^{−x/λ}]_0^{∞} + ∫_0^{∞} e^{−x/λ} dx = 0 + [−λ e^{−x/λ}]_0^{∞} = λ    (76)

2.10. Variance.

2.10.1. Definition of variance. The variance of a single random variable X with mean µ is given by

Var(X) = σ² = E[(X − E(X))²] = E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx    (77)

We can write this in a different fashion by expanding the last term in equation 77.
Var(X) = ∫_{−∞}^{∞} (x − µ)² f(x) dx
       = ∫_{−∞}^{∞} (x² − 2µx + µ²) f(x) dx
       = ∫_{−∞}^{∞} x² f(x) dx − 2µ ∫_{−∞}^{∞} x f(x) dx + µ² ∫_{−∞}^{∞} f(x) dx
       = E[X²] − 2µ E[X] + µ²
       = E[X²] − 2µ² + µ²
       = E[X²] − µ²    (78)

The variance is a measure of the dispersion of the random variable about the mean.

2.10.2. Variance example. Consider the density function

f(x) = (p + 1) x^p for 0 ≤ x ≤ 1, and 0 otherwise    (79)

where p is greater than −1. We can compute Var(X) as follows:

E(X) = ∫_0^1 x (p + 1) x^p dx = [(p + 1) x^{p+2}/(p + 2)]_0^1 = (p + 1)/(p + 2)

E(X²) = ∫_0^1 x² (p + 1) x^p dx = [(p + 1) x^{p+3}/(p + 3)]_0^1 = (p + 1)/(p + 3)

Var(X) = E(X²) − E²(X) = (p + 1)/(p + 3) − [(p + 1)/(p + 2)]² = (p + 1)/[(p + 2)²(p + 3)]    (80)

The values of the mean and variance for various values of p are given in Table 6.

TABLE 6. Mean and Variance for Distribution f(x) = (p + 1) x^p for alternative values of p

p | −0.5 | 0 | 1 | 2
E(X) | 0.33333 | 0.5 | 0.66667 | 0.75
Var(X) | 0.088889 | 0.083333 | 0.055556 | 0.0375
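The closed forms in equation 80 reproduce Table 6 directly; a short sketch:

```python
def mean_var(p):
    """Mean and variance of f(x) = (p + 1) x^p on [0, 1] (equation 80)."""
    mean = (p + 1) / (p + 2)
    var = (p + 1) / (p + 3) - mean ** 2
    return mean, var

for p in (-0.5, 0, 1, 2):
    m, v = mean_var(p)
    print(f"p = {p:>4}: E(X) = {m:.5f}, Var(X) = {v:.6f}")
# p = -0.5: E(X) = 0.33333, Var(X) = 0.088889
# p =    0: E(X) = 0.50000, Var(X) = 0.083333
# p =    1: E(X) = 0.66667, Var(X) = 0.055556
# p =    2: E(X) = 0.75000, Var(X) = 0.037500
```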
2.10.3. Variance example. Consider the exponential distribution, which has density function

f(x) = (1/λ) e^{−x/λ},  0 ≤ x < ∞,  λ > 0    (81)

We can compute E(X²) as follows, integrating by parts with u = x²/λ, du = (2x/λ) dx, dv = e^{−x/λ} dx, v = −λ e^{−x/λ}:

E(X²) = ∫_0^{∞} (x²/λ) e^{−x/λ} dx
      = [−x² e^{−x/λ}]_0^{∞} + ∫_0^{∞} 2x e^{−x/λ} dx
      = 0 + 2λ ∫_0^{∞} (x/λ) e^{−x/λ} dx
      = 2λ E(X) = (2λ)(λ) = 2λ²    (82)

We can then compute the variance as

Var(X) = E(X²) − E²(X) = 2λ² − λ² = λ²    (83)

3. MOMENTS AND MOMENT GENERATING FUNCTIONS

3.1. Moments.

3.1.1. Moments about the origin (raw moments). The rth moment about the origin of a random variable X, denoted by µ′_r, is the expected value of X^r; symbolically,

µ′_r = E(X^r) = Σ_x x^r f(x)    (84)

for r = 0, 1, 2, ... when X is discrete, and
µ′_r = E(X^r) = ∫_{−∞}^{∞} x^r f(x) dx    (85)

when X is continuous. The rth moment about the origin is only defined if E[X^r] exists. A moment about the origin is sometimes called a raw moment. Note that µ′_1 = E(X) = µ_X, the mean of the distribution of X, or simply the mean of X. The rth moment is sometimes written as a function of θ, where θ is a vector of parameters that characterize the distribution of X.

3.1.2. Central moments. The rth moment about the mean of a random variable X, denoted by µ_r, is the expected value of (X − µ_X)^r; symbolically,

µ_r = E[(X − µ_X)^r] = Σ_x (x − µ_X)^r f(x)    (86)

for r = 0, 1, 2, ... when X is discrete, and

µ_r = E[(X − µ_X)^r] = ∫_{−∞}^{∞} (x − µ_X)^r f(x) dx    (87)

when X is continuous. The rth moment about the mean is only defined if E[(X − µ_X)^r] exists. The rth moment about the mean of a random variable X is sometimes called the rth central moment of X. The rth central moment of X about a is defined as E[(X − a)^r]. If a = µ_X, we have the rth central moment of X about µ_X. Note that µ_1 = E[(X − µ_X)] = 0 and µ_2 = E[(X − µ_X)²] = Var[X]. Also note that all odd moments of X around its mean are zero for symmetrical distributions, provided such moments exist.

3.1.3. Alternative formula for the variance.

Theorem 7.

σ²_X = µ′_2 − µ²_X    (88)

Proof.

Var(X) = σ²_X = E[(X − E(X))²] = E[(X − µ_X)²]
       = E[X² − 2µ_X X + µ²_X]
       = E[X²] − 2µ_X E[X] + µ²_X
       = E[X²] − 2µ²_X + µ²_X
       = E[X²] − µ²_X = µ′_2 − µ²_X    (89)

3.2. Moment generating functions.
3.2.1. Definition of a moment generating function. The moment generating function of a random variable X is given by

M_X(t) = E e^{tX}    (90)

provided that the expectation exists for t in some neighborhood of 0. That is, there is an h > 0 such that, for all t in −h < t < h, E e^{tX} exists. We can write M_X(t) as

M_X(t) = ∫_{−∞}^{∞} e^{tx} f_X(x) dx if X is continuous, and M_X(t) = Σ_x e^{tx} P(X = x) if X is discrete.    (91)

To understand why we call this a moment generating function, consider first the discrete case. We can write e^{tx} in an alternative way using a Maclaurin series expansion. The Maclaurin series of a function f(t) is given by

f(t) = Σ_{n=0}^{∞} [f^{(n)}(0)/n!] t^n = f(0) + f^{(1)}(0) t/1! + f^{(2)}(0) t²/2! + f^{(3)}(0) t³/3! + ...    (92)

where f^{(n)} is the nth derivative of the function with respect to t and f^{(n)}(0) is the nth derivative of f with respect to t evaluated at t = 0. For the function e^{tx}, the requisite derivatives are

d e^{tx}/dt = x e^{tx},   [d e^{tx}/dt]_{t=0} = x
d² e^{tx}/dt² = x² e^{tx},   [d² e^{tx}/dt²]_{t=0} = x²
d³ e^{tx}/dt³ = x³ e^{tx},   [d³ e^{tx}/dt³]_{t=0} = x³
...
d^j e^{tx}/dt^j = x^j e^{tx},   [d^j e^{tx}/dt^j]_{t=0} = x^j    (93)

We can then write the Maclaurin series as

e^{tx} = Σ_{n=0}^{∞} [d^n e^{tx}/dt^n]_{t=0} t^n/n! = Σ_{n=0}^{∞} x^n t^n/n!
       = 1 + tx + t²x²/2! + t³x³/3! + ... + t^r x^r/r! + ...    (94)

We can then compute E(e^{tX}) = M_X(t) as
RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS 7 E [ e tx] M X (t e tx f(x (95 x [+tx+ t x + t3 x 3 ] + + trxr + f(x! 3! r! x f(x+t xf(x+ t x f(x+ t3 x 3 f(x+ + tr x r f(x+! 3! r! x x x x x + µt + µ t! + t 3 µ 3 3! + + t r µ r + r! In the expansion, the coefficient of tr t! is µ r, the rth moment about the origin of the random variable X. 3... Example derivation of a moment generating function. Find the moment-generating function of the random variable whose probability density is given by f(x { e x for x > 0 0 elsewhere (96 and use it to find an expression for µ r. By definition M X (t E ( e tx o e tx e x dx e x ( t dx t e x ( t 0 [ ] 0 t t for t < (97 As is well known, when t < the Maclaurin s series for t is given by M x (t t + t + t + t 3 + + t r + +! t! +! t! +3! t 3 3!+ + r! t r r! + or we can derive it directly using equation 9. To derive it directly utilizing the Maclaurin series we need the all derivatives of the function t evaluated at 0. The derivatives are as follows (98
f(t) = (1 − t)^{−1}
f^{(1)}(t) = (1 − t)^{−2}
f^{(2)}(t) = 2(1 − t)^{−3}
f^{(3)}(t) = 6(1 − t)^{−4}
f^{(4)}(t) = 24(1 − t)^{−5}
f^{(5)}(t) = 120(1 − t)^{−6}
...
f^{(n)}(t) = n! (1 − t)^{−(n+1)}    (99)

Evaluating them at zero gives

f(0) = 1
f^{(1)}(0) = 1 = 1!
f^{(2)}(0) = 2 = 2!
f^{(3)}(0) = 6 = 3!
f^{(4)}(0) = 24 = 4!
f^{(5)}(0) = 120 = 5!
...
f^{(n)}(0) = n!    (100)

Now substituting in the appropriate values for the derivatives of the function f(t) = 1/(1 − t), we obtain

1/(1 − t) = Σ_{n=0}^{∞} [f^{(n)}(0)/n!] t^n = 1 + (1!/1!) t + (2!/2!) t² + (3!/3!) t³ + ... = 1 + t + t² + t³ + ...    (101)

A further issue is to determine the radius of convergence for this particular function. Consider an arbitrary series where the nth term is denoted by a_n. The ratio test says that
if lim_{n→∞} |a_{n+1}/a_n| = L < 1, then the series is absolutely convergent;    (102a)
if lim_{n→∞} |a_{n+1}/a_n| = L > 1, or lim_{n→∞} |a_{n+1}/a_n| = ∞, then the series is divergent.    (102b)

Now consider the nth term and the (n+1)th term of the Maclaurin series expansion of 1/(1 − t):

lim_{n→∞} |t^{n+1}/t^n| = |t| = L    (103)

The only way for this to be less than one in absolute value is for the absolute value of t to be less than one, i.e., |t| < 1. Now writing out the Maclaurin series as in equation 98 and remembering that in the expansion the coefficient of t^r/r! is µ′_r, the rth moment about the origin of the random variable X,

M_X(t) = 1 + t + t² + t³ + ... + t^r + ... = 1 + 1! t/1! + 2! t²/2! + 3! t³/3! + ... + r! t^r/r! + ...    (104)

it is clear that µ′_r = r! for r = 0, 1, 2, .... For this density function E[X] = 1, because the coefficient of t/1! is 1. We can verify this by finding E[X] directly by integration:

E(X) = ∫_0^{∞} x e^{−x} dx    (105)

To do so we need to integrate by parts with u = x and dv = e^{−x} dx. Then du = dx and v = −e^{−x}. We then have

E(X) = [−x e^{−x}]_0^{∞} + ∫_0^{∞} e^{−x} dx = [0 − 0] − [e^{−x}]_0^{∞} = −[0 − 1] = 1    (106)

3.2.3. Moment property of the moment generating functions for discrete random variables.

Theorem 8. If M_X(t) exists, then for any positive integer k,

[d^k M_X(t)/dt^k]_{t=0} = M_X^{(k)}(0) = µ′_k.    (107)

In other words, if you find the kth derivative of M_X(t) with respect to t and then set t = 0, the result will be µ′_k.

Proof. d^k M_X(t)/dt^k, or M_X^{(k)}(t), is the kth derivative of M_X(t) with respect to t. From equation 95 we know that
M_X(t) = E(e^{tX}) = 1 + t µ′_1 + (t²/2!) µ′_2 + (t³/3!) µ′_3 + ...    (108)

It then follows that

M_X^{(1)}(t) = µ′_1 + (2t/2!) µ′_2 + (3t²/3!) µ′_3 + ...    (109a)
M_X^{(2)}(t) = µ′_2 + (2t/2!) µ′_3 + (3t²/3!) µ′_4 + ...    (109b)

where we note that n/n! = 1/(n − 1)!. In general we find that

M_X^{(k)}(t) = µ′_k + (2t/2!) µ′_{k+1} + (3t²/3!) µ′_{k+2} + ....    (110)

Setting t = 0 in each of the above derivatives, we obtain

M_X^{(1)}(0) = µ′_1    (111a)
M_X^{(2)}(0) = µ′_2    (111b)

and, in general,

M_X^{(k)}(0) = µ′_k    (112)

These operations involve interchanging derivatives and infinite sums, which can be justified if M_X(t) exists.

3.2.4. Moment property of the moment generating functions for continuous random variables.

Theorem 9. If X has mgf M_X(t), then

E X^n = M_X^{(n)}(0),    (113)

where we define

M_X^{(n)}(0) = [d^n M_X(t)/dt^n]_{t=0}.    (114)

The nth moment of the distribution is equal to the nth derivative of M_X(t) evaluated at t = 0.

Proof. We will assume that we can differentiate under the integral sign, and differentiate equation 91:

d M_X(t)/dt = d/dt ∫_{−∞}^{∞} e^{tx} f_X(x) dx = ∫_{−∞}^{∞} [d e^{tx}/dt] f_X(x) dx
            = ∫_{−∞}^{∞} x e^{tx} f_X(x) dx = E(X e^{tX})    (115)
Now evaluate equation 115 at t = 0:

[d M_X(t)/dt]_{t=0} = E(X e^{tX})|_{t=0} = E X    (116)

We can proceed in a similar fashion for other derivatives. We illustrate for n = 2:

d² M_X(t)/dt² = d²/dt² ∫_{−∞}^{∞} e^{tx} f_X(x) dx = ∫_{−∞}^{∞} [d(x e^{tx})/dt] f_X(x) dx
              = ∫_{−∞}^{∞} x² e^{tx} f_X(x) dx = E(X² e^{tX})    (117)

Now evaluate equation 117 at t = 0:

[d² M_X(t)/dt²]_{t=0} = E(X² e^{tX})|_{t=0} = E X²    (118)

3.3. Some properties of moment generating functions. If a and b are constants, then

M_{X+a}(t) = E(e^{(X+a)t}) = e^{at} M_X(t),    (119a)
M_{bX}(t) = E(e^{bXt}) = M_X(bt),    (119b)
M_{(X+a)/b}(t) = E(e^{((X+a)/b)t}) = e^{at/b} M_X(t/b).    (119c)

3.4. Examples of moment generating functions.

3.4.1. Example 1. Consider a random variable with two possible values, 1 and 0, and corresponding probabilities f(1) = p, f(0) = 1 − p. For this distribution

M_X(t) = E(e^{tX}) = e^{t·1} f(1) + e^{t·0} f(0) = e^t p + e^0 (1 − p) = (1 − p) + e^t p = 1 + p(e^t − 1)    (120)

The derivatives are
M_X^{(1)}(t) = p e^t
M_X^{(2)}(t) = p e^t
M_X^{(3)}(t) = p e^t
...
M_X^{(k)}(t) = p e^t    (121)

Thus

E[X^k] = M_X^{(k)}(0) = p e^0 = p    (122)

We can also find this by expanding M_X(t) using the Maclaurin series. The moment generating function for this problem is

M_X(t) = E(e^{tX}) = 1 + p(e^t − 1)    (123)

To obtain the expansion we first need the series expansion of e^t. All derivatives of e^t are equal to e^t. The expansion is then given by

e^t = Σ_{n=0}^{∞} [d^n e^t/dt^n]_{t=0} t^n/n! = 1 + t + t²/2! + t³/3! + ... + t^r/r! + ...    (124)

Substituting equation 124 into equation 123 we obtain

M_X(t) = 1 + p(e^t − 1)
       = 1 − p + p[1 + t + t²/2! + t³/3! + ... + t^r/r! + ...]
       = 1 + p t + p t²/2! + p t³/3! + ... + p t^r/r! + ...    (125)

We can then see that all moments are equal to p. This is also clear by direct computation:
E(X) = (1)(p) + (0)(1 − p) = p
E(X²) = (1²)(p) + (0²)(1 − p) = p
E(X³) = (1³)(p) + (0³)(1 − p) = p
...
E(X^k) = (1^k)(p) + (0^k)(1 − p) = p    (126)
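The differentiate-and-set-t-to-zero recipe of Theorems 8 and 9 can be mechanized with a computer algebra system. A sketch using sympy, applied to the Bernoulli mgf above and to the exponential mgf derived next:

```python
import sympy as sp

t, p = sp.symbols("t p")
lam = sp.symbols("lam", positive=True)

# Bernoulli mgf (equation 120): every raw moment equals p
M_bern = 1 + p * (sp.exp(t) - 1)
print([sp.diff(M_bern, t, k).subs(t, 0) for k in range(1, 4)])   # [p, p, p]

# Exponential mgf (equation 128), valid for |t| < 1/lam
M_exp = 1 / (1 - lam * t)
print([sp.diff(M_exp, t, k).subs(t, 0) for k in (1, 2)])         # [lam, 2*lam**2]
```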
3.4.2. Example 2. Consider the exponential distribution, which has a density function given by

f(x) = (1/λ) e^{−x/λ},  0 ≤ x < ∞,  λ > 0    (127)

For λt < 1, we have

M_X(t) = (1/λ) ∫_0^{∞} e^{tx} e^{−x/λ} dx
       = (1/λ) ∫_0^{∞} e^{−((1 − λt)/λ) x} dx
       = (1/λ) [−(λ/(1 − λt)) e^{−((1 − λt)/λ) x}]_0^{∞}
       = (1/λ) (λ/(1 − λt)) = 1/(1 − λt)    (128)

We can then find the moments by differentiation. The first moment is

E(X) = [d/dt (1 − λt)^{−1}]_{t=0} = [λ(1 − λt)^{−2}]_{t=0} = λ    (129)

The second moment is

E(X²) = [d²/dt² (1 − λt)^{−1}]_{t=0} = [d/dt λ(1 − λt)^{−2}]_{t=0} = [2λ²(1 − λt)^{−3}]_{t=0} = 2λ²    (130)

3.4.3. Example 3. Consider the normal distribution, which has a density function given by

f(x; µ, σ²) = (1/√(2πσ²)) e^{−(1/2)((x − µ)/σ)²}    (131)

Let g(X) = X − µ, where X is a normally distributed random variable with mean µ and variance σ². Find the moment-generating function for X − µ. This is the moment generating function for central moments of the normal distribution.

M_{X−µ}(t) = E[e^{t(X−µ)}] = ∫_{−∞}^{∞} (1/√(2πσ²)) e^{t(x−µ)} e^{−(x−µ)²/(2σ²)} dx    (132)

To integrate, let u = x − µ. Then du = dx and

M_{X−µ}(t) = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{tu} e^{−u²/(2σ²)} du
           = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{tu − u²/(2σ²)} du
           = (1/(σ√(2π))) ∫_{−∞}^{∞} exp[−(u² − 2σ²tu)/(2σ²)] du    (133)

To simplify the integral, complete the square in the exponent of e. That is, write the second term in brackets as

u² − 2σ²tu = u² − 2σ²tu + σ⁴t² − σ⁴t²    (134)

This then will give

exp[−(u² − 2σ²tu)/(2σ²)] = exp[−(u² − 2σ²tu + σ⁴t² − σ⁴t²)/(2σ²)]
                         = exp[−(u² − 2σ²tu + σ⁴t²)/(2σ²)] exp[σ⁴t²/(2σ²)]
                         = exp[−(u² − 2σ²tu + σ⁴t²)/(2σ²)] exp[σ²t²/2]    (135)

Now substitute equation 135 into equation 133 and simplify, beginning by factoring out the term exp[σ²t²/2].
M_{X−µ}(t) = (1/(σ√(2π))) ∫_{−∞}^{∞} exp[σ²t²/2] exp[−(u² − 2σ²tu + σ⁴t²)/(2σ²)] du
           = exp[σ²t²/2] (1/(σ√(2π))) ∫_{−∞}^{∞} exp[−(u² − 2σ²tu + σ⁴t²)/(2σ²)] du    (136)

Now move 1/(σ√(2π)) inside the integral and write u² − 2σ²tu + σ⁴t² as a square, (u − σ²t)²:

M_{X−µ}(t) = exp[σ²t²/2] ∫_{−∞}^{∞} (1/(σ√(2π))) exp[−(u − σ²t)²/(2σ²)] du    (137)

The function inside the integral is a normal density function with mean and variance equal to σ²t and σ², respectively. Hence the integral is equal to one. Then

M_{X−µ}(t) = e^{t²σ²/2}.    (138)

The central moments of X can be obtained from M_{X−µ}(t) by differentiating. For example, the first central moment is

E(X − µ) = [d/dt e^{t²σ²/2}]_{t=0} = [tσ² e^{t²σ²/2}]_{t=0} = 0    (139)

The second central moment is

E(X − µ)² = [d²/dt² e^{t²σ²/2}]_{t=0} = [d/dt tσ² e^{t²σ²/2}]_{t=0} = [(t²σ⁴ + σ²) e^{t²σ²/2}]_{t=0} = σ²    (140)

The third central moment is
E(X − µ)³ = [d³/dt³ e^{t²σ²/2}]_{t=0} = [d/dt (t²σ⁴ + σ²) e^{t²σ²/2}]_{t=0}
          = [2tσ⁴ e^{t²σ²/2} + (t²σ⁴ + σ²)(tσ²) e^{t²σ²/2}]_{t=0}
          = [(t³σ⁶ + 3tσ⁴) e^{t²σ²/2}]_{t=0} = 0    (141)

The fourth central moment is

E(X − µ)⁴ = [d⁴/dt⁴ e^{t²σ²/2}]_{t=0} = [d/dt (t³σ⁶ + 3tσ⁴) e^{t²σ²/2}]_{t=0}
          = [(3t²σ⁶ + 3σ⁴) e^{t²σ²/2} + (t³σ⁶ + 3tσ⁴)(tσ²) e^{t²σ²/2}]_{t=0}
          = [(t⁴σ⁸ + 6t²σ⁶ + 3σ⁴) e^{t²σ²/2}]_{t=0} = 3σ⁴    (142)

3.4.4. Example 4. Now consider the raw moments of the normal distribution. The density function is given by

f(x; µ, σ²) = (1/√(2πσ²)) e^{−(1/2)((x − µ)/σ)²}    (143)

To find the moment-generating function for X we integrate the following function:

M_X(t) = E[e^{tX}] = ∫_{−∞}^{∞} (1/√(2πσ²)) e^{tx} e^{−(x−µ)²/(2σ²)} dx    (144)

First rewrite the integral as follows by putting the exponents over a common denominator:

M_X(t) = ∫_{−∞}^{∞} (1/√(2πσ²)) exp[−((x − µ)² − 2σ²tx)/(2σ²)] dx    (145)

Now square the term in the exponent and simplify:
M_X(t) = ∫_{−∞}^{∞} (1/√(2πσ²)) exp[−(x² − 2µx + µ² − 2σ²tx)/(2σ²)] dx
       = ∫_{−∞}^{∞} (1/√(2πσ²)) exp[−(x² − 2x(µ + σ²t) + µ²)/(2σ²)] dx    (146)

Now consider the exponent of e and complete the square for the portion in brackets as follows:

x² − 2x(µ + σ²t) + µ² = x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t² − 2µσ²t − σ⁴t²
                      = (x − (µ + σ²t))² − 2µσ²t − σ⁴t²    (147)

To simplify the integral, multiply and divide by one in the form

e^{(2µσ²t + σ⁴t²)/(2σ²)} e^{−(2µσ²t + σ⁴t²)/(2σ²)}    (148)

in the following manner:

M_X(t) = e^{(2µσ²t + σ⁴t²)/(2σ²)} ∫_{−∞}^{∞} (1/√(2πσ²)) exp[−(x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t²)/(2σ²)] dx    (149)

Now find the square root of

x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t²    (150)

Given we would like to have (x − something)², try squaring x − (µ + σ²t) as follows:

[x − (µ + σ²t)]² = x² − 2x(µ + σ²t) + (µ + σ²t)² = x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t²    (151)

So x − (µ + σ²t) is the square root of x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t². Making the substitution in equation 149 we obtain

M_X(t) = e^{(2µσ²t + σ⁴t²)/(2σ²)} ∫_{−∞}^{∞} (1/√(2πσ²)) exp[−(x − (µ + σ²t))²/(2σ²)] dx    (152)

The expression to the right of e^{(2µσ²t + σ⁴t²)/(2σ²)} is the integral of a normal density function with mean and variance equal to µ + σ²t and σ², respectively. Hence the integral is equal to one. Then
M_X(t) = e^{(2µσ²t + σ⁴t²)/(2σ²)} = e^{µt + t²σ²/2}.    (153)

The moments of X can be obtained from M_X(t) by differentiating with respect to t. For example, the first raw moment is

E(X) = [d/dt e^{µt + t²σ²/2}]_{t=0} = [(µ + tσ²) e^{µt + t²σ²/2}]_{t=0} = µ    (154)

The second raw moment is

E(X²) = [d²/dt² e^{µt + t²σ²/2}]_{t=0} = [d/dt (µ + tσ²) e^{µt + t²σ²/2}]_{t=0}
      = [(µ + tσ²)² e^{µt + t²σ²/2} + σ² e^{µt + t²σ²/2}]_{t=0} = µ² + σ²    (155)

The third raw moment is

E(X³) = [d³/dt³ e^{µt + t²σ²/2}]_{t=0} = [d/dt ((µ + tσ²)² + σ²) e^{µt + t²σ²/2}]_{t=0}
      = [((µ + tσ²)³ + 3σ²(µ + tσ²)) e^{µt + t²σ²/2}]_{t=0} = µ³ + 3σ²µ    (156)

The fourth raw moment is

E(X⁴) = [d⁴/dt⁴ e^{µt + t²σ²/2}]_{t=0} = [d/dt ((µ + tσ²)³ + 3σ²(µ + tσ²)) e^{µt + t²σ²/2}]_{t=0}
      = [((µ + tσ²)⁴ + 6σ²(µ + tσ²)² + 3σ⁴) e^{µt + t²σ²/2}]_{t=0} = µ⁴ + 6µ²σ² + 3σ⁴    (157)
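All four derivatives above can be checked symbolically. A sketch with sympy, differentiating the raw-moment mgf of equation 153 and the central-moment mgf of equation 138:

```python
import sympy as sp

t, mu, sigma = sp.symbols("t mu sigma")

# Raw moments from M_X(t) = exp(mu t + t^2 sigma^2 / 2) (equation 153)
M = sp.exp(mu * t + t**2 * sigma**2 / 2)
for k in range(1, 5):
    print(k, sp.expand(sp.diff(M, t, k).subs(t, 0)))
# 1  mu
# 2  mu**2 + sigma**2
# 3  mu**3 + 3*mu*sigma**2
# 4  mu**4 + 6*mu**2*sigma**2 + 3*sigma**4

# Central moments from M_{X-mu}(t) = exp(t^2 sigma^2 / 2) (equation 138)
Mc = sp.exp(t**2 * sigma**2 / 2)
print([sp.diff(Mc, t, k).subs(t, 0) for k in range(1, 5)])   # [0, sigma**2, 0, 3*sigma**4]
```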
4. CHEBYSHEV'S INEQUALITY

Chebyshev's inequality applies equally well to discrete and continuous random variables. We state it here as a theorem.

4.1. A Theorem of Chebyshev.

Theorem 10. Let X be a random variable with mean µ and finite variance σ². Then, for any constant k > 0,

P(|X − µ| < kσ) ≥ 1 − 1/k²  or  P(|X − µ| ≥ kσ) ≤ 1/k².    (158)

The result applies for any probability distribution, whether the probability histogram is bell-shaped or not. The results of the theorem are very conservative in the sense that the actual probability that X is in the interval µ ± kσ usually exceeds the lower bound for the probability, 1 − 1/k², by a considerable amount.

Chebyshev's theorem enables us to find bounds for probabilities that ordinarily would have to be obtained by tedious mathematical manipulations (integration or summation). We often can obtain estimates of the means and variances of random variables without specifying the distribution of the variable. In situations like these, Chebyshev's inequality provides meaningful bounds for probabilities of interest.

Proof. Let f(x) denote the density function of X. Then

V(X) = σ² = ∫_{−∞}^{µ−kσ} (x − µ)² f(x) dx + ∫_{µ−kσ}^{µ+kσ} (x − µ)² f(x) dx + ∫_{µ+kσ}^{∞} (x − µ)² f(x) dx.    (159)

The second integral is always greater than or equal to zero.

Now consider the relationship between (x − µ)² and k²σ². For the first integral,

x ≤ µ − kσ  ⟹  µ − x ≥ kσ  ⟹  (µ − x)² ≥ k²σ²  ⟹  (x − µ)² ≥ k²σ²    (160)

And similarly,
for the third integral,

x ≥ µ + kσ  ⟹  x − µ ≥ kσ  ⟹  (x − µ)² ≥ k²σ²    (161)

Now replace (x − µ)² with k²σ² in the first and third integrals of equation 159 to obtain the inequality

V(X) = σ² ≥ ∫_{−∞}^{µ−kσ} k²σ² f(x) dx + ∫_{µ+kσ}^{∞} k²σ² f(x) dx.    (162)

Then

σ² ≥ k²σ² [ ∫_{−∞}^{µ−kσ} f(x) dx + ∫_{µ+kσ}^{∞} f(x) dx ]    (163)

We can write this in the following useful manner:

σ² ≥ k²σ² {P(X ≤ µ − kσ) + P(X ≥ µ + kσ)} = k²σ² P(|X − µ| ≥ kσ).    (164)

Dividing by k²σ², we obtain

P(|X − µ| ≥ kσ) ≤ 1/k²,    (165)

or, equivalently,

P(|X − µ| < kσ) ≥ 1 − 1/k².    (166)

4.2. Example. The number of accidents that occur during a given month at a particular intersection, X, tabulated by a group of Boy Scouts over a long time period, is found to have a mean of 12 and a standard deviation of 2. The underlying distribution is not known. What is the probability that, next month, X will be greater than eight but less than sixteen? We thus want P[8 < X < 16]. We can write equation 158 in the following useful manner:

P[(µ − kσ) < X < (µ + kσ)] ≥ 1 − 1/k²    (167)

For this problem µ = 12 and σ = 2, so µ − kσ = 12 − 2k and µ + kσ = 12 + 2k. We can solve for the k that gives us the desired bounds on the probability.
We then obtain

µ − kσ = 12 − 2k = 8  ⟹  2k = 4  ⟹  k = 2, and µ + kσ = 12 + 2k = 16  ⟹  2k = 4  ⟹  k = 2,    (168)

and

P[8 < X < 16] ≥ 1 − 1/2² = 1 − 1/4 = 3/4.    (169)

Therefore the probability that X is between 8 and 16 is at least 3/4.

4.3. Alternative statement of Chebyshev's inequality.

Theorem 11. Let X be a random variable and let g(x) be a non-negative function. Then for r > 0,

P[g(X) ≥ r] ≤ E g(X)/r    (170)

Proof.

E g(X) = ∫_{−∞}^{∞} g(x) f_X(x) dx
       ≥ ∫_{[x : g(x) ≥ r]} g(x) f_X(x) dx   (g is nonnegative)
       ≥ r ∫_{[x : g(x) ≥ r]} f_X(x) dx   (g(x) ≥ r on the region of integration)
       = r P[g(X) ≥ r]    (171)

⟹ P[g(X) ≥ r] ≤ E g(X)/r

4.4. Another version of Chebyshev's inequality as a special case of the general version.

Corollary 1. Let X be a random variable with mean µ and variance σ². Then for any k > 0 or any ε > 0,

P[|X − µ| ≥ kσ] ≤ 1/k²    (172a)
P[|X − µ| ≥ ε] ≤ σ²/ε²    (172b)
Proof. Let g(x) = (x − µ)²/σ², where µ = E(X) and σ² = Var(X), and let r = k². Then

P[(X − µ)²/σ² ≥ k²] ≤ E[(X − µ)²/σ²]/k² = (1/k²) E(X − µ)²/σ² = 1/k²    (173)

because E(X − µ)² = σ². We can then rewrite equation 173 as follows:

P[(X − µ)²/σ² ≥ k²] ≤ 1/k²  ⟹  P[(X − µ)² ≥ k²σ²] ≤ 1/k²  ⟹  P[|X − µ| ≥ kσ] ≤ 1/k²    (174)
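Because Chebyshev's bound is distribution-free, it can be checked against any distribution with known mean and variance. A simulation sketch using the exponential distribution with rate 1 (so µ = 1 and σ = 1):

```python
import random

random.seed(1)
mu = sigma = 1.0                     # exponential with rate 1: mean = sd = 1
draws = [random.expovariate(1.0) for _ in range(100_000)]

for k in (1.5, 2.0, 3.0):
    tail = sum(abs(x - mu) >= k * sigma for x in draws) / len(draws)
    print(f"k = {k}: P(|X - mu| >= k sigma) ≈ {tail:.4f} <= 1/k^2 = {1 / k**2:.4f}")
```

As the theorem promises, each empirical tail probability falls well below the bound 1/k², illustrating how conservative the inequality typically is.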