RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS


1. DISCRETE RANDOM VARIABLES

1.1. Definition of a Discrete Random Variable. A random variable X is said to be discrete if it can assume only a finite or countably infinite number of distinct values. A discrete random variable can be defined on either a countable or an uncountable sample space.

1.2. Probability for a discrete random variable. The probability that X takes on the value x, P(X = x), is defined as the sum of the probabilities of all sample points in Ω that are assigned the value x. We may denote P(X = x) by p(x). The expression p(x) is a function that assigns probabilities to each possible value x; thus it is often called the probability function for X.

1.3. Probability distribution for a discrete random variable. The probability distribution for a discrete random variable X can be represented by a formula, a table, or a graph that provides p(x) = P(X = x) for all x. The probability distribution for a discrete random variable assigns nonzero probabilities to only a countable number of distinct x values. Any value x not explicitly assigned a positive probability is understood to be such that P(X = x) = 0. The function f(x) = p(x) = P(X = x) for each x within the range of X is called the probability distribution of X. It is often called the probability mass function for the discrete random variable X.

1.4. Properties of the probability distribution for a discrete random variable. A function can serve as the probability distribution for a discrete random variable X if and only if its values, f(x), satisfy the conditions:

a: f(x) ≥ 0 for each value within its domain
b: Σₓ f(x) = 1, where the summation extends over all the values within its domain

1.5. Examples of probability mass functions.

1.5.1. Example 1. Find a formula for the probability distribution of the total number of heads obtained in four tosses of a balanced coin. The sample space, probabilities and the value of the random variable are given in table 1.
From the table we can determine the probabilities as

P(X = 0) = 1/16,  P(X = 1) = 4/16,  P(X = 2) = 6/16,  P(X = 3) = 4/16,  P(X = 4) = 1/16    (1)

Notice that the denominators of the five fractions are the same and the numerators of the five fractions are 1, 4, 6, 4, 1. The numbers in the numerators are a set of binomial coefficients C(n, k) = n!/(k!(n − k)!):

C(4, 0) = 1,  C(4, 1) = 4,  C(4, 2) = 6,  C(4, 3) = 4,  C(4, 4) = 1

We can then write the probability mass function as

Date: August 10, 2004.

TABLE 1. Probability as a Function of the Number of Heads from Tossing a Coin Four Times

Element of sample space   Probability   Value of random variable X (x)
HHHH                      1/16          4
HHHT                      1/16          3
HHTH                      1/16          3
HTHH                      1/16          3
THHH                      1/16          3
HHTT                      1/16          2
HTHT                      1/16          2
HTTH                      1/16          2
THHT                      1/16          2
THTH                      1/16          2
TTHH                      1/16          2
HTTT                      1/16          1
THTT                      1/16          1
TTHT                      1/16          1
TTTH                      1/16          1
TTTT                      1/16          0

f(x) = C(4, x)/16  for x = 0, 1, 2, 3, 4    (2)

Note that all the probabilities are positive and that they sum to one.

1.5.2. Example 2. Roll a red die and a green die. Let the random variable be the larger of the two numbers if they are different and the common value if they are the same. There are 36 points in the sample space. In table 2 the outcomes are listed along with the value of the random variable associated with each outcome. The probability that X = 1 is P(X = 1) = P[(1, 1)] = 1/36. The probability that X = 2 is P(X = 2) = P[(1, 2), (2, 1), (2, 2)] = 3/36. Continuing we obtain

P(X = 1) = 1/36,  P(X = 2) = 3/36,  P(X = 3) = 5/36
P(X = 4) = 7/36,  P(X = 5) = 9/36,  P(X = 6) = 11/36

We can then write the probability mass function as

f(x) = P(X = x) = (2x − 1)/36  for x = 1, 2, 3, 4, 5, 6

Note that all the probabilities are positive and that they sum to one.

1.6. Cumulative Distribution Functions.

TABLE 2. Possible Outcomes of Rolling a Red Die and a Green Die
(rows give the number on the red die, columns the number on the green die; each entry is the value of X, the larger of the two numbers)

Red\Green   1   2   3   4   5   6
1           1   2   3   4   5   6
2           2   2   3   4   5   6
3           3   3   3   4   5   6
4           4   4   4   4   5   6
5           5   5   5   5   5   6
6           6   6   6   6   6   6

1.6.1. Definition of a Cumulative Distribution Function. If X is a discrete random variable, the function given by

F(x) = P(X ≤ x) = Σ_{t ≤ x} f(t)  for −∞ < x < ∞    (3)

where f(t) is the value of the probability distribution of X at t, is called the cumulative distribution function of X. The function F(x) is also called the distribution function of X.

1.6.2. Properties of a Cumulative Distribution Function. The values F(x) of the distribution function of a discrete random variable X satisfy the conditions

1: F(−∞) = 0 and F(∞) = 1;
2: if a < b, then F(a) ≤ F(b) for any real numbers a and b.

1.6.3. First example of a cumulative distribution function. Consider tossing a coin four times. The possible outcomes are contained in table 1 and the values of f in equation 2. From this we can determine the cumulative distribution function as follows.

F(0) = f(0) = 1/16
F(1) = f(0) + f(1) = 1/16 + 4/16 = 5/16
F(2) = f(0) + f(1) + f(2) = 1/16 + 4/16 + 6/16 = 11/16
F(3) = f(0) + f(1) + f(2) + f(3) = 1/16 + 4/16 + 6/16 + 4/16 = 15/16
F(4) = f(0) + f(1) + f(2) + f(3) + f(4) = 1

We can write this in an alternative fashion as

F(x) = 0      for x < 0
     = 1/16   for 0 ≤ x < 1
     = 5/16   for 1 ≤ x < 2
     = 11/16  for 2 ≤ x < 3
     = 15/16  for 3 ≤ x < 4
     = 1      for x ≥ 4

1.6.4. Second example of a cumulative distribution function. Consider a group of N individuals, M of whom are female. Then N − M are male. Now pick n individuals from this population without replacement. Let x be the number of females chosen. There are C(M, x) ways of choosing x females from the M in the population and C(N − M, n − x) ways of choosing n − x of the N − M males. Therefore, there are C(M, x) C(N − M, n − x) ways of choosing x females and n − x males. Because there are C(N, n) ways of choosing n of the N elements in the set, and because we will assume that they all are equally likely, the probability of x females in a sample of size n is given by

f(x) = P(X = x) = C(M, x) C(N − M, n − x) / C(N, n)  for x = 0, 1, 2, 3, ..., n    (4)

and x ≤ M, and n − x ≤ N − M. For this discrete distribution we compute the cumulative distribution by adding up the appropriate terms of the probability mass function.

F(0) = f(0)
F(1) = f(0) + f(1)
F(2) = f(0) + f(1) + f(2)
F(3) = f(0) + f(1) + f(2) + f(3)
...
F(n) = f(0) + f(1) + f(2) + f(3) + ... + f(n)    (5)

Consider a population with four individuals, three of whom are female, denoted respectively by A, B, C, D where A is a male and the others are females. Then consider drawing two from this population. Based on equation 4 there should be C(4, 2) = 6 elements in the sample space. The sample space is given by

TABLE 3. Drawing Two Individuals from a Population of Four where Order Does Not Matter (no replacement)

Element of sample space   Probability   Value of random variable X
AB                        1/6           1
AC                        1/6           1
AD                        1/6           1
BC                        1/6           2
BD                        1/6           2
CD                        1/6           2
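Equation 4 can be checked against table 3 numerically; a minimal Python sketch (the function name `hyper_pmf` is ours, not the text's):

```python
from fractions import Fraction
from math import comb

def hyper_pmf(x, N, M, n):
    """P(X = x) for x females in n draws without replacement (equation 4)."""
    if x < 0 or x > M or n - x < 0 or n - x > N - M:
        return Fraction(0)  # formula undefined outside these bounds; prob is 0
    return Fraction(comb(M, x) * comb(N - M, n - x), comb(N, n))

# Population of four (one male A, three females B, C, D), draw two: table 3.
pmf = {x: hyper_pmf(x, N=4, M=3, n=2) for x in range(3)}
assert pmf[0] == 0
assert pmf[1] == Fraction(1, 2) and pmf[2] == Fraction(1, 2)

# Cumulative distribution by summing pmf terms, as in equation 5.
F = [sum(list(pmf.values())[: k + 1]) for k in range(3)]
assert F == [0, Fraction(1, 2), 1]
```

Exact rational arithmetic (`Fraction`) keeps the comparison with the hand computation free of rounding.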

We can see that the probability of 2 females is 1/2. We can also obtain this using the formula as follows.

f(2) = P(X = 2) = C(3, 2) C(1, 0) / C(4, 2) = (3)(1)/6 = 1/2    (6)

Similarly

f(1) = P(X = 1) = C(3, 1) C(1, 1) / C(4, 2) = (3)(1)/6 = 1/2    (7)

We cannot use the formula to compute f(0), because that would require choosing n − x = 2 males from the N − M = 1 male in the population. f(0) is then equal to 0. We can then compute the cumulative distribution function as

F(0) = f(0) = 0
F(1) = f(0) + f(1) = 1/2
F(2) = f(0) + f(1) + f(2) = 1    (8)

1.7. Expected value.

1.7.1. Definition of expected value. Let X be a discrete random variable with probability function p(x). Then the expected value of X, E(X), is defined to be

E(X) = Σₓ x p(x)    (9)

if it exists. The expected value exists if

Σₓ |x| p(x) < ∞    (10)

The expected value is a kind of weighted average. It is also sometimes referred to as the population mean of the random variable and denoted µ_X.

1.7.2. First example computing an expected value. Toss a die that has six sides. Observe the number that comes up. The probability mass or frequency function is given by

p(x) = P(X = x) = 1/6  for x = 1, 2, 3, 4, 5, 6
                = 0    otherwise    (11)

We compute the expected value as

E(X) = Σ_{x∈X} x p_X(x) = Σ_{i=1}^{6} i (1/6) = 21/6 = 7/2    (12)
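The pmf formulas of the two earlier examples, and the expected value just computed, can all be verified by brute-force enumeration; a short Python sketch (variable names are ours):

```python
from fractions import Fraction
from itertools import product
from math import comb

# Four coin tosses: count heads and compare with f(x) = C(4, x)/16 (equation 2).
heads = [seq.count("H") for seq in product("HT", repeat=4)]
coin_pmf = {x: Fraction(heads.count(x), 16) for x in range(5)}
assert all(coin_pmf[x] == Fraction(comb(4, x), 16) for x in range(5))

# Two dice, X = the larger number: compare with f(x) = (2x - 1)/36.
larger = [max(r, g) for r, g in product(range(1, 7), repeat=2)]
dice_pmf = {x: Fraction(larger.count(x), 36) for x in range(1, 7)}
assert all(dice_pmf[x] == Fraction(2 * x - 1, 36) for x in range(1, 7))

# Both pmfs sum to one; E(X) for a single fair die is 21/6 = 7/2 (equation 12).
assert sum(coin_pmf.values()) == 1 and sum(dice_pmf.values()) == 1
die_mean = sum(Fraction(x, 6) for x in range(1, 7))
assert die_mean == Fraction(7, 2)
```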

1.7.3. Second example computing an expected value. Consider a group of twelve television sets, two of which have white cords and ten of which have black cords. Suppose three of them are chosen at random and shipped to a care center. What are the probabilities that zero, one, or two of the sets with white cords are shipped? What is the expected number with white cords that will be shipped? It is clear that x of the two sets with white cords and 3 − x of the ten sets with black cords can be chosen in C(2, x) C(10, 3 − x) ways. The three sets can be chosen in C(12, 3) ways. So we have a probability mass function as follows.

f(x) = P(X = x) = C(2, x) C(10, 3 − x) / C(12, 3)  for x = 0, 1, 2    (13)

For example,

f(0) = P(X = 0) = C(2, 0) C(10, 3) / C(12, 3) = (1)(120)/220 = 6/11    (14)

We collect this information as in table 4.

TABLE 4. Probabilities for Television Problem

x      0      1      2
f(x)   6/11   9/22   1/22
F(x)   6/11   21/22  1

We compute the expected value as

E(X) = Σ_{x∈X} x p_X(x) = (0)(6/11) + (1)(9/22) + (2)(1/22) = 11/22 = 1/2    (15)

1.8. Expected value of a function of a random variable.

Theorem 1. Let X be a discrete random variable with probability mass function p(x) and g(X) be a real-valued function of X. Then the expected value of g(X) is given by

E[g(X)] = Σₓ g(x) p(x).    (16)

Proof for case of finite values of X. Consider the case where the random variable X takes on a finite number of values x₁, x₂, x₃, ..., xₙ. The function g(x) may not be one-to-one (different values of xᵢ may yield the same value of g(xᵢ)). Suppose that g(X) takes on m different values (m ≤ n). It follows that g(X) is also a random variable with possible values g₁, g₂, g₃, ..., g_m and probability distribution

P[g(X) = gᵢ] = Σ_{xⱼ such that g(xⱼ) = gᵢ} p(xⱼ) = p*(gᵢ)    (17)

for all i = 1, 2, ..., m. Here p*(gᵢ) is the probability that the experiment results in a value gᵢ for the function g of the initial random variable. Using the definition of expected value in equation 9 we obtain

E[g(X)] = Σ_{i=1}^{m} gᵢ p*(gᵢ).    (18)

Now substitute in to obtain

E[g(X)] = Σ_{i=1}^{m} gᵢ p*(gᵢ)
        = Σ_{i=1}^{m} gᵢ Σ_{xⱼ: g(xⱼ)=gᵢ} p(xⱼ)
        = Σ_{i=1}^{m} Σ_{xⱼ: g(xⱼ)=gᵢ} g(xⱼ) p(xⱼ)
        = Σ_{j=1}^{n} g(xⱼ) p(xⱼ)    (19)

1.9. Properties of mathematical expectation.

1.9.1. Constants.

Theorem 2. Let X be a discrete random variable with probability function p(x) and c be a constant. Then E(c) = c.

Proof. Consider the function g(x) = c. Then by theorem 1

E[c] = Σₓ c p(x) = c Σₓ p(x)    (20)

But by property 1.4b, we have

Σₓ p(x) = 1    (21)

and hence

E(c) = c(1) = c.
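Theorem 1's two ways of computing E[g(X)], the direct sum and the grouping by values of g in equation 17, can be compared on a small case; a Python sketch with a deliberately non-one-to-one g (names are ours):

```python
from fractions import Fraction

# Fair-die pmf; g(x) = (x - 3)^2 is deliberately not one-to-one.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def g(x):
    return (x - 3) ** 2

# Direct form of theorem 1: E[g(X)] = sum over x of g(x) p(x).
direct = sum(g(x) * p for x, p in pmf.items())

# Grouped form (equation 17): build p*(g_i), then sum g_i p*(g_i).
p_star = {}
for x, p in pmf.items():
    p_star[g(x)] = p_star.get(g(x), 0) + p
grouped = sum(gi * pi for gi, pi in p_star.items())

assert direct == grouped
# A constant function has expectation equal to the constant (theorem 2).
assert sum(5 * p for p in pmf.values()) == 5
```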

1.9.2. Constants multiplied by functions of random variables.

Theorem 3. Let X be a discrete random variable with probability function p(x), g(X) be a function of X, and let c be a constant. Then

E[c g(X)] = c E[g(X)]    (22)

Proof. By theorem 1 we have

E[c g(X)] = Σₓ c g(x) p(x) = c Σₓ g(x) p(x) = c E[g(X)]    (23)

1.9.3. Sums of functions of random variables.

Theorem 4. Let X be a discrete random variable with probability function p(x) and g₁(X), g₂(X), g₃(X), ..., g_k(X) be k functions of X. Then

E[g₁(X) + g₂(X) + g₃(X) + ... + g_k(X)] = E[g₁(X)] + E[g₂(X)] + ... + E[g_k(X)]    (24)

Proof for the case of k = 2. By theorem 1 we have

E[g₁(X) + g₂(X)] = Σₓ [g₁(x) + g₂(x)] p(x)
                 = Σₓ g₁(x) p(x) + Σₓ g₂(x) p(x)
                 = E[g₁(X)] + E[g₂(X)].    (25)

1.10. Variance of a random variable.

1.10.1. Definition of variance. The variance of a random variable X is defined to be the expected value of (X − µ)². That is

V(X) = E[(X − µ)²]    (26)

The standard deviation of X is the positive square root of V(X).

1.10.2. Example. Consider a random variable with the following probability distribution.

TABLE 5. Probability Distribution for X

x    p(x)
0    1/8
1    1/4
2    3/8
3    1/4

We can compute the expected value as

µ = E(X) = Σ_{x=0}^{3} x p_X(x) = (0)(1/8) + (1)(1/4) + (2)(3/8) + (3)(1/4) = 1.75    (27)

We compute the variance as

σ² = E[(X − µ)²] = Σ_{x=0}^{3} (x − µ)² p_X(x)
   = (0 − 1.75)²(1/8) + (1 − 1.75)²(1/4) + (2 − 1.75)²(3/8) + (3 − 1.75)²(1/4)
   = 0.9375

and the standard deviation as

σ = √0.9375 ≈ 0.9682

1.10.3. Alternative formula for the variance.

Theorem 5. Let X be a discrete random variable with probability function p_X(x); then

V(X) = σ² = E[(X − µ)²] = E(X²) − µ²    (28)

Proof. First write out the first part of equation 28 as follows

V(X) = σ² = E[(X − µ)²] = E(X² − 2µX + µ²) = E(X²) − E(2µX) + E(µ²)    (29)

where the last step follows from theorem 4. Note that µ is a constant; then apply theorems 3 and 2 to the second and third terms in equation 29 to obtain

V(X) = σ² = E(X²) − 2µE(X) + µ²    (30)

Then making the substitution that E(X) = µ, we obtain

V(X) = σ² = E(X²) − µ²    (31)

1.10.4. Example. Die toss. Toss a die that has six sides. Observe the number that comes up. The probability mass or frequency function is given by

p(x) = P(X = x) = 1/6  for x = 1, 2, 3, 4, 5, 6
                = 0    otherwise    (32)

We compute the expected value as

E(X) = Σ_{x∈X} x p_X(x) = Σ_{i=1}^{6} i (1/6) = 21/6 = 7/2    (33)

We compute the variance by first computing E(X²) as follows

E(X²) = Σ_{x∈X} x² p_X(x) = Σ_{i=1}^{6} i² (1/6) = 91/6    (34)

We can then compute the variance using the formula Var(X) = E(X²) − E²(X) and the fact that E(X²) = 91/6 from equation 34.

Var(X) = E(X²) − E²(X) = 91/6 − (7/2)² = 91/6 − 49/4 = 35/12    (35)
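The definition of the variance and the shortcut of theorem 5 can be compared numerically for both examples above; a minimal Python sketch using exact rational arithmetic (the helper `mean_var` is ours):

```python
from fractions import Fraction

def mean_var(pmf):
    """Return (E[X], Var[X]) for a finite pmf given as {x: p}."""
    mu = sum(x * p for x, p in pmf.items())
    var = sum((x - mu) ** 2 * p for x, p in pmf.items())
    # Theorem 5: the shortcut E(X^2) - mu^2 must agree with the definition.
    assert var == sum(x ** 2 * p for x, p in pmf.items()) - mu ** 2
    return mu, var

# Table 5 distribution: mean 7/4 = 1.75 and variance 15/16 = 0.9375.
mu, var = mean_var({0: Fraction(1, 8), 1: Fraction(1, 4),
                    2: Fraction(3, 8), 3: Fraction(1, 4)})
assert (mu, var) == (Fraction(7, 4), Fraction(15, 16))

# Fair die: Var(X) = 91/6 - (7/2)^2 = 35/12.
mu, var = mean_var({x: Fraction(1, 6) for x in range(1, 7)})
assert (mu, var) == (Fraction(7, 2), Fraction(35, 12))
```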

2. THE DISTRIBUTION OF RANDOM VARIABLES IN GENERAL

2.1. Cumulative distribution function. The cumulative distribution function (cdf) of a random variable X, denoted by F_X(·), is defined to be the function with domain the real line and range the interval [0, 1] which satisfies F_X(x) = P_X[X ≤ x] = P[{ω : X(ω) ≤ x}] for every real number x. F has the following properties:

F_X(−∞) = lim_{x→−∞} F_X(x) = 0,  F_X(+∞) = lim_{x→+∞} F_X(x) = 1    (36a)
F_X(a) ≤ F_X(b) for a < b  (F_X is a nondecreasing function of x)    (36b)
lim_{0<h→0} F_X(x + h) = F_X(x)  (F_X is continuous from the right)    (36c)

2.2. Example of a cumulative distribution function. Consider the following function

F_X(x) = 1/(1 + e^{−x})    (37)

Check condition 36a as follows.

lim_{x→−∞} F_X(x) = lim_{x→−∞} 1/(1 + e^{−x}) = 0
lim_{x→+∞} F_X(x) = lim_{x→+∞} 1/(1 + e^{−x}) = 1    (38)

To check condition 36b differentiate the cdf as follows

dF_X(x)/dx = d/dx [1/(1 + e^{−x})] = e^{−x}/(1 + e^{−x})² > 0    (39)

Condition 36c is satisfied because F_X(x) is a continuous function.

2.3. Discrete and continuous random variables.

2.3.1. Discrete random variable. A random variable X will be said to be discrete if the range of X is countable, that is, if it can assume only a finite or countably infinite number of values. Alternatively, a random variable is discrete if F_X(x) is a step function of x.

2.3.2. Continuous random variable. A random variable X is continuous if F_X(x) is a continuous function of x.

2.4. Frequency (probability mass) function of a discrete random variable.

2.4.1. Definition of a frequency (discrete density) function. If X is a discrete random variable with the distinct values x₁, x₂, ..., xₙ, ..., then the function denoted by p(·) and defined by

p(x) = P[X = xⱼ]  if x = xⱼ, j = 1, 2, ..., n, ...
     = 0          if x ≠ xⱼ    (40)

is defined to be the frequency, discrete density, or probability mass function of X. We will often write f(x) for p(x) to denote frequency as compared to probability. A discrete probability distribution on R^k is a probability measure P such that

Σᵢ P({xᵢ}) = 1    (41)

for some sequence of points in R^k, i.e., the sequence of points that occur as an outcome of the experiment. Given the definition of the frequency function in equation 40, we can also say that any non-negative function p on R^k that vanishes except on a sequence x₁, x₂, ..., xₙ, ... of vectors and that satisfies Σᵢ p(xᵢ) = 1 defines a unique probability distribution by the relation

P(A) = Σ_{xᵢ ∈ A} p(xᵢ)    (42)

2.4.2. Properties of discrete density functions. As defined in section 2.4.1, a probability mass function must satisfy

p(xⱼ) > 0  for j = 1, 2, ...    (43a)
p(x) = 0   for x ≠ xⱼ; j = 1, 2, ...    (43b)
Σⱼ p(xⱼ) = 1    (43c)

2.4.3. Example of a discrete density function. Consider a probability model where there are two possible outcomes to a single action (say heads and tails) and consider repeating this action several times until one of the outcomes occurs. Let the random variable be the number of actions required to obtain a particular outcome (say heads). Let p be the probability that an outcome is a head and (1 − p) the probability of a tail. Then to obtain the first head on the xth toss, we need to have a tail on the previous x − 1 tosses. So the probability of the first head occurring on the xth toss is given by

p(x) = P(X = x) = (1 − p)^{x−1} p  for x = 1, 2, ...
                = 0                otherwise    (44)

For example, with p = 1/2 the probability that it takes 4 tosses to get a head is 1/16, while the probability that it takes 2 tosses is 1/4.

2.4.4. Example of a discrete density function. Consider tossing a die. The sample space is {1, 2, 3, 4, 5, 6}. The elements are {1}, {2}, .... The frequency function is given by

p(x) = P(X = x) = 1/6  for x = 1, 2, 3, 4, 5, 6
                = 0    otherwise    (45)

The density function is represented in figure 1.

2.5. Probability density function of a continuous random variable.

FIGURE 1. Frequency Function for Tossing a Die

2.5.1. Alternative definition of continuous random variable. In section 2.3.2, we defined a random variable to be continuous if F_X(x) is a continuous function of x. We also say that a random variable X is continuous if there exists a function f(·) such that

F_X(x) = ∫_{−∞}^{x} f(u) du    (46)

for every real number x. The integral in equation 46 is a Riemann integral evaluated from −∞ to a real number x.

2.5.2. Definition of a probability density function (pdf). The probability density function, f_X(x), of a continuous random variable X is the function f(·) that satisfies

F_X(x) = ∫_{−∞}^{x} f_X(u) du    (47)

2.5.3. Properties of continuous density functions.

f(x) ≥ 0 for all x    (48a)
∫_{−∞}^{∞} f(x) dx = 1    (48b)

Analogous to equation 42, we can write in the continuous case

P(X ∈ A) = ∫_A f_X(x) dx    (49)
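The logistic cdf of section 2.2 and the cdf-density relation in equation 46 can be checked numerically; a rough Python sketch using a midpoint Riemann sum, with the lower limit truncated at −40 where the density is negligible (the truncation point and grid size are our choices):

```python
import math

# Logistic cdf from section 2.2 and its derivative (the density, equation 39).
def F(x):
    return 1.0 / (1.0 + math.exp(-x))

def f(x):
    return math.exp(-x) / (1.0 + math.exp(-x)) ** 2

def integral(lo, hi, n=200_000):
    """Midpoint Riemann sum of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

assert abs(integral(-40, 0) - F(0)) < 1e-6      # F(0) = 1/2 (equation 46)
assert abs(integral(-40, 40) - 1.0) < 1e-6      # total probability one (48b)
assert F(-30) < 1e-12 and F(30) > 1 - 1e-12     # limits in condition 36a
```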

where the integral is interpreted in the sense of Lebesgue.

Theorem 6. For a density function f(x) defined over the set of all real numbers the following holds for any real constants a and b with a ≤ b.

P(a ≤ X ≤ b) = ∫_a^b f_X(x) dx    (50)

Also note that for a continuous random variable X the following are equivalent

P(a ≤ X ≤ b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a < X < b)    (51)

Note that we can obtain the various probabilities by integrating the area under the density function as seen in figure 2.

FIGURE 2. Area under the Density Function as Probability

2.5.4. Example of a continuous density function. Consider the following function

f(x) = k e^{−3x}  for x > 0
     = 0          elsewhere    (52)

First we must find the value of k that makes this a valid density function. Given the condition in equation 48b we must have that

∫_{−∞}^{∞} f(x) dx = ∫₀^∞ k e^{−3x} dx = 1    (53)

Integrate the second term to obtain

∫₀^∞ k e^{−3x} dx = k lim_{t→∞} [−e^{−3x}/3]₀ᵗ = k/3    (54)

Given that this must be equal to one we obtain

k/3 = 1  ⇒  k = 3    (55)

The density is then given by

f(x) = 3 e^{−3x}  for x > 0
     = 0          elsewhere    (56)

Now find the probability that 1 ≤ X ≤ 2.

P(1 ≤ X ≤ 2) = ∫₁² 3 e^{−3x} dx = [−e^{−3x}]₁² = −e^{−6} + e^{−3} ≈ 0.0473    (57)

2.5.5. Example of a continuous density function. Let X have p.d.f.

f(x) = x e^{−x}  for 0 ≤ x < ∞
     = 0         elsewhere    (58)

This density function is shown in figure 3. We can find the probability that 1 ≤ X ≤ 2 by integration

P(1 ≤ X ≤ 2) = ∫₁² x e^{−x} dx    (59)

First integrate the expression on the right by parts letting u = x and dv = e^{−x} dx. Then du = dx and v = −e^{−x}. We then have

P(1 ≤ X ≤ 2) = [−x e^{−x}]₁² + ∫₁² e^{−x} dx
             = −2e^{−2} + e^{−1} + [−e^{−x}]₁²
             = −2e^{−2} + e^{−1} − e^{−2} + e^{−1}
             = 2e^{−1} − 3e^{−2} ≈ 0.3298    (60)
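Both probability computations above can be confirmed by numeric integration; a small Python sketch (midpoint rule; truncating the upper limit at x = 50 is our choice):

```python
import math

def integral(f, lo, hi, n=100_000):
    """Midpoint Riemann sum of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

f1 = lambda x: 3.0 * math.exp(-3.0 * x)      # equation 56
f2 = lambda x: x * math.exp(-x)              # equation 58

# Both densities integrate to one over their support (truncated at x = 50).
assert abs(integral(f1, 0, 50) - 1.0) < 1e-6
assert abs(integral(f2, 0, 50) - 1.0) < 1e-6

# P(1 <= X <= 2) matches the closed forms derived above.
assert abs(integral(f1, 1, 2) - (math.exp(-3) - math.exp(-6))) < 1e-6
assert abs(integral(f2, 1, 2) - (2 / math.e - 3 / math.e ** 2)) < 1e-6
```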

FIGURE 3. Graph of Density Function x e^{−x}

This is represented by the area between the lines in figure 4. We can also find the distribution function in this case.

F(x) = ∫₀ˣ t e^{−t} dt    (61)

Make the u dv substitution as before to obtain

F(x) = [−t e^{−t}]₀ˣ + ∫₀ˣ e^{−t} dt
     = −x e^{−x} + [−e^{−t}]₀ˣ
     = −x e^{−x} − e^{−x} + 1
     = 1 − e^{−x}(1 + x)    (62)

The distribution function is shown in figure 5. Now consider the probability that 1 ≤ X ≤ 2.

FIGURE 4. P(1 ≤ X ≤ 2)

P(1 ≤ X ≤ 2) = F(2) − F(1)
             = [1 − e^{−2}(2 + 1)] − [1 − e^{−1}(1 + 1)]
             = 2e^{−1} − 3e^{−2} ≈ 0.3298    (63)

We can see this as the difference in the values of F(x) at 1 and at 2 in figure 6.

2.5.6. Example 3 of a continuous density function. Consider the normal density function given by

f(x; µ, σ²) = (1/(√(2π) σ)) e^{−(1/2)((x−µ)/σ)²}    (64)

where µ and σ² are parameters of the function. The shape and location of the density function depend on the parameters µ and σ. In figure 7 the density is drawn for µ = 0 with σ = 1 and σ = 2.

2.5.7. Example 4 of a continuous density function. Consider a random variable with density function given by

f(x) = (p + 1) x^p  for 0 ≤ x ≤ 1
     = 0            otherwise    (65)

FIGURE 5. Graph of Distribution Function of Density Function x e^{−x}

where p is greater than −1. For example, if p = 0, then f(x) = 1; if p = 1, then f(x) = 2x, and so on. The density function with p = 1 is shown in figure 8. The distribution function with p = 1 is shown in figure 9.

2.6. Expected value.

2.6.1. Expectation of a single random variable. Let X be a random variable with density f(x). The expected value of the random variable, denoted E(X), is defined to be

E(X) = ∫_{−∞}^{∞} x f(x) dx   if X is continuous
     = Σ_{x∈X} x p_X(x)       if X is discrete    (66)

provided the sum or integral is defined. The expected value is a kind of weighted average. It is also sometimes referred to as the population mean of the random variable and denoted µ_X.

2.6.2. Expectation of a function of a single random variable. Let X be a random variable with density f(x). The expected value of a function g(·) of the random variable, denoted E(g(X)), is defined to be

E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx    (67)

if the integral is defined. The expectation of a random variable can also be defined using the Riemann-Stieltjes integral, where F is a monotonically increasing function of X. Specifically

FIGURE 6. P(1 ≤ X ≤ 2) using the Distribution Function

E(X) = ∫_{−∞}^{∞} x dF(x) = ∫ x dF    (68)

2.7. Properties of expectation.

2.7.1. Constants.

E[a] = ∫_{−∞}^{∞} a f(x) dx = a ∫_{−∞}^{∞} f(x) dx = a    (69)

2.7.2. Constants multiplied by a random variable.

E[aX] = ∫_{−∞}^{∞} a x f(x) dx = a ∫_{−∞}^{∞} x f(x) dx = a E[X]    (70)
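Properties 69 and 70 can be illustrated numerically using the density 3e^{−3x} of equation 56; a sketch in Python (the helper name and truncation point are ours):

```python
import math

# Expectations under f(x) = 3 e^(-3x) (equation 56) via midpoint sums.
def expect(g, n=100_000, hi=50.0):
    h = hi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += g(x) * 3.0 * math.exp(-3.0 * x)
    return total * h

EX = expect(lambda x: x)
assert abs(EX - 1.0 / 3.0) < 1e-6                       # E[X] = 1/3 here
assert abs(expect(lambda x: 7.0) - 7.0) < 1e-5          # E[a] = a (eq. 69)
assert abs(expect(lambda x: 5.0 * x) - 5.0 * EX) < 1e-9 # E[aX] = aE[X] (eq. 70)
```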

FIGURE 7. Normal Density Function

2.7.3. Constants multiplied by a function of a random variable.

E[a g(X)] = ∫_{−∞}^{∞} a g(x) f(x) dx = a ∫_{−∞}^{∞} g(x) f(x) dx = a E[g(X)]    (71)

2.7.4. Sums of expected values. Let X be a continuous random variable with density function f(x) and let g₁(X), g₂(X), g₃(X), ..., g_k(X) be k functions of X. Also let c₁, c₂, c₃, ..., c_k be k constants. Then

E[c₁g₁(X) + c₂g₂(X) + ... + c_k g_k(X)] = E[c₁g₁(X)] + E[c₂g₂(X)] + ... + E[c_k g_k(X)]    (72)

2.8. Example. Consider the density function

f(x) = (p + 1) x^p  for 0 ≤ x ≤ 1
     = 0            otherwise    (73)

where p is greater than −1. We can compute the E(X) as follows.

FIGURE 8. Density Function (p + 1)x^p

E(X) = ∫₀¹ x f(x) dx
     = ∫₀¹ x (p + 1) x^p dx
     = ∫₀¹ (p + 1) x^{p+1} dx
     = (p + 1) x^{p+2}/(p + 2) |₀¹
     = (p + 1)/(p + 2)    (74)

2.9. Example. Consider the exponential distribution, which has density function

f(x) = (1/λ) e^{−x/λ},  0 ≤ x < ∞, λ > 0    (75)

We can compute the E(X) as follows.

FIGURE 9. Distribution Function for Density (p + 1)x^p

E(X) = ∫₀^∞ (x/λ) e^{−x/λ} dx

Integrate by parts with u = x, du = dx, dv = (1/λ) e^{−x/λ} dx, v = −e^{−x/λ}:

E(X) = [−x e^{−x/λ}]₀^∞ + ∫₀^∞ e^{−x/λ} dx
     = 0 + [−λ e^{−x/λ}]₀^∞
     = λ    (76)

2.10. Variance.

2.10.1. Definition of variance. The variance of a single random variable X with mean µ is given by

Var(X) = σ² = E[(X − E(X))²] = E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx    (77)

We can write this in a different fashion by expanding the last term in equation 77.

Var(X) = ∫_{−∞}^{∞} (x − µ)² f(x) dx
       = ∫_{−∞}^{∞} (x² − 2µx + µ²) f(x) dx
       = ∫_{−∞}^{∞} x² f(x) dx − 2µ ∫_{−∞}^{∞} x f(x) dx + µ² ∫_{−∞}^{∞} f(x) dx
       = E[X²] − 2µE[X] + µ²
       = E[X²] − 2µ² + µ²
       = E[X²] − µ²    (78)

The variance is a measure of the dispersion of the random variable about the mean.

2.10.2. Variance example. Consider the density function

f(x) = (p + 1) x^p  for 0 ≤ x ≤ 1
     = 0            otherwise    (79)

where p is greater than −1. We can compute the Var(X) as follows.

E(X) = ∫₀¹ x (p + 1) x^p dx = (p + 1) x^{p+2}/(p + 2) |₀¹ = (p + 1)/(p + 2)

E(X²) = ∫₀¹ x² (p + 1) x^p dx = (p + 1) x^{p+3}/(p + 3) |₀¹ = (p + 1)/(p + 3)

Var(X) = E(X²) − E²(X)
       = (p + 1)/(p + 3) − [(p + 1)/(p + 2)]²
       = (p + 1)/[(p + 2)²(p + 3)]    (80)

The values of the mean and variance for various values of p are given in table 6.
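Equations 74 and 80 can be spot-checked by numeric integration for several values of p; a minimal Python sketch (the p values tried are our choice):

```python
import math

# Midpoint-rule moments of f(x) = (p + 1) x**p on [0, 1].
def moment(p, r, n=100_000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** r * (p + 1) * x ** p
    return total * h

for p in [0.0, 0.5, 1.0, 2.0]:
    mean = moment(p, 1)
    var = moment(p, 2) - mean ** 2
    assert abs(mean - (p + 1) / (p + 2)) < 1e-6                   # equation 74
    assert abs(var - (p + 1) / ((p + 2) ** 2 * (p + 3))) < 1e-6   # equation 80
```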

TABLE 6. Mean and Variance for Distribution f(x) = (p + 1)x^p for alternative values of p

p    E(X)   Var(X)
0    1/2    1/12
1    2/3    1/18
2    3/4    3/80
3    4/5    2/75

2.10.3. Variance example. Consider the exponential distribution, which has density function

f(x) = (1/λ) e^{−x/λ},  0 ≤ x < ∞, λ > 0    (81)

We can compute the E(X²) as follows.

E(X²) = ∫₀^∞ (x²/λ) e^{−x/λ} dx

Integrate by parts with u = x², du = 2x dx, dv = (1/λ) e^{−x/λ} dx, v = −e^{−x/λ}:

E(X²) = [−x² e^{−x/λ}]₀^∞ + ∫₀^∞ 2x e^{−x/λ} dx
      = 0 + 2λ ∫₀^∞ (x/λ) e^{−x/λ} dx
      = (2λ)(λ) = 2λ²    (82)

We can then compute the variance as

Var(X) = E(X²) − E²(X) = 2λ² − λ² = λ²    (83)

3. MOMENTS AND MOMENT GENERATING FUNCTIONS

3.1. Moments.

3.1.1. Moments about the origin (raw moments). The rth moment about the origin of a random variable X, denoted by µ′_r, is the expected value of X^r; symbolically,

µ′_r = E(X^r) = Σₓ x^r f(x)    (84)

for r = 0, 1, 2, ... when X is discrete and
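The exponential moments just derived can be verified numerically; a short Python sketch (grid size and truncation at 50λ are our choices):

```python
import math

# Midpoint-rule raw moments of the exponential density (1/lam) e^(-x/lam).
def raw_moment(r, lam, n=200_000):
    hi = 50.0 * lam
    h = hi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** r * math.exp(-x / lam) / lam
    return total * h

for lam in [0.5, 1.0, 2.0]:
    m1, m2 = raw_moment(1, lam), raw_moment(2, lam)
    assert abs(m1 - lam) < 1e-4                   # E(X) = lambda (eq. 76)
    assert abs(m2 - 2 * lam ** 2) < 1e-3          # E(X^2) = 2 lambda^2 (eq. 82)
    assert abs((m2 - m1 ** 2) - lam ** 2) < 1e-3  # Var(X) = lambda^2 (eq. 83)
```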

µ′_r = E(X^r) = ∫_{−∞}^{∞} x^r f(x) dx    (85)

when X is continuous. The rth moment about the origin is only defined if E[X^r] exists. A moment about the origin is sometimes called a raw moment. Note that µ′₁ = E(X) = µ_X, the mean of the distribution of X, or simply the mean of X. The rth moment is sometimes written as a function of θ, where θ is a vector of parameters that characterize the distribution of X.

3.1.2. Central moments. The rth moment about the mean of a random variable X, denoted by µ_r, is the expected value of (X − µ_X)^r; symbolically,

µ_r = E[(X − µ_X)^r] = Σₓ (x − µ_X)^r f(x)    (86)

for r = 0, 1, 2, ... when X is discrete and

µ_r = E[(X − µ_X)^r] = ∫_{−∞}^{∞} (x − µ_X)^r f(x) dx    (87)

when X is continuous. The rth moment about the mean is only defined if E[(X − µ_X)^r] exists. The rth moment about the mean of a random variable X is sometimes called the rth central moment of X. The rth central moment of X about a is defined as E[(X − a)^r]. If a = µ_X, we have the rth central moment of X about µ_X. Note that µ₁ = E[(X − µ_X)] = 0 and µ₂ = E[(X − µ_X)²] = Var[X]. Also note that all odd moments of X around its mean are zero for symmetrical distributions, provided such moments exist.

3.1.3. Alternative formula for the variance.

Theorem 7.

σ²_X = µ′₂ − µ²_X    (88)

Proof.

Var(X) = σ²_X = E[(X − E(X))²] = E[(X − µ_X)²]
       = E[X² − 2µ_X X + µ²_X]
       = E[X²] − 2µ_X E[X] + µ²_X
       = E[X²] − 2µ²_X + µ²_X
       = E[X²] − µ²_X
       = µ′₂ − µ²_X    (89)

3.2. Moment generating functions.

3.2.1. Definition of a moment generating function. The moment generating function of a random variable X is given by

M_X(t) = E e^{tX}    (90)

provided that the expectation exists for t in some neighborhood of 0. That is, there is an h > 0 such that, for all t in −h < t < h, E e^{tX} exists. We can write M_X(t) as

M_X(t) = ∫_{−∞}^{∞} e^{tx} f_X(x) dx   if X is continuous
       = Σₓ e^{tx} P(X = x)            if X is discrete    (91)

To understand why we call this a moment generating function consider first the discrete case. We can write e^{tx} in an alternative way using a Maclaurin series expansion. The Maclaurin series of a function f(t) is given by

f(t) = Σ_{n=0}^{∞} [f^{(n)}(0)/n!] t^n
     = f(0) + f^{(1)}(0) t/1! + f^{(2)}(0) t²/2! + f^{(3)}(0) t³/3! + ...    (92)

where f^{(n)} is the nth derivative of the function with respect to t and f^{(n)}(0) is the nth derivative of f with respect to t evaluated at t = 0. For the function e^{tx}, the requisite derivatives are

d e^{tx}/dt = x e^{tx},        d e^{tx}/dt |_{t=0} = x
d² e^{tx}/dt² = x² e^{tx},     d² e^{tx}/dt² |_{t=0} = x²
d³ e^{tx}/dt³ = x³ e^{tx},     d³ e^{tx}/dt³ |_{t=0} = x³
...
d^j e^{tx}/dt^j = x^j e^{tx},  d^j e^{tx}/dt^j |_{t=0} = x^j    (93)

We can then write the Maclaurin series as

e^{tx} = Σ_{n=0}^{∞} x^n t^n/n! = 1 + tx + t²x²/2! + t³x³/3! + ... + t^r x^r/r! + ...    (94)

We can then compute E(e^{tX}) = M_X(t) as

E[e^{tX}] = M_X(t) = Σₓ e^{tx} f(x)
  = Σₓ [1 + tx + t²x²/2! + t³x³/3! + ... + t^r x^r/r! + ...] f(x)
  = Σₓ f(x) + t Σₓ x f(x) + (t²/2!) Σₓ x² f(x) + (t³/3!) Σₓ x³ f(x) + ... + (t^r/r!) Σₓ x^r f(x) + ...
  = 1 + µ′₁ t + µ′₂ t²/2! + µ′₃ t³/3! + ... + µ′_r t^r/r! + ...    (95)

In the expansion, the coefficient of t^r/r! is µ′_r, the rth moment about the origin of the random variable X.

3.2.2. Example derivation of a moment generating function. Find the moment-generating function of the random variable whose probability density is given by

f(x) = e^{−x}  for x > 0
     = 0       elsewhere    (96)

and use it to find an expression for µ′_r. By definition

M_X(t) = E(e^{tX}) = ∫₀^∞ e^{tx} e^{−x} dx
       = ∫₀^∞ e^{−x(1−t)} dx
       = [−e^{−x(1−t)}/(1 − t)]₀^∞
       = 1/(1 − t)  for t < 1    (97)

As is well known, when |t| < 1 the Maclaurin series for 1/(1 − t) is given by

1/(1 − t) = 1 + t + t² + t³ + ... + t^r + ...
          = 1 + 1!·t/1! + 2!·t²/2! + 3!·t³/3! + ... + r!·t^r/r! + ...    (98)

or we can derive it directly using equation 92. To derive it directly utilizing the Maclaurin series we need all the derivatives of the function 1/(1 − t) evaluated at 0. The derivatives are as follows

f(t) = (1 − t)^{−1}
f^{(1)}(t) = (1 − t)^{−2}
f^{(2)}(t) = 2(1 − t)^{−3}
f^{(3)}(t) = 6(1 − t)^{−4}
f^{(4)}(t) = 24(1 − t)^{−5}
f^{(5)}(t) = 120(1 − t)^{−6}
...
f^{(n)}(t) = n!(1 − t)^{−(n+1)}    (99)

Evaluating them at zero gives

f(0) = 1
f^{(1)}(0) = 1 = 1!
f^{(2)}(0) = 2 = 2!
f^{(3)}(0) = 6 = 3!
f^{(4)}(0) = 24 = 4!
f^{(5)}(0) = 120 = 5!
...
f^{(n)}(0) = n!    (100)

Now substituting in appropriate values for the derivatives of the function f(t) = 1/(1 − t) we obtain

1/(1 − t) = Σ_{n=0}^{∞} f^{(n)}(0) t^n/n!
          = 1 + 1!·t/1! + 2!·t²/2! + 3!·t³/3! + ...
          = 1 + t + t² + t³ + ...    (101)

A further issue is to determine the radius of convergence for this particular function. Consider an arbitrary series where the nth term is denoted by a_n. The ratio test says that

if lim_{n→∞} |a_{n+1}/a_n| = L < 1, the series is absolutely convergent    (102a)
if lim_{n→∞} |a_{n+1}/a_n| = L > 1, or the limit is infinite, the series is divergent    (102b)

Now consider the nth term and the (n+1)th term of the Maclaurin series expansion of 1/(1 − t).

lim_{n→∞} |t^{n+1}/t^n| = |t| = L    (103)

The only way for this to be less than one in absolute value is for the absolute value of t to be less than one, i.e., |t| < 1. Now writing out the Maclaurin series as in equation 98 and remembering that in the expansion the coefficient of t^r/r! is µ′_r, the rth moment about the origin of the random variable X,

M_X(t) = 1/(1 − t) = 1 + t + t² + t³ + ... + t^r + ...
       = 1 + 1!·t/1! + 2!·t²/2! + 3!·t³/3! + ... + r!·t^r/r! + ...    (104)

it is clear that µ′_r = r! for r = 0, 1, 2, ... For this density function E[X] = 1 because the coefficient of t/1! is 1. We can verify this by finding E[X] directly by integrating.

E(X) = ∫₀^∞ x e^{−x} dx    (105)

To do so we need to integrate by parts with u = x and dv = e^{−x} dx. Then du = dx and v = −e^{−x}. We then have

E(X) = ∫₀^∞ x e^{−x} dx = [−x e^{−x}]₀^∞ + ∫₀^∞ e^{−x} dx
     = [0 − 0] − [e^{−x}]₀^∞
     = [0 − 0] − [0 − 1]
     = 1    (106)

3.2.3. Moment property of the moment generating functions for discrete random variables.

Theorem 8. If M_X(t) exists, then for any positive integer k,

d^k M_X(t)/dt^k |_{t=0} = M^{(k)}_X(0) = µ′_k.    (107)

In other words, if you find the kth derivative of M_X(t) with respect to t and then set t = 0, the result will be µ′_k.

Proof. d^k M_X(t)/dt^k, or M^{(k)}_X(t), is the kth derivative of M_X(t) with respect to t. From equation 95 we know that
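The claim that µ′_r = r! for this density, and the moment property of theorem 8, can be checked numerically; a rough Python sketch using midpoint sums and finite-difference derivatives of M(t) = 1/(1 − t) (grid sizes and step sizes are our choices):

```python
import math

# Raw moments of f(x) = e^(-x), x > 0, by midpoint-rule integration;
# the MGF argument above says the r-th raw moment should equal r!.
def raw_moment(r, n=200_000, hi=60.0):
    h = hi / n
    return sum(((i + 0.5) * h) ** r * math.exp(-(i + 0.5) * h)
               for i in range(n)) * h

for r in range(5):
    assert abs(raw_moment(r) - math.factorial(r)) < 1e-3

# Finite-difference derivatives of M(t) = 1/(1 - t) at t = 0 agree too.
M = lambda t: 1.0 / (1.0 - t)
eps = 1e-4
first = (M(eps) - M(-eps)) / (2 * eps)             # ~ mu'_1 = 1
second = (M(eps) - 2 * M(0) + M(-eps)) / eps ** 2  # ~ mu'_2 = 2
assert abs(first - 1.0) < 1e-6 and abs(second - 2.0) < 1e-4
```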

M_X(t) = E(e^{tX}) = 1 + t µ′₁ + (t²/2!) µ′₂ + (t³/3!) µ′₃ + ...    (108)

It then follows that

M^{(1)}_X(t) = µ′₁ + t µ′₂ + (t²/2!) µ′₃ + (t³/3!) µ′₄ + ...    (109a)
M^{(2)}_X(t) = µ′₂ + t µ′₃ + (t²/2!) µ′₄ + ...    (109b)

where we note that n/n! = 1/(n − 1)!. In general we find that

M^{(k)}_X(t) = µ′_k + t µ′_{k+1} + (t²/2!) µ′_{k+2} + ...    (110)

Setting t = 0 in each of the above derivatives, we obtain

M^{(1)}_X(0) = µ′₁    (111a)
M^{(2)}_X(0) = µ′₂    (111b)

and, in general,

M^{(k)}_X(0) = µ′_k    (112)

These operations involve interchanging derivatives and infinite sums, which can be justified if M_X(t) exists.

3.2.4. Moment property of the moment generating functions for continuous random variables.

Theorem 9. If X has mgf M_X(t), then

E X^n = M^{(n)}_X(0),    (113)

where we define

M^{(n)}_X(0) = d^n M_X(t)/dt^n |_{t=0}    (114)

The nth moment of the distribution is equal to the nth derivative of M_X(t) evaluated at t = 0.

Proof. We will assume that we can differentiate under the integral sign and differentiate equation 91.

d M_X(t)/dt = d/dt ∫_{−∞}^{∞} e^{tx} f_X(x) dx
            = ∫_{−∞}^{∞} (d/dt e^{tx}) f_X(x) dx
            = ∫_{−∞}^{∞} x e^{tx} f_X(x) dx
            = E(X e^{tX})    (115)

Now evaluate equation 115 at t = 0.

d M_X(t)/dt |_{t=0} = E(X e^{tX}) |_{t=0} = E X    (116)

We can proceed in a similar fashion for other derivatives. We illustrate for n = 2.

d² M_X(t)/dt² = d/dt ∫_{−∞}^{∞} x e^{tx} f_X(x) dx
              = ∫_{−∞}^{∞} x² e^{tx} f_X(x) dx
              = E(X² e^{tX})    (117)

Now evaluate equation 117 at t = 0.

d² M_X(t)/dt² |_{t=0} = E(X² e^{tX}) |_{t=0} = E X²    (118)

3.3. Some properties of moment generating functions. If a and b are constants, then

M_{X+a}(t) = E(e^{(X+a)t}) = e^{at} M_X(t)    (119a)
M_{bX}(t) = E(e^{bXt}) = M_X(bt)    (119b)
M_{(X+a)/b}(t) = E(e^{((X+a)/b)t}) = e^{at/b} M_X(t/b)    (119c)

3.4. Examples of moment generating functions.

3.4.1. Example 1. Consider a random variable with two possible values, 0 and 1, and corresponding probabilities f(1) = p, f(0) = 1 − p. For this distribution

M_X(t) = E(e^{tX}) = e^{t·1} f(1) + e^{t·0} f(0)
       = e^t p + e^0 (1 − p)
       = (1 − p) + e^t p
       = 1 + p(e^t − 1)    (120)

The derivatives are

M^{(1)}_X(t) = p e^t
M^{(2)}_X(t) = p e^t
M^{(3)}_X(t) = p e^t
...
M^{(k)}_X(t) = p e^t    (121)

Thus

E[X^k] = M^{(k)}_X(0) = p e^0 = p    (122)

We can also find this by expanding M_X(t) using the Maclaurin series. The moment generating function for this problem is

M_X(t) = E(e^{tX}) = 1 + p(e^t − 1)    (123)

To obtain the expansion we first need the series expansion of e^t. All derivatives of e^t are equal to e^t. The expansion is then given by

e^t = Σ_{n=0}^{∞} t^n/n! = 1 + t + t²/2! + t³/3! + ... + t^r/r! + ...    (124)

Substituting equation 124 into equation 123 we obtain

M_X(t) = 1 + p(e^t − 1)
       = 1 + p[1 + t + t²/2! + t³/3! + ... + t^r/r! + ...] − p
       = 1 + pt + p t²/2! + p t³/3! + ... + p t^r/r! + ...    (125)

We can then see that all moments are equal to p. This is also clear by direct computation

E(X) = (1)(p) + (0)(1 − p) = p
E(X²) = (1²)(p) + (0²)(1 − p) = p
E(X³) = (1³)(p) + (0³)(1 − p) = p
...
E(X^k) = (1^k)(p) + (0^k)(1 − p) = p    (126)

3.4.2. Example 2. Consider the exponential distribution, which has a density function given by

f(x) = (1/λ) e^{−x/λ},  0 ≤ x < ∞, λ > 0    (127)

For λt < 1, we have

M_X(t) = (1/λ) ∫₀^∞ e^{tx} e^{−x/λ} dx
       = (1/λ) ∫₀^∞ e^{−((1−λt)/λ) x} dx
       = (1/λ) [−(λ/(1 − λt)) e^{−((1−λt)/λ) x}]₀^∞
       = (1/λ) [λ/(1 − λt)]
       = 1/(1 − λt)    (128)

We can then find the moments by differentiation. The first moment is

E(X) = d/dt (1 − λt)^{−1} |_{t=0} = λ(1 − λt)^{−2} |_{t=0} = λ    (129)

The second moment is

    E(X²) = (d²/dt²)(1−λt)^{−1} |_{t=0}
          = (d/dt) λ(1−λt)^{−2} |_{t=0}
          = 2λ²(1−λt)^{−3} |_{t=0}
          = 2λ²    (30)

3.4.3. Example 3. Consider the normal distribution, which has density function

    f(x; µ, σ) = (1/(√(2π) σ)) e^{−(1/2)((x−µ)/σ)²}    (31)

Let g(X) = X − µ, where X is a normally distributed random variable with mean µ and variance σ². Find the moment-generating function for X − µ. This is the moment generating function for central moments of the normal distribution.

    M_{X−µ}(t) = E[e^{t(X−µ)}] = ∫_{−∞}^{∞} (1/(√(2π) σ)) e^{t(x−µ)} e^{−(x−µ)²/(2σ²)} dx    (32)

To integrate, let u = x − µ. Then du = dx and

    M_{X−µ}(t) = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{tu} e^{−u²/(2σ²)} du
               = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{tu − u²/(2σ²)} du
               = (1/(σ√(2π))) ∫_{−∞}^{∞} exp[−(1/(2σ²))(u² − 2σ² t u)] du    (33)

To simplify the integral, complete the square in the exponent of e. That is, write the term in brackets as

    u² − 2σ² t u = (u² − 2σ² t u + σ⁴t²) − σ⁴t²    (34)

This then gives

    exp[−(1/(2σ²))(u² − 2σ² t u)] = exp[−(1/(2σ²))(u² − 2σ² t u + σ⁴t²)] exp[(1/(2σ²)) σ⁴t²]
                                  = exp[−(1/(2σ²))(u² − 2σ² t u + σ⁴t²)] exp[σ²t²/2]    (35)

Now substitute equation 35 into equation 33 and simplify. We begin by making the substitution and factoring out the term exp[σ²t²/2].

    M_{X−µ}(t) = (1/(σ√(2π))) ∫_{−∞}^{∞} exp[σ²t²/2] exp[−(1/(2σ²))(u² − 2σ² t u + σ⁴t²)] du
               = exp[σ²t²/2] (1/(σ√(2π))) ∫_{−∞}^{∞} exp[−(1/(2σ²))(u² − 2σ² t u + σ⁴t²)] du    (36)

Now move 1/(σ√(2π)) inside the integral sign and note that u² − 2σ² t u + σ⁴t² = (u − σ²t)², so that

    M_{X−µ}(t) = exp[σ²t²/2] ∫_{−∞}^{∞} (1/(σ√(2π))) exp[−(u − σ²t)²/(2σ²)] du    (37)

The function inside the integral is a normal density function with mean and variance equal to σ²t and σ², respectively. Hence the integral is equal to 1. Then

    M_{X−µ}(t) = e^{t²σ²/2}    (38)

The central moments of X, that is the moments of U = X − µ, can be obtained from M_{X−µ}(t) by differentiating. For example, the first central moment is

    E(X − µ) = (d/dt) e^{t²σ²/2} |_{t=0} = tσ² e^{t²σ²/2} |_{t=0} = 0    (39)

The second central moment is

    E(X − µ)² = (d²/dt²) e^{t²σ²/2} |_{t=0}
              = (d/dt) tσ² e^{t²σ²/2} |_{t=0}
              = (t²σ⁴ + σ²) e^{t²σ²/2} |_{t=0}
              = σ²    (40)

The third central moment is

    E(X − µ)³ = (d³/dt³) e^{t²σ²/2} |_{t=0}
              = (d/dt)(t²σ⁴ + σ²) e^{t²σ²/2} |_{t=0}
              = (t³σ⁶ + tσ⁴ + 2tσ⁴) e^{t²σ²/2} |_{t=0}
              = (t³σ⁶ + 3tσ⁴) e^{t²σ²/2} |_{t=0}
              = 0    (41)

The fourth central moment is

    E(X − µ)⁴ = (d⁴/dt⁴) e^{t²σ²/2} |_{t=0}
              = (d/dt)(t³σ⁶ + 3tσ⁴) e^{t²σ²/2} |_{t=0}
              = (t⁴σ⁸ + 3t²σ⁶ + 3t²σ⁶ + 3σ⁴) e^{t²σ²/2} |_{t=0}
              = (t⁴σ⁸ + 6t²σ⁶ + 3σ⁴) e^{t²σ²/2} |_{t=0}
              = 3σ⁴    (42)

3.4.4. Example 4. Now consider the raw moments of the normal distribution. The density function is given by

    f(x; µ, σ) = (1/(√(2π) σ)) e^{−(1/2)((x−µ)/σ)²}    (43)

To find the moment-generating function for X we integrate the following function.

    M_X(t) = E[e^{tX}] = ∫_{−∞}^{∞} (1/(√(2π) σ)) e^{tx} e^{−(x−µ)²/(2σ²)} dx    (44)

First rewrite the integral as follows by putting the exponents over a common denominator.

    M_X(t) = ∫_{−∞}^{∞} (1/(√(2π) σ)) e^{tx} e^{−(x−µ)²/(2σ²)} dx
           = ∫_{−∞}^{∞} (1/(√(2π) σ)) exp[−(1/(2σ²))((x−µ)² − 2σ² t x)] dx    (45)

Now square the term in the exponent and simplify.

    M_X(t) = ∫_{−∞}^{∞} (1/(√(2π) σ)) exp[−(1/(2σ²))(x² − 2µx + µ² − 2σ² t x)] dx
           = ∫_{−∞}^{∞} (1/(√(2π) σ)) exp[−(1/(2σ²))(x² − 2x(µ + σ²t) + µ²)] dx    (46)

Now consider the exponent of e and complete the square for the portion in brackets as follows.

    x² − 2x(µ + σ²t) + µ² = x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t² − 2µσ²t − σ⁴t²
                          = [x − (µ + σ²t)]² − 2µσ²t − σ⁴t²    (47)

To simplify the integral, multiply and divide by

    e^{(2µσ²t + σ⁴t²)/(2σ²)} · e^{−(2µσ²t + σ⁴t²)/(2σ²)}    (48)

in the following manner.

    M_X(t) = e^{(2µσ²t + σ⁴t²)/(2σ²)} (1/(√(2π) σ)) ∫_{−∞}^{∞} e^{−(2µσ²t + σ⁴t²)/(2σ²)} exp[−(1/(2σ²))(x² − 2x(µ + σ²t) + µ²)] dx
           = e^{(2µσ²t + σ⁴t²)/(2σ²)} (1/(√(2π) σ)) ∫_{−∞}^{∞} exp[−(1/(2σ²))(x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t²)] dx    (49)

Now find the square root of

    x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t²    (50)

Given that we would like to have (x − something)², try squaring x − (µ + σ²t) as follows.

    [x − (µ + σ²t)]² = x² − 2x(µ + σ²t) + (µ + σ²t)²
                     = x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t²    (51)

So x − (µ + σ²t) is the square root of x² − 2x(µ + σ²t) + µ² + 2µσ²t + σ⁴t². Making the substitution in equation 49 we obtain

    M_X(t) = e^{(2µσ²t + σ⁴t²)/(2σ²)} (1/(√(2π) σ)) ∫_{−∞}^{∞} exp[−(1/(2σ²)) [x − (µ + σ²t)]²] dx    (52)

The expression to the right of e^{(2µσ²t + σ⁴t²)/(2σ²)} is the integral of a normal density function with mean and variance equal to µ + σ²t and σ², respectively. Hence the integral is equal to 1. Then

    M_X(t) = e^{(2µσ²t + σ⁴t²)/(2σ²)} = e^{µt + t²σ²/2}    (53)

The moments of X can be obtained from M_X(t) by differentiating with respect to t. For example, the first raw moment is

    E(X) = (d/dt) e^{µt + t²σ²/2} |_{t=0}
         = (µ + tσ²) e^{µt + t²σ²/2} |_{t=0}
         = µ    (54)

The second raw moment is

    E(X²) = (d²/dt²) e^{µt + t²σ²/2} |_{t=0}
          = (d/dt)(µ + tσ²) e^{µt + t²σ²/2} |_{t=0}
          = [(µ + tσ²)² + σ²] e^{µt + t²σ²/2} |_{t=0}
          = µ² + σ²    (55)

The third raw moment is

    E(X³) = (d³/dt³) e^{µt + t²σ²/2} |_{t=0}
          = (d/dt)[(µ + tσ²)² + σ²] e^{µt + t²σ²/2} |_{t=0}
          = [(µ + tσ²)³ + σ²(µ + tσ²) + 2σ²(µ + tσ²)] e^{µt + t²σ²/2} |_{t=0}
          = [(µ + tσ²)³ + 3σ²(µ + tσ²)] e^{µt + t²σ²/2} |_{t=0}
          = µ³ + 3σ²µ    (56)

The fourth raw moment is

    E(X⁴) = (d⁴/dt⁴) e^{µt + t²σ²/2} |_{t=0}
          = (d/dt)[(µ + tσ²)³ + 3σ²(µ + tσ²)] e^{µt + t²σ²/2} |_{t=0}
          = [(µ + tσ²)⁴ + 3σ²(µ + tσ²)² + 3σ²(µ + tσ²)² + 3σ⁴] e^{µt + t²σ²/2} |_{t=0}
          = [(µ + tσ²)⁴ + 6σ²(µ + tσ²)² + 3σ⁴] e^{µt + t²σ²/2} |_{t=0}
          = µ⁴ + 6µ²σ² + 3σ⁴    (57)
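The closed-form MGF in equation 53 makes the four raw moments above easy to check numerically. The following sketch is illustrative only: the parameter values µ = 1.5 and σ = 2, and all function names, are assumptions, not part of the text. It approximates the derivatives of M_X(t) = e^{µt + t²σ²/2} at t = 0 with central finite differences and compares them with the formulas in equations 54 through 57.

```python
import math

MU, SIGMA = 1.5, 2.0  # illustrative parameter values, not from the text

def M(t):
    # closed-form normal MGF from equation 53: M_X(t) = exp(mu*t + t^2*sigma^2/2)
    return math.exp(MU * t + 0.5 * SIGMA**2 * t**2)

def deriv_at_zero(f, n, h=1e-2):
    # central finite-difference stencils for the n-th derivative at t = 0
    stencils = {
        1: [(-0.5, -1), (0.5, 1)],
        2: [(1.0, -1), (-2.0, 0), (1.0, 1)],
        3: [(-0.5, -2), (1.0, -1), (-1.0, 1), (0.5, 2)],
        4: [(1.0, -2), (-4.0, -1), (6.0, 0), (-4.0, 1), (1.0, 2)],
    }
    return sum(c * f(k * h) for c, k in stencils[n]) / h**n

numeric = {n: deriv_at_zero(M, n) for n in (1, 2, 3, 4)}
exact = {
    1: MU,                                            # equation 54
    2: MU**2 + SIGMA**2,                              # equation 55
    3: MU**3 + 3 * SIGMA**2 * MU,                     # equation 56
    4: MU**4 + 6 * MU**2 * SIGMA**2 + 3 * SIGMA**4,   # equation 57
}
for n in (1, 2, 3, 4):
    print(f"E(X^{n}): numeric {numeric[n]:.4f}  vs  formula {exact[n]:.4f}")
```

The finite-difference values agree with the closed-form moments to the accuracy of the stencils, which is all a numerical check of this kind can promise.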

4. CHEBYSHEV'S INEQUALITY

Chebyshev's inequality applies equally well to discrete and continuous random variables. We state it here as a theorem.

4.1. A Theorem of Chebyshev.

Theorem 10. Let X be a random variable with mean µ and finite variance σ². Then, for any constant k > 0,

    P(|X − µ| < kσ) ≥ 1 − 1/k²  or  P(|X − µ| ≥ kσ) ≤ 1/k²    (58)

The result applies to any probability distribution, whether the probability histogram is bell-shaped or not. The results of the theorem are very conservative in the sense that the actual probability that X is in the interval µ ± kσ usually exceeds the lower bound for the probability, 1 − 1/k², by a considerable amount.

Chebyshev's theorem enables us to find bounds for probabilities that ordinarily would have to be obtained by tedious mathematical manipulations (integration or summation). We often can obtain estimates of the means and variances of random variables without specifying the distribution of the variable. In situations like these, Chebyshev's inequality provides meaningful bounds for probabilities of interest.

Proof. Let f(x) denote the density function of X. Then

    V(X) = σ² = ∫_{−∞}^{∞} (x − µ)² f(x) dx
         = ∫_{−∞}^{µ−kσ} (x − µ)² f(x) dx + ∫_{µ−kσ}^{µ+kσ} (x − µ)² f(x) dx + ∫_{µ+kσ}^{∞} (x − µ)² f(x) dx    (59)

The second integral is always greater than or equal to zero.

Now consider the relationship between (x − µ)² and k²σ². For x ≤ µ − kσ,

    x ≤ µ − kσ
    ⇒ µ − x ≥ kσ
    ⇒ (µ − x)² ≥ k²σ²
    ⇒ (x − µ)² ≥ k²σ²    (60)

And similarly,

for x ≥ µ + kσ,

    x ≥ µ + kσ
    ⇒ x − µ ≥ kσ
    ⇒ (x − µ)² ≥ k²σ²    (61)

Now replace (x − µ)² with k²σ² in the first and third integrals of equation 59 to obtain the inequality

    V(X) = σ² ≥ ∫_{−∞}^{µ−kσ} k²σ² f(x) dx + ∫_{µ+kσ}^{∞} k²σ² f(x) dx    (62)

Then

    σ² ≥ k²σ² [∫_{−∞}^{µ−kσ} f(x) dx + ∫_{µ+kσ}^{∞} f(x) dx]    (63)

We can write this in the following useful manner

    σ² ≥ k²σ² {P(X ≤ µ − kσ) + P(X ≥ µ + kσ)} = k²σ² P(|X − µ| ≥ kσ)    (64)

Dividing by k²σ², we obtain

    P(|X − µ| ≥ kσ) ≤ 1/k²    (65)

or, equivalently,

    P(|X − µ| < kσ) ≥ 1 − 1/k²    (66)

4.2. Example. The number of accidents that occur during a given month at a particular intersection, X, tabulated by a group of Boy Scouts over a long time period is found to have a mean of 12 and a standard deviation of 2. The underlying distribution is not known. What is the probability that, next month, X will be greater than eight but less than sixteen? We thus want P[8 < X < 16]. We can write equation 58 in the following useful manner.

    P[(µ − kσ) < X < (µ + kσ)] ≥ 1 − 1/k²    (67)

For this problem µ = 12 and σ = 2, so µ ± kσ = 12 ± 2k. We can solve this equation for the k that gives us the desired bounds on the probability.

We then obtain

    µ − kσ = 12 − 2k = 8 ⇒ 2k = 4 ⇒ k = 2

and

    µ + kσ = 12 + 2k = 16 ⇒ 2k = 4 ⇒ k = 2    (68)

so that

    P[8 < X < 16] ≥ 1 − 1/2² = 3/4    (69)

Therefore the probability that X is between 8 and 16 is at least 3/4.

4.3. Alternative statement of Chebyshev's inequality.

Theorem 11. Let X be a random variable and let g(x) be a non-negative function. Then for r > 0,

    P[g(X) ≥ r] ≤ E g(X) / r    (70)

Proof.

    E g(X) = ∫_{−∞}^{∞} g(x) f_X(x) dx
           ≥ ∫_{[x : g(x) ≥ r]} g(x) f_X(x) dx        (g is nonnegative)
           ≥ r ∫_{[x : g(x) ≥ r]} f_X(x) dx           (g(x) ≥ r on the region of integration)
           = r P[g(X) ≥ r]    (71)

Dividing both sides by r gives P[g(X) ≥ r] ≤ E g(X)/r.

4.4. Another version of Chebyshev's inequality as a special case of the general version.

Corollary 1. Let X be a random variable with mean µ and variance σ². Then for any k > 0 or any ε > 0,

    P[|X − µ| ≥ kσ] ≤ 1/k²    (72a)

    P[|X − µ| ≥ ε] ≤ σ²/ε²    (72b)

Proof. Let g(x) = (x − µ)²/σ², where µ = E(X) and σ² = Var(X). Then let r = k². Then

    P[(X − µ)²/σ² ≥ k²] ≤ E[(X − µ)²/σ²] / k² = (1/k²) · E(X − µ)²/σ² = 1/k²    (73)

because E(X − µ)² = σ². We can then rewrite equation 73 as follows

    P[(X − µ)²/σ² ≥ k²] = P[(X − µ)² ≥ k²σ²] = P[|X − µ| ≥ kσ] ≤ 1/k²    (74)
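As a concrete illustration of how conservative the bound is, consider the exponential distribution from Example 2 (the choice of this distribution for the comparison is an assumption for illustration, not from the text). Since µ = λ and σ = λ, for k ≥ 1 the event |X − µ| ≥ kσ reduces to X ≥ (1 + k)λ, whose exact probability e^{−(1+k)} does not depend on λ and can be compared directly with the Chebyshev bound 1/k².

```python
import math

def exact_tail(k):
    # P(|X - mu| >= k*sigma) for X exponential with mean lam (any lam > 0):
    # mu = sigma = lam, so for k >= 1 the left tail P(X <= (1 - k)*lam) is
    # empty and only P(X >= (1 + k)*lam) = exp(-(1 + k)) remains; lam cancels.
    return math.exp(-(1 + k))

def chebyshev_bound(k):
    # the upper bound from equation 65
    return 1 / k**2

results = {k: (exact_tail(k), chebyshev_bound(k)) for k in (1.5, 2.0, 3.0, 4.0)}
for k, (exact, bound) in results.items():
    print(f"k = {k}: exact tail = {exact:.4f} <= Chebyshev bound = {bound:.4f}")
```

For every k the exact tail probability falls far below 1/k², consistent with the remark after Theorem 10 that the actual probability usually beats the bound by a considerable amount.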



More information

CA200 Quantitative Analysis for Business Decisions. File name: CA200_Section_04A_StatisticsIntroduction

CA200 Quantitative Analysis for Business Decisions. File name: CA200_Section_04A_StatisticsIntroduction CA200 Quantitative Analysis for Business Decisions File name: CA200_Section_04A_StatisticsIntroduction Table of Contents 4. Introduction to Statistics... 1 4.1 Overview... 3 4.2 Discrete or continuous

More information

Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0.

Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0. Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients We will now turn our attention to nonhomogeneous second order linear equations, equations with the standard

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 14 10/27/2008 MOMENT GENERATING FUNCTIONS

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 14 10/27/2008 MOMENT GENERATING FUNCTIONS MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 14 10/27/2008 MOMENT GENERATING FUNCTIONS Contents 1. Moment generating functions 2. Sum of a ranom number of ranom variables 3. Transforms

More information

5.3 Improper Integrals Involving Rational and Exponential Functions

5.3 Improper Integrals Involving Rational and Exponential Functions Section 5.3 Improper Integrals Involving Rational and Exponential Functions 99.. 3. 4. dθ +a cos θ =, < a

More information

Lectures 5-6: Taylor Series

Lectures 5-6: Taylor Series Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,

More information

Real Roots of Univariate Polynomials with Real Coefficients

Real Roots of Univariate Polynomials with Real Coefficients Real Roots of Univariate Polynomials with Real Coefficients mostly written by Christina Hewitt March 22, 2012 1 Introduction Polynomial equations are used throughout mathematics. When solving polynomials

More information

The Method of Least Squares

The Method of Least Squares The Method of Least Squares Steven J. Miller Mathematics Department Brown University Providence, RI 0292 Abstract The Method of Least Squares is a procedure to determine the best fit line to data; the

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

1 Prior Probability and Posterior Probability

1 Prior Probability and Posterior Probability Math 541: Statistical Theory II Bayesian Approach to Parameter Estimation Lecturer: Songfeng Zheng 1 Prior Probability and Posterior Probability Consider now a problem of statistical inference in which

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2014 Timo Koski () Mathematisk statistik 24.09.2014 1 / 75 Learning outcomes Random vectors, mean vector, covariance

More information

A.2. Exponents and Radicals. Integer Exponents. What you should learn. Exponential Notation. Why you should learn it. Properties of Exponents

A.2. Exponents and Radicals. Integer Exponents. What you should learn. Exponential Notation. Why you should learn it. Properties of Exponents Appendix A. Exponents and Radicals A11 A. Exponents and Radicals What you should learn Use properties of exponents. Use scientific notation to represent real numbers. Use properties of radicals. Simplify

More information

The Heat Equation. Lectures INF2320 p. 1/88

The Heat Equation. Lectures INF2320 p. 1/88 The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)

More information

Answer Key for California State Standards: Algebra I

Answer Key for California State Standards: Algebra I Algebra I: Symbolic reasoning and calculations with symbols are central in algebra. Through the study of algebra, a student develops an understanding of the symbolic language of mathematics and the sciences.

More information

APPLIED MATHEMATICS ADVANCED LEVEL

APPLIED MATHEMATICS ADVANCED LEVEL APPLIED MATHEMATICS ADVANCED LEVEL INTRODUCTION This syllabus serves to examine candidates knowledge and skills in introductory mathematical and statistical methods, and their applications. For applications

More information

Part II - Random Processes

Part II - Random Processes Part II - Random Processes Goals for this unit: Give overview of concepts from discrete probability Give analogous concepts from continuous probability See how the Monte Carlo method can be viewed as sampling

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information

MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators...

MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators... MATH4427 Notebook 2 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 2 MATH4427 Notebook 2 3 2.1 Definitions and Examples...................................

More information

e.g. arrival of a customer to a service station or breakdown of a component in some system.

e.g. arrival of a customer to a service station or breakdown of a component in some system. Poisson process Events occur at random instants of time at an average rate of λ events per second. e.g. arrival of a customer to a service station or breakdown of a component in some system. Let N(t) be

More information

MATH 132: CALCULUS II SYLLABUS

MATH 132: CALCULUS II SYLLABUS MATH 32: CALCULUS II SYLLABUS Prerequisites: Successful completion of Math 3 (or its equivalent elsewhere). Math 27 is normally not a sufficient prerequisite for Math 32. Required Text: Calculus: Early

More information

Lecture 3 : The Natural Exponential Function: f(x) = exp(x) = e x. y = exp(x) if and only if x = ln(y)

Lecture 3 : The Natural Exponential Function: f(x) = exp(x) = e x. y = exp(x) if and only if x = ln(y) Lecture 3 : The Natural Exponential Function: f(x) = exp(x) = Last day, we saw that the function f(x) = ln x is one-to-one, with domain (, ) and range (, ). We can conclude that f(x) has an inverse function

More information

Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any.

Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. Algebra 2 - Chapter Prerequisites Vocabulary Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. P1 p. 1 1. counting(natural) numbers - {1,2,3,4,...}

More information

1.1 Introduction, and Review of Probability Theory... 3. 1.1.1 Random Variable, Range, Types of Random Variables... 3. 1.1.2 CDF, PDF, Quantiles...

1.1 Introduction, and Review of Probability Theory... 3. 1.1.1 Random Variable, Range, Types of Random Variables... 3. 1.1.2 CDF, PDF, Quantiles... MATH4427 Notebook 1 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 1 MATH4427 Notebook 1 3 1.1 Introduction, and Review of Probability

More information

1.5 / 1 -- Communication Networks II (Görg) -- www.comnets.uni-bremen.de. 1.5 Transforms

1.5 / 1 -- Communication Networks II (Görg) -- www.comnets.uni-bremen.de. 1.5 Transforms .5 / -- Communication Networks II (Görg) -- www.comnets.uni-bremen.de.5 Transforms Using different summation and integral transformations pmf, pdf and cdf/ccdf can be transformed in such a way, that even

More information

Lesson 20. Probability and Cumulative Distribution Functions

Lesson 20. Probability and Cumulative Distribution Functions Lesson 20 Probability and Cumulative Distribution Functions Recall If p(x) is a density function for some characteristic of a population, then Recall If p(x) is a density function for some characteristic

More information

You flip a fair coin four times, what is the probability that you obtain three heads.

You flip a fair coin four times, what is the probability that you obtain three heads. Handout 4: Binomial Distribution Reading Assignment: Chapter 5 In the previous handout, we looked at continuous random variables and calculating probabilities and percentiles for those type of variables.

More information

Statistics 100A Homework 4 Solutions

Statistics 100A Homework 4 Solutions Problem 1 For a discrete random variable X, Statistics 100A Homework 4 Solutions Ryan Rosario Note that all of the problems below as you to prove the statement. We are proving the properties of epectation

More information

Question: What is the probability that a five-card poker hand contains a flush, that is, five cards of the same suit?

Question: What is the probability that a five-card poker hand contains a flush, that is, five cards of the same suit? ECS20 Discrete Mathematics Quarter: Spring 2007 Instructor: John Steinberger Assistant: Sophie Engle (prepared by Sophie Engle) Homework 8 Hints Due Wednesday June 6 th 2007 Section 6.1 #16 What is the

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information