Chapter 4 Lecture Notes



Random Variables
October 27, 2015

Section 4.1 Random Variables

A random variable is typically a real-valued function defined on the sample space of some experiment. For instance, we may be concerned with the sum obtained when rolling two dice rather than the actual set of outcomes {(1, 6), (2, 5), ...}. Or we may be interested in the total number of heads in n tosses of a coin, without caring about the particular head-tail sequence that occurred. This real number associated to the outcomes of an experiment is a random variable.

Example (1a) Suppose we toss 3 coins and let Y denote the number of heads that appear. Y is a random variable taking on one of the values 0, 1, 2, or 3 with respective probabilities

P{Y = 0} = 1/8, P{Y = 1} = 3/8, P{Y = 2} = 3/8, P{Y = 3} = 1/8.

We see that Σ_{i=0}^{3} P{Y = i} = 1, in accordance with the probability rules.

Exercise (4.1) Two balls are chosen randomly from an urn containing 8 white, 4 black, and 2 orange balls. Suppose that we win $2 for each black ball selected and lose $1 for each white ball selected. Let X denote our winnings. What are the possible values of X, and what are the probabilities associated with each value?

We first note that there are 6 distinct outcomes (unordered pairs of colors) in this scenario. For each one we gain or lose money; we denote this gain/loss by the random variable X, and let x denote any one value that X may take on. Let O, B, and W denote the event of drawing an orange, black, and white ball, respectively. Then we can come up with the following:

Outcome:   WW   WO   OO   BW   BO   BB
X:         -2   -1    0    1    2    4

To find the probability associated with each value of X, we compute the probability of each outcome. Consider selecting two white balls from the urn. The probability of this event is

P(WW) = (8 choose 2) / (14 choose 2) = 28/91 = 4/13.

Therefore, we can write P{X = -2} = p(-2) = 4/13. Similarly, for a white ball and an orange ball,

P(WO) = (8 choose 1)(2 choose 1) / (14 choose 2) = 16/91.

We can do this for each outcome to create the following probability mass function:

Outcome:    WW     WO     OO     BW     BO     BB
X:          -2     -1      0      1      2      4
P{X = x}: 28/91  16/91   1/91  32/91   8/91   6/91

It is easy to verify that the sum of these probabilities is 1.

Exercise (4.2) Two fair dice are rolled. Let X equal the product of the 2 dice. Compute P{X = i} for i = 1, 2, ..., 36.

Consider the product of the faces after two dice are rolled. Just as with the sum of two dice, there are 36 equally likely outcomes. We list the products below:

       1   2   3   4   5   6
  1    1   2   3   4   5   6
  2    2   4   6   8  10  12
  3    3   6   9  12  15  18
  4    4   8  12  16  20  24
  5    5  10  15  20  25  30
  6    6  12  18  24  30  36

Scanning the set of possible outcomes, we see that the random variable X takes on the values 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 30, and 36, with the following probabilities:

P{X = i} = 1/36, for i = 1, 9, 16, 25, 36
P{X = j} = 2/36, for j = 2, 3, 5, 8, 10, 15, 18, 20, 24, 30
P{X = 4} = 3/36
P{X = k} = 4/36, for k = 6, 12

Exercise (1d) Independent trials, each consisting of flipping a coin that comes up heads with probability p, are performed until either a head occurs or a total of n flips is made. If we let X denote the number of times the coin is flipped, then X is a random variable taking on what values? What are the respective probabilities?

If X is the number of times the coin is flipped, then X takes on the values 1, 2, ..., n.

Since each flip is independent of the last, we can come up with the following outcomes and probabilities:

Outcome                 Probability
H                       P{X = 1} = p
TH                      P{X = 2} = (1-p)p
TTH                     P{X = 3} = (1-p)^2 p
TTTH                    P{X = 4} = (1-p)^3 p
...                     ...
TT...TH  (n-2 tails)    P{X = n-1} = (1-p)^{n-2} p
TT...T   (n-1 tails)    P{X = n}   = (1-p)^{n-1}

We note here that for the number of flips to be X = n, we need only flip n-1 tails in a row. The last flip doesn't matter, because either way we will have flipped the coin n times.
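The probability mass function found in Exercise (4.1) can be checked by brute-force enumeration of all 91 equally likely pairs of balls. A minimal sketch:

```python
from itertools import combinations
from fractions import Fraction
from collections import Counter

# Label the 14 balls individually so every unordered pair is equally likely.
balls = ["W"] * 8 + ["B"] * 4 + ["O"] * 2
payoff = {"W": -1, "B": 2, "O": 0}  # winnings per ball drawn

counts = Counter()
pairs = list(combinations(balls, 2))  # all C(14, 2) = 91 pairs
for a, b in pairs:
    counts[payoff[a] + payoff[b]] += 1

pmf = {x: Fraction(c, len(pairs)) for x, c in sorted(counts.items())}
print(pmf)  # {-2: 28/91, -1: 16/91, 0: 1/91, 1: 32/91, 2: 8/91, 4: 6/91}
```

Because `combinations` treats each of the 14 balls as distinct, the counts it produces are exactly the binomial-coefficient counts used in the hand computation above.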

Section 4.2 Discrete Random Variables

A random variable that takes on at most a countable number of possible values is said to be discrete. A countable set is a set with the same number of elements as some subset of the natural numbers, N = {0, 1, 2, ...}. A countable set may be a finite set of numbers or a countably infinite set. A countably infinite set is an infinite set whose members can be labeled in such a way that the labeling is a one-to-one correspondence with the natural numbers. In other words, there is a prescription that allows us to identify every member of a countably infinite set by relating it to an element of the natural numbers. This one-to-one correspondence is called a bijection. Recall that a function is one-to-one, or injective, if every element of the range is the image of at most one element in the domain. A function is called onto, or surjective, if every element of the codomain is the image of some element in the domain. Using these definitions, we can make precise what it means to be countably infinite.

Consider the set of integers, Z. Is it countably infinite? Yes, because the following function maps every natural number to an integer:

f(x) = x/2        if x is even
f(x) = -(x+1)/2   if x is odd

It is easy to show that this function is indeed a bijection. [Draw Picture] Therefore, the integers are countable and have exactly the same number of elements as the natural numbers. In a similar way, we can show that other sets are countable, like the rationals. The real numbers, however, are uncountable.

For any discrete random variable X, we define the probability mass function p(x) of X by p(x) = P{X = x}. Here x can be thought of as a continuous variable, but we suppose that X takes on the countable values x_1, x_2, ..., where each value x_i is labeled by a natural number i = 1, 2, .... This is customary notation that suggests each x_i originates from a countable set.
Then p(x_i) is non-negative at each value x_i and zero for all other values:

p(x_i) ≥ 0 for i = 1, 2, ...
p(x) = 0 for all other values of x.

The variable X must take on one of the values x_i, which means

Σ_{i=1}^{∞} p(x_i) = 1.

The probability mass function p(x) is presented graphically by plotting p(x_i) on the y-axis against x_i on the x-axis.

Example Suppose X is the random variable representing the sum when two fair dice are rolled. Then we know X takes on integer values between 2 and 12 with probabilities ranging from 1/36 to 6/36. We represent the probability mass function with the following graph, where each bar is centered over a value of X and the bar's height is the probability:

[Figure: probability mass function p(x) for the sum of two dice; bars at x = 2, ..., 12, with heights rising from 1/36 at x = 2 to 6/36 at x = 7 and back down.]

Exercise A discrete random variable X has a probability mass function of the form p(i) = c λ^i / i! for i = 0, 1, 2, ..., where λ is some positive value. Find (a) P{X = 0} and (b) P{X > 2}.

a.) Using the formula for the probability mass function, we obtain

P{X = 0} = p(0) = c λ^0 / 0! = c.

To find the constant c, we note that p(i) is a probability mass function, which means

Σ_{i=0}^{∞} p(i) = 1.

Substituting in the expression for p yields

1 = Σ_{i=0}^{∞} c λ^i / i! = c Σ_{i=0}^{∞} λ^i / i! = c e^λ,

since the infinite series is a series representation for e^λ. Hence,

P{X = 0} = c = e^{-λ}.
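A quick numerical check that the constant c = e^{-λ} really normalizes this mass function (the choice λ = 2 is arbitrary, just for illustration):

```python
import math

lam = 2.0             # an arbitrary positive value of λ for the check
c = math.exp(-lam)    # the normalizing constant derived above

# Partial sum of the pmf; the tail beyond i = 60 is negligible for λ = 2.
total = sum(c * lam**i / math.factorial(i) for i in range(60))
print(total)  # very close to 1.0
print(c)      # P{X = 0} = e^{-λ}
```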

b.) It follows that P{X > 2} = 1 - P{X ≤ 2}, which gives

P{X > 2} = 1 - P{X = 0} - P{X = 1} - P{X = 2} = 1 - e^{-λ} - λ e^{-λ} - (λ^2 / 2) e^{-λ}.

For a random variable X, the function F defined by

F(x) = P{X ≤ x},  -∞ < x < ∞,

is called the cumulative distribution function, or more simply the distribution function, of X. This function specifies the probability that the random variable is less than or equal to x. Suppose that a ≤ b. Then the event {X ≤ a} is contained in the event {X ≤ b}, and so F(a) ≤ F(b), which means F(x) is a nondecreasing function of x.

We can define the cumulative distribution function in terms of p(x). In this case, we can write

F(b) = Σ_{all x ≤ b} p(x).

From this we see that if X is a discrete random variable with possible values x_1, x_2, ..., where x_1 < x_2 < ..., then the distribution function F of X is a step function. Indeed, consider the following example.

Example Suppose X is a random variable with mass function given by the four values

p(1) = 1/4, p(2) = 1/2, p(3) = 1/8, p(4) = 1/8.

Then the cumulative distribution function F(b) is given as

F(b) = 0     if b < 1
     = 1/4   if 1 ≤ b < 2
     = 3/4   if 2 ≤ b < 3
     = 7/8   if 3 ≤ b < 4
     = 1     if 4 ≤ b

[Figure: the probability mass function p(x), with bars at x = 1, 2, 3, 4, and the cumulative distribution function F(b), a step function rising from 0 to 1.]
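The step-function CDF in the example above is just a running sum of the mass function, which is easy to compute directly:

```python
from fractions import Fraction

# pmf from the example above
pmf = {1: Fraction(1, 4), 2: Fraction(1, 2), 3: Fraction(1, 8), 4: Fraction(1, 8)}

def cdf(b, pmf=pmf):
    """F(b) = sum of p(x) over all x <= b."""
    return sum(p for x, p in pmf.items() if x <= b)

print(cdf(0.5))   # 0
print(cdf(1))     # 1/4
print(cdf(2.7))   # 3/4
print(cdf(10))    # 1
```

Evaluating between the mass points reproduces the flat steps, and evaluating at a mass point picks up the jump there.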

We note that the height of the jump at each value x = 1, 2, 3, and 4 is exactly p(x).

Exercise (4.17) Suppose that a cumulative distribution function is given by

F(b) = 0                 if b < 0
     = b/4               if 0 ≤ b < 1
     = 1/2 + (b-1)/4     if 1 ≤ b < 2
     = 11/12             if 2 ≤ b < 3
     = 1                 if 3 ≤ b

a.) Draw a rough sketch of the cumulative distribution function.
b.) Find P{X = i} for i = 1, 2, 3.
c.) Find P{1/2 < X < 3/2}.

a.) [Figure: sketch of F(b), rising linearly from 0 to 1/4 on [0, 1), jumping to 1/2 at b = 1, rising linearly to 3/4 on [1, 2), jumping to 11/12 at b = 2, and jumping to 1 at b = 3.]

b.) The value of the probability mass function at each point equals the height of the jump there. Therefore, P{X = 1} = 1/2 - 1/4 = 1/4, P{X = 2} = 11/12 - 3/4 = 1/6, and P{X = 3} = 1 - 11/12 = 1/12.

c.) Since F is continuous at both 1/2 and 3/2 (X has no mass at either point),

P{1/2 < X < 3/2} = F(3/2) - F(1/2) = 5/8 - 1/8 = 1/2.

Section 4.3 Expected Value

If X is a discrete random variable having probability mass function p(x), then the expectation, or expected value, of X, denoted by E[X], is defined as

E[X] = Σ_{x: p(x)>0} x p(x).

The expected value is the weighted average of the possible values that X can take on, with each value weighted by its corresponding probability.

Example Suppose a simple mass function for the random variable X is p(0) = 1/2, p(1) = 1/2. Then

E[X] = (0)p(0) + (1)p(1) = (0)(1/2) + (1)(1/2) = 1/2.

In this case, the expected value is the ordinary average of the two possible values.

Example Suppose a simple mass function for the random variable X is p(0) = 1/3, p(1) = 2/3. Then

E[X] = (0)p(0) + (1)p(1) = (0)(1/3) + (1)(2/3) = 2/3.

In this case, the value X = 1 is given twice as much weight as the value X = 0.

Suppose we are playing a game where X is our winnings, which could take on the values x_1, x_2, ..., x_n with corresponding probabilities p(x_1), p(x_2), ..., p(x_n). If each p(x_i) is thought of as a relative frequency (in the sense that we play the game for a very long time), then our average winnings per game would be

Σ_{i=1}^{n} x_i p(x_i) = E[X].

Exercise (3a) Suppose X is the outcome of rolling a fair die. Find E[X].

The random variable X takes on the values i = 1, 2, 3, 4, 5, 6, each equally probable, so p(i) = 1/6. Therefore,

E[X] = Σ_{i=1}^{6} i p(i) = (1/6) Σ_{i=1}^{6} i = (1/6) (6)(7)/2 = 7/2.

Exercise (4.25) Two coins are to be flipped. The first coin will land on heads with probability 0.6, the second with probability 0.7. Assume that the results of the flips are independent, and let X equal the total number of heads that result. Find a.) P{X = 1} and b.) E[X].

The sample space for this experiment has four outcomes:

S = {(H, H), (H, T), (T, H), (T, T)},

where H denotes heads and T denotes tails. If X is the random variable that counts the total number of heads flipped, then X takes on the values i = 0, 1, 2. The probability mass function is given by the following three probabilities:

p(0) = P(TT) = (0.4)(0.3) = 0.12
p(1) = P(HT) + P(TH) = (0.6)(0.3) + (0.4)(0.7) = 0.46
p(2) = P(HH) = (0.6)(0.7) = 0.42.

The expected value is then given by

E[X] = Σ_{i=0}^{2} i p(i) = (0)(0.12) + (1)(0.46) + (2)(0.42) = 1.3.

Exercise (4.30) A person tosses a fair coin until a tail appears for the first time. If the tail appears on the n-th flip, the person wins 2^n dollars. Let X denote the player's winnings. Find E[X]. a.) Would you be willing to pay $1 million to play this game once? b.) Would you be willing to pay $1 million for each game if you could play for as long as you liked and only had to settle up when you stopped playing?
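The two expected values just computed are both instances of the same weighted sum, which a tiny helper function makes explicit:

```python
# Expected value of a discrete random variable: the probability-weighted sum.
def expectation(pmf):
    return sum(x * p for x, p in pmf.items())

# Exercise (4.25): number of heads from two independent, unfair coins.
pmf = {0: 0.4 * 0.3, 1: 0.6 * 0.3 + 0.4 * 0.7, 2: 0.6 * 0.7}
print(expectation(pmf))  # ≈ 1.3

# Exercise (3a): outcome of a fair die.
die = {i: 1 / 6 for i in range(1, 7)}
print(expectation(die))  # ≈ 3.5
```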

Let X be the player's winnings in dollars. Then X takes on the values x = 2, 4, 8, ..., 2^n, ... for n = 1, 2, .... Since the game consists of flipping a fair coin, each flip lands tails or heads with probability 1/2, and each flip is independent of the last. Hence, the probability mass function is

p(2^n) = (1/2)^n = 1/2^n.

Hence, the expected value is

E[X] = Σ_{x: p(x)>0} x p(x) = Σ_{n=1}^{∞} (2^n)(1/2^n) = Σ_{n=1}^{∞} 1 = ∞.

This means that if we continue to play this game for a long period of time (forever, actually), we can expect to win an infinite amount of money.

a.) Probably not, because to win our money back in one game we would need to flip 19 heads in a row before the first tail: 2^n ≥ 1,000,000 requires n ≥ 20, since 2^n = 1,000,000 when n ≈ 19.93. This is too unlikely to count on in only one try.

b.) Yes, because with unlimited plays we can definitely expect to win our money back at some point.

Exercise (4.22) Suppose that two teams play a series of games that ends when one of them has won i games. Suppose that each game played is, independently, won by team A with probability p. Find the expected number of games that are played when a.) i = 2 and b.) i = 3. Also, show in both cases that this number is maximized when p = 1/2.
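The divergence of E[X] in Exercise (4.30) is visible in the partial sums: every term of the series contributes exactly one dollar, so truncating the game at any number of rounds gives an expectation equal to that number of rounds.

```python
# St. Petersburg game (Exercise 4.30): each term (2^n)(1/2)^n of E[X]
# equals 1, so the partial sums grow without bound.
def truncated_expectation(max_rounds):
    return sum((2 ** n) * (0.5 ** n) for n in range(1, max_rounds + 1))

for rounds in (10, 20, 40):
    print(rounds, truncated_expectation(rounds))  # 10.0, 20.0, 40.0
```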

Section 4.4 Expectation of a Function of a Random Variable

In order to talk about variance, we need to look at the expected value of some function of X, say g(X). For example, given a random variable X, suppose we want the expected value of the square of its values. In this case, we can think of g(X) as a discrete random variable in its own right; it has a probability mass function, which can be determined from p(x). Consider the following example.

Example Let X denote a random variable that takes on the values -1, 0, and 1 with probabilities

p(-1) = 0.2, p(0) = 0.5, p(1) = 0.3.

We can compute the value E[X^2] by letting Y = X^2. Then the probability mass function of Y, denoted by p*(y), takes the following values:

p*(1) = p(-1) + p(1) = 0.2 + 0.3 = 0.5
p*(0) = p(0) = 0.5.

Hence,

E[X^2] = E[Y] = (0)p*(0) + (1)p*(1) = (0)(0.5) + (1)(0.5) = 0.5.

We note that E[X^2] ≠ (E[X])^2.

We get the general idea that we will always be able to calculate the expected value of g(X) if we know the original probability mass function p(x). This, indeed, is always the case. We prove it below.

Proposition If X is a discrete random variable that takes on the values x_i for i = 1, 2, ... with probabilities p(x_i), then

E[g(X)] = Σ_i g(x_i) p(x_i).

Proof. We prove this proposition for a finite random variable. Let the random variable X take on the values x_i for i = 1, 2, ..., n. Because the function g may not be one-to-one, suppose g(X) (another random variable) takes on the values g_1, g_2, ..., g_m (where m ≤ n). It follows that g(X) is a random variable such that for j = 1, 2, ..., m, its probability mass function is

P{g(X) = g_j} = p*(g_j) = Σ_{i: g(x_i)=g_j} p(x_i).

Using this and the definition of expected value gives the following:

E[g(X)] = Σ_{j=1}^{m} g_j p*(g_j)
        = Σ_{j=1}^{m} g_j Σ_{i: g(x_i)=g_j} p(x_i)
        = Σ_{j=1}^{m} Σ_{i: g(x_i)=g_j} g(x_i) p(x_i)
        = Σ_{i=1}^{n} g(x_i) p(x_i).

Proposition If a and b are constants, then E[aX + b] = aE[X] + b.

Proof. We have

E[aX + b] = Σ_{x: p(x)>0} (ax + b) p(x)
          = a Σ_{x: p(x)>0} x p(x) + b Σ_{x: p(x)>0} p(x)
          = aE[X] + b.

The expected value of a random variable X, E[X], is also referred to as the mean or first moment of X. The quantity E[X^n] is referred to as the n-th moment of X and is given by

E[X^n] = Σ_{x: p(x)>0} x^n p(x).
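The proposition can be checked on the example above: summing g(x) p(x) directly over the values of X gives the same answer as first building the induced pmf of Y = X^2.

```python
# Check E[g(X)] = sum_i g(x_i) p(x_i) for g(x) = x^2 on the example pmf.
pmf = {-1: 0.2, 0: 0.5, 1: 0.3}

# Direct weighted sum over the values of X.
e_g = sum((x ** 2) * p for x, p in pmf.items())

# Same computation via the induced pmf of Y = X^2.
pmf_y = {}
for x, p in pmf.items():
    pmf_y[x ** 2] = pmf_y.get(x ** 2, 0) + p
e_y = sum(y * p for y, p in pmf_y.items())

print(e_g, e_y)  # both 0.5

e_x = sum(x * p for x, p in pmf.items())
print(e_x ** 2)  # about 0.01, confirming E[X^2] != (E[X])^2
```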

Section 4.5 Variance

The expected value of a random variable X is an important measure of the center of X, but it tells us nothing about the variation, or spread, of its values. Consider the following random variables:

W = 0 with probability 1
Y = -1 with p(-1) = 1/2,  and 1 with p(1) = 1/2
Z = -100 with p(-100) = 1/2,  and 100 with p(100) = 1/2

All of these variables have the same expectation. To look at variation, we look at how far each value is from its mean, on average. One way to do this is to look at the quantity E[|X - μ|], where μ = E[X]. The term |X - μ| is difficult to deal with, so we look at the squares instead.

Definition Let X be a random variable with mean μ. The variance of X is denoted by Var[X] and is defined as

Var[X] = E[(X - μ)^2].

Exercise Show that E[(X - μ)^2] = E[X^2] - (E[X])^2. This is an alternate form for the variance, and typically the easier way to compute it. Note that it involves the second moment of X.

Exercise A useful identity is that for any constants a and b,

Var[aX + b] = a^2 Var[X].

Show that this is true.

Exercise (5a) Calculate the variance, Var[X], if X represents the outcome when a fair die is rolled.
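The three random variables W, Y, and Z above share the same mean but have very different spreads, which the variance (computed via the second-moment form E[X^2] - (E[X])^2) distinguishes:

```python
# Compare W, Y, Z: same mean, very different variance.
def mean(pmf):
    return sum(x * p for x, p in pmf.items())

def variance(pmf):
    m = mean(pmf)
    return sum((x ** 2) * p for x, p in pmf.items()) - m ** 2

W = {0: 1.0}
Y = {-1: 0.5, 1: 0.5}
Z = {-100: 0.5, 100: 0.5}

for name, pmf in [("W", W), ("Y", Y), ("Z", Z)]:
    print(name, mean(pmf), variance(pmf))  # means all 0; variances 0, 1, 10000
```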

Exercise (4.36) Suppose that two teams play a series of games that ends when one of them has won i games. Suppose that each game played is, independently, won by team A with probability p. Find the variance of the number of games that are played when i = 2. Show that this value is maximized when p = 1/2.

The mean is analogous to the center of gravity of a distribution of mass. The variance, in the language of mechanics, represents the moment of inertia. The square root of the variance is called the standard deviation of X, denoted by SD[X], which is given by

SD[X] = √(Var[X]).
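As a check on Exercise (5a), the second-moment form gives Var[X] = 91/6 - (7/2)^2 = 35/12 for a fair die; a short sketch using exact arithmetic:

```python
from fractions import Fraction

# Exercise (5a): variance of a fair die via Var[X] = E[X^2] - (E[X])^2.
pmf = {i: Fraction(1, 6) for i in range(1, 7)}

ex = sum(x * p for x, p in pmf.items())        # E[X]   = 7/2
ex2 = sum(x * x * p for x, p in pmf.items())   # E[X^2] = 91/6
var = ex2 - ex ** 2
print(var)                 # 35/12
print(float(var) ** 0.5)   # SD[X] ≈ 1.708
```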

Section 4.6 Bernoulli and Binomial Random Variables

Suppose an experiment takes place whose outcome can be only one of two things: a success or a failure. We can let X = 1 for a success and X = 0 for a failure. Then our probability mass function is

p(0) = P{X = 0} = 1 - p
p(1) = P{X = 1} = p,

where p, 0 ≤ p ≤ 1, is the probability of a success. A random variable that satisfies these conditions is called a Bernoulli random variable.

Suppose we perform n independent trials of the experiment, each of which results in a success with probability p. If X represents the number of successes in the n trials, then X is a binomial random variable. The two values n and p are called the parameters of the binomial model, and we write X ~ Bin(n, p).

Exercise What is the probability mass function for the binomial random variable X with parameters n and p? Once obtained, check that Σ_{i=0}^{n} p(i) = 1.

Exercise (6a) Five fair coins are flipped. If the outcomes are assumed independent, find the probability mass function of the number of heads obtained.

Exercise (6b) It is known that screws produced by a certain company will be defective with probability 0.01, independently of one another. The company sells the screws in packages of 10 and offers a money-back guarantee that at most 1 of the 10 screws is defective. What proportion of packages sold must the company replace?

Exercise (4.32) To determine whether they have a certain disease, 100 people are to have their blood tested. However, rather than testing each individual separately, it has been decided first to place the people into groups of 10. The blood samples of the 10 people in each group will be

pooled and analyzed together. If the test is negative, one test will suffice for the 10 people, whereas if the test is positive, each of the 10 will also be individually tested and, in all, 11 tests will be made on this group. Assume that the probability that a person has the disease is 0.1 for all people, independently of one another, and compute the expected number of tests necessary for each group.

Now that we have defined the binomial random variable, we seek a formula for its expected value and variance. The expected value can be computed as follows:

E[X] = Σ_{i=0}^{n} i (n choose i) p^i (1-p)^{n-i}
     = Σ_{i=1}^{n} i (n choose i) p^i (1-p)^{n-i},

since the i = 0 term vanishes. Using the identity i (n choose i) = n (n-1 choose i-1), this sum works out to E[X] = np.
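Both the formula E[X] = np and the pooled-testing expectation of Exercise (4.32) can be checked numerically from the binomial pmf (the group test is negative exactly when all 10 people are disease-free, an event of probability 0.9^10):

```python
from math import comb

# Binomial pmf with parameters n and p.
def binom_pmf(i, n, p):
    return comb(n, i) * p**i * (1 - p) ** (n - i)

# Verify E[X] = n*p for n = 10, p = 0.1.
n, p = 10, 0.1
ex = sum(i * binom_pmf(i, n, p) for i in range(n + 1))
print(ex)  # ≈ 1.0 = n*p

# Exercise (4.32): one test if all 10 are negative, eleven tests otherwise.
p_all_negative = binom_pmf(0, 10, 0.1)  # = 0.9**10
expected_tests = 1 * p_all_negative + 11 * (1 - p_all_negative)
print(expected_tests)  # ≈ 7.51 tests per group
```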

Section 4.7 Poisson Random Variable

Exercise (7a) Suppose that the number of typographical errors on a single page of this book has a Poisson distribution with parameter λ = 1/2. Calculate the probability that there is at least one error on this page.

Exercise (7b) Suppose that the probability that an item produced by a certain machine will be defective is 0.1. Find the probability that a sample of 10 items will contain at most 1 defective item.

Exercise (4.61) The probability that you will be dealt a full house in a hand of poker is approximately 0.0014. Find an approximation for the probability that in 1000 hands of poker, you will be dealt at least 2 full houses.
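A numerical sketch of Exercises (7a) and (4.61): the first uses the Poisson pmf directly with λ = 1/2, and the second uses the Poisson approximation to the binomial with λ = np = 1000 × 0.0014 = 1.4.

```python
from math import exp, factorial

# Poisson pmf with parameter lam.
def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

# Exercise (7a): at least one typo when errors are Poisson with λ = 1/2.
p_at_least_one = 1 - poisson_pmf(0, 0.5)
print(p_at_least_one)  # ≈ 0.3935

# Exercise (4.61): Poisson approximation with λ = 1000 * 0.0014 = 1.4.
lam = 1000 * 0.0014
p_at_least_two = 1 - poisson_pmf(0, lam) - poisson_pmf(1, lam)
print(p_at_least_two)  # ≈ 0.4082
```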

Section 4.8 Other Discrete Random Variables

Geometric Random Variable

Exercise (4.71) Consider a roulette wheel consisting of 38 numbers: 1 through 36, 0, and double 0. If Basi always bets that the outcome will be one of the numbers 1 through 12, what is the probability that his first win will occur on his fourth bet? How many bets does he expect to place before he wins one time?

Negative Binomial Random Variable

Exercise (4.72) Two teams play a series of games; the first team to win 4 games is declared the winner. Suppose that one team is stronger than the other and wins each game with probability 0.6, independently of the outcomes of the other games. Find the probability, for i = 4, 5, 6, 7, that the stronger team wins the series in i games. Then compare the probability that the stronger team wins the series with the probability that it would win a 2-out-of-3 series.

Hypergeometric Random Variable

Exercise (4.79) Suppose that a batch of 100 items contains 6 that are defective and 94 that are not. If X is the number of defective items in a randomly drawn sample of 10 items from the batch, find a.) P{X = 0} and b.) P{X > 2}.
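Exercise (4.71) is a geometric random variable with success probability p = 12/38: the first win lands on the fourth bet exactly when three losses precede it, and the expected number of bets until the first win is 1/p. A quick check:

```python
# Exercise (4.71): betting on 1-12 on a 38-slot roulette wheel.
p = 12 / 38   # probability of winning a single bet
q = 1 - p     # probability of losing a single bet

# Geometric: first win on the fourth bet means three losses, then a win.
p_first_win_fourth = q**3 * p
print(p_first_win_fourth)  # ≈ 0.1012

# Expected number of bets until the first win is 1/p = 38/12.
print(1 / p)  # ≈ 3.17
```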

Section 4.9 Expected Value of Sums of Random Variables

The expected value of a sum of random variables is equal to the sum of the expected values. We prove this assuming that the sample space S is either finite or countably infinite. Let X be a random variable and let X(s) be the value of X when s ∈ S is the outcome. If X and Y are random variables, then so is the sum Z = X + Y, with Z(s) = X(s) + Y(s).

Example (9a) Suppose an experiment consists of flipping a coin 5 times, with the outcome being the resulting sequence of heads and tails. Suppose X is the number of heads in the first 3 flips and Y is the number of heads in the last 2 flips. Let Z = X + Y. Then, for instance, for the outcome s = (H, T, H, T, H),

X(s) = 2, Y(s) = 1, Z(s) = 3.

Let p(s) be the probability that s is the outcome of the experiment. Any event A in the sample space can be written as the union of the mutually exclusive outcomes s ∈ A. It follows that

P(A) = Σ_{s∈A} p(s).

Taking A = S gives

1 = Σ_{s∈S} p(s).

Let X be a random variable such that X(s) is the value of X when s occurs.

Proposition E[X] = Σ_{s∈S} X(s) p(s).

Proof. Suppose X takes on the values x_i, for i = 1, 2, .... For each i, let S_i be the event that X is

equal to x i. In other words, S i = {s X(s) = x i }. then E[X] = = = = = x i P {X = x i } x i P (S i ) x i p(s) s Si x i p(s) s S i X(s)p(s) s S i = s S X(s)p(s), where S i for i = 1, 2,... are mutually exclusive events that make up S. Example Suppose two independent flips of a coin come up heads with probability p. Let X be the number of heads, then the probability mass function can be written as This gives P {X = 0} = p(t T ) = (1 p) 2 P {X = 1} = p(ht ) + p(t H) = 2p(1 p) P {X = 2} = p(hh) = p 2 E[X] = 0(1 p) 2 + 2p(1 p) + 2p 2 = 2p. We can also compute the expected values by considering every mutual event that makes up the sample space: E[X] = X(T T )p(t T ) + X(HT )p(ht ) + X(T H)p(T H) + X(HH)p(HH) = (0)(1 p) 2 + (1)(p(1 p)) + (1)((1 p)p) + (2)(p 2 ) = 2p. We now prove the important result from this section. Proposition For random variables X 1, X 2,..., X n, [ n ] n E X i = E[X]. 21

Proof. Let Z = Σ_{i=1}^{n} X_i. Then it follows from the previous proposition that

E[Z] = Σ_{s∈S} Z(s) p(s)
     = Σ_{s∈S} (Σ_{i=1}^{n} X_i(s)) p(s)
     = Σ_{i=1}^{n} Σ_{s∈S} X_i(s) p(s)
     = Σ_{i=1}^{n} E[X_i].

Example Suppose we wanted to find the expected value of the sum obtained when n fair dice are rolled. Let X be the sum. Then

E[X] = Σ_{i=1}^{n} E[X_i],

where X_i is the upturned value on die i. Because X_i is equally likely to be any of the values from 1 to 6, we know

E[X_i] = Σ_{k=1}^{6} k (1/6) = 7/2.

Hence,

E[X] = n (7/2) = 7n/2.
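Linearity of expectation for the dice example can also be checked by simulation: the sample mean of the sum of n dice should settle near 7n/2. A minimal Monte Carlo sketch (the seed and trial count are arbitrary choices for reproducibility):

```python
import random

# Check E[sum of n fair dice] = 7n/2 by simulation.
random.seed(0)  # fixed seed so the run is reproducible

def simulate_dice_sum_mean(n_dice, trials=200_000):
    total = 0
    for _ in range(trials):
        total += sum(random.randint(1, 6) for _ in range(n_dice))
    return total / trials

n = 5
print(simulate_dice_sum_mean(n))  # close to 7*5/2 = 17.5
```

Note that the simulation never needed the full distribution of the sum; linearity of expectation is exactly what lets us predict 7n/2 without computing it.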