Continuous Random Variables


Definition
X is continuous if there exists a function f(x) ≥ 0 such that P(X ∈ A) = ∫_A f(x) dx. This function is called the probability density function, or pdf.

Properties:
1. ∫_{−∞}^{∞} f(x) dx = 1 (= P(−∞ < X < ∞)).
2. P(a ≤ X ≤ b) = P(a < X < b) = ∫_a^b f(x) dx.
3. P(X = a) = ∫_a^a f(x) dx = 0.

Read Example 1a, p. 187.

Example
X = lifetime (in hours) of an indicator light has pdf f(x) = (100/x^2) I(x > 100).
1. Find P(X < 150).
2. What is the probability that exactly two of 5 such indicator lights will have to be replaced within the first 150 hours?

Definition
F(x) = P(X ≤ x) = ∫_{−∞}^x f(t) dt is called the cumulative distribution function. By the Fundamental Theorem of Calculus, (d/dx) F(x) = f(x).
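Both parts of the example can be checked numerically. The sketch below assumes the pdf f(x) = (100/x^2) for x > 100 given in the example, so F(x) = 1 − 100/x and P(X < 150) = 1/3; the second part treats the 5 lights as independent Bernoulli trials.

```python
from math import comb

# F(x) = 1 - 100/x for x > 100, so P(X < 150) = 1 - 100/150 = 1/3.
p = 1 - 100 / 150

# Each of the 5 lights independently fails before 150 hours with probability p,
# so the number needing replacement is Binomial(5, p); we want exactly 2.
prob_two_of_five = comb(5, 2) * p**2 * (1 - p)**3

print(round(p, 4))                 # 0.3333
print(round(prob_two_of_five, 4))  # 0.3292  (= 80/243)
```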

Example
X ∼ f_X(x). Find the pdf of Y = 2X.

µ_X = E(X) = ∫_{−∞}^{∞} x f_X(x) dx (Definition)
E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx (to be shown; see next page)
E(a g_1(X) + b g_2(X)) = a E(g_1(X)) + b E(g_2(X))

Example
X ∼ f_X(x) = I(0 ≤ x ≤ 1). Find E(e^X).
Solution:
a) Apply the above formula with g(x) = e^x.
b) Find the pdf f_Y(y) of Y = e^X and use the definition of expected value, i.e. E(Y) = ∫ y f_Y(y) dy.
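A quick Monte Carlo sanity check for the E(e^X) example. This is a sketch: by approach a), the exact answer is ∫_0^1 e^x dx = e − 1 ≈ 1.7183, and averaging e^U over uniform draws should land close to it.

```python
import random
from math import e

random.seed(0)
n = 200_000
# Estimate E(e^X) for X ~ U(0,1) by the sample mean of e^U.
est = sum(e ** random.random() for _ in range(n)) / n

print(round(e - 1, 4))  # 1.7183 (exact value)
print(round(est, 2))
```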

Lemma
If Y is a nonnegative random variable, then E(Y) = ∫_0^{∞} P(Y > y) dy.

Proposition
E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx.

Proof: Assume first that g(x) ≥ 0 for all x. Then, by the Lemma,
E(g(X)) = ∫_0^{∞} P(g(X) > y) dy = ∫_0^{∞} ∫_{x: g(x) > y} f(x) dx dy = ∫_{x: g(x) > 0} ∫_0^{g(x)} dy f(x) dx = ∫ g(x) f(x) dx,
where the middle step interchanges the order of integration. For general g, write g(x) = g(x) I(g(x) ≥ 0) + g(x) I(g(x) < 0) = g^+(x) − g^−(x), and use E(g(X)) = E(g^+(X)) − E(g^−(X)).

σ²_X = Var(X) = E(X − µ_X)² = E(X²) − µ²_X. Var(aX + b) = a² Var(X).

Example (The Uniform in (0,1) Random Variable)
X ∼ U(0, 1) if f_X(x) = I(0 < x < 1). Show that F_X(x) = 0·I(x ≤ 0) + x·I(0 < x ≤ 1) + I(x > 1), µ_X = 0.5, σ²_X = 1/12.

Example (The Uniform in (a,b) Random Variable)
X ∼ U(a, b) if f_X(x) = (1/(b − a)) I(a < x < b). Derive expressions for F_X, µ_X and σ²_X.
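The U(a, b) moments can be verified numerically with a midpoint Riemann sum against the density. This is a sketch with illustrative endpoints a = 2, b = 5 (not from the notes); the closed forms being checked are µ = (a + b)/2 and σ² = (b − a)²/12.

```python
# Midpoint-rule check of E(X) and Var(X) for X ~ U(a, b), f(x) = 1/(b - a).
a, b = 2.0, 5.0        # illustrative endpoints
n = 100_000
dx = (b - a) / n
f = 1.0 / (b - a)

mean = sum((a + (i + 0.5) * dx) * f * dx for i in range(n))
second = sum((a + (i + 0.5) * dx) ** 2 * f * dx for i in range(n))
var = second - mean ** 2

print(round(mean, 4), round(var, 4))  # 3.5 0.75, matching (a+b)/2 and (b-a)^2/12
```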

The Normal Distribution
X ∼ N(µ, σ²) if f_X(x) = (1/(√(2π) σ)) exp{−(x − µ)²/(2σ²)}.
If µ = 0, σ² = 1, X is said to have the standard normal distribution. A standard normal random variable is typically denoted by Z.
Showing that f_X integrates to one is not simple. In fact, you either know how to do this integral or you spend a lot of time going nowhere. See page 199 of the book.
If X ∼ N(µ, σ²), then Y = a + bX ∼ N(a + bµ, b²σ²).

The Normal Distribution
Corollary
1. If Z ∼ N(0, 1), then X = µ + σZ ∼ N(µ, σ²).
2. If X ∼ N(µ, σ²), then Z = (X − µ)/σ ∼ N(0, 1).
3. If X ∼ N(µ, σ²), then x_α = µ + σ z_α, where x_α and z_α denote the 100(1 − α)-th percentiles of X and Z.

The Normal Distribution
Finding Probabilities via the Standard Normal Table
In Table A.3, z-values are identified from the left column, up to the first decimal, and the top row, for the second decimal. Thus, z = 1 is identified by 1.0 in the left column and 0.00 in the top row.

Example (The 68-95-99.7% Property)
Let Z ∼ N(0, 1). Then
1. P(−1 < Z < 1) = Φ(1) − Φ(−1) = .8413 − .1587 = .6826.
2. P(−2 < Z < 2) = Φ(2) − Φ(−2) = .9772 − .0228 = .9544.
3. P(−3 < Z < 3) = Φ(3) − Φ(−3) = .9987 − .0013 = .9974.
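Φ can also be evaluated without a table via the error function, Φ(z) = (1 + erf(z/√2))/2. A small sketch confirming the 68-95-99.7 property (the results differ from the table-based values only in the fourth decimal, due to table rounding):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF, Phi(z) = P(Z <= z), via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for k in (1, 2, 3):
    print(k, round(phi(k) - phi(-k), 4))
# 1 0.6827
# 2 0.9545
# 3 0.9973
```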

The Normal Distribution
Finding Probabilities via the Standard Normal Table
Example
Let X ∼ N(1.25, 0.46²). Find a) P(1 ≤ X ≤ 1.75), and b) P(X > 2).
Solution. Use Z = (X − 1.25)/0.46 ∼ N(0, 1) to express these probabilities in terms of Z. Thus,
a) P(1 ≤ X ≤ 1.75) = P((1 − 1.25)/.46 ≤ (X − 1.25)/.46 ≤ (1.75 − 1.25)/.46) = P(−.54 < Z < 1.09) = Φ(1.09) − Φ(−.54) = .8621 − .2946 = .5675.
b) P(X > 2) = P(Z > (2 − 1.25)/.46) = 1 − Φ(1.63) = .0516.

The Normal Distribution
Finding Percentiles via the Standard Normal Table
To find z_α, one first locates 1 − α in the body of Table A.3 and then reads z_α from the margins. If the exact value of 1 − α does not exist in the main body of the table, then an approximation is used as described in the following.

Example
Find z_{0.05}, the 95th percentile of Z.
Solution. 1 − α = 0.95 does not appear exactly in the body of the table. The entry that is closest to, but larger than, 0.95 (i.e. 0.9505) corresponds to z = 1.65. The entry that is closest to, but smaller than, 0.95 (i.e. 0.9495) corresponds to z = 1.64. We approximate z_{0.05} by averaging these two z-values: z_{.05} = (1.64 + 1.65)/2 = 1.645.

The Normal Distribution
Finding Percentiles via the Standard Normal Table
Example
Let X denote the weight of a randomly chosen frozen yogurt cup. Suppose X ∼ N(8, .46²). Find the value c that separates the upper 5% of weight values from the lower 95%.
Solution. This is another way of asking for the 95th percentile, x_{.05}, of X. Using the formula x_α = µ + σ z_α, we have x_{.05} = 8 + .46 z_{.05} = 8 + (.46)(1.645) = 8.76.
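The same percentile computation, as a one-line sketch using the table value z_{.05} = 1.645 obtained in the previous example:

```python
z_05 = 1.645            # 95th percentile of Z, from Table A.3
mu, sigma = 8.0, 0.46
c = mu + sigma * z_05   # x_alpha = mu + sigma * z_alpha
print(round(c, 2))      # 8.76
```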

Normal Approximation to the Binomial
The Basic Result
If X ∼ Bin(n, p), then the DeMoivre-Laplace limit theorem states that X ≈ N(np, np(1 − p)) for n large enough. The approximation is quite good for values of n such that np(1 − p) ≥ 10, and is often used if np ≥ 5 and n(1 − p) ≥ 5.
If p̂ = X/n, the DeMoivre-Laplace limit theorem also yields p̂ ≈ N(p, p(1 − p)/n), for n large enough as above.

Normal Approximation to the Binomial
The Continuity Correction
Due to the discreteness of the binomial distribution, the normal approximation is improved by the so-called continuity correction:
P(X ≤ x) = P(X ≤ x + 0.5) ≈ P(Y ≤ x + 0.5) = Φ((x + 0.5 − np)/√(np(1 − p))),
where Y ∼ N(np, np(1 − p)), i.e. a normal r.v. with mean and variance equal to the mean and variance of the binomial r.v. X.
The normal approximation works in situations where the Poisson approximation does not; for example, p does not have to be small (e.g. of the order of 0.01).

Normal Approximation to the Binomial
Example
Suppose that 60% of all drivers in a certain state always wear a seat belt. A random sample of 500 drivers is selected. Find the (approximate) probability that the number of those who always wear a seat belt is between 270 and 320 (inclusive).
Solution: Let X denote the number of drivers who always wear a seat belt. Then X ∼ Bin(500, 0.6). Since np ≥ 5 and n(1 − p) ≥ 5,
P(270 ≤ X ≤ 320) = P(X ≤ 320) − P(X ≤ 269) ≈ Φ((320 − 300)/10.95) − Φ((269 − 300)/10.95) = Φ(1.826) − Φ(−2.83) = 0.9661 − 0.0023 = 0.9638.

Normal Approximation to the Binomial
Example (Continued)
Using the continuity correction, we have
P(270 ≤ X ≤ 320) = P(X ≤ 320) − P(X ≤ 269) ≈ Φ((320 + 0.5 − 300)/10.95) − Φ((269 + 0.5 − 300)/10.95) = Φ(1.87) − Φ(−2.78) = 0.9693 − 0.0027 = 0.9666.
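A sketch comparing the exact binomial probability with the continuity-corrected normal approximation for this example; Φ is implemented via the error function rather than a table, so the numbers may differ slightly from the table-based .9666.

```python
from math import comb, erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, p = 500, 0.6
mu, sd = n * p, sqrt(n * p * (1 - p))   # 300 and ~10.95

# Exact: sum the Bin(500, 0.6) pmf over k = 270, ..., 320.
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(270, 321))

# Continuity-corrected normal approximation.
approx = phi((320.5 - mu) / sd) - phi((269.5 - mu) / sd)

print(round(exact, 4), round(approx, 4))
```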

The Exponential Random Variable
X ∼ Exp(λ) if f_X(x) = λ e^{−λx} I(x > 0), for some λ > 0.
F(x) = P(X ≤ x) = 1 − e^{−λx}.
E(X^n) = (n/λ) E(X^{n−1}), so that E(X) = 1/λ, Var(X) = 1/λ².

Example
Suppose that the number of miles a car can run before its battery wears out is exponentially distributed with an average value of 10,000 miles. A person decides to take a 5,000 mile trip, having just changed the battery. What is the probability that the trip will be completed without having to replace the battery?
Solution: Measuring distance in thousands of miles, X ∼ Exp(1/10), so P(X > 5) = e^{−5/10} ≈ 0.607.
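A one-line check of the trip probability; the sketch measures distance in thousands of miles, so the rate is λ = 1/10 and the answer is e^{−1/2}:

```python
from math import exp

lam = 1 / 10   # rate, with X = battery life in thousands of miles (mean 10)
trip = 5       # trip length in thousands of miles
p_complete = exp(-lam * trip)   # P(X > 5) = e^{-1/2}
print(round(p_complete, 4))     # 0.6065
```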

The Memoryless Property of the Exponential RV
If X ∼ Exp(λ), then for t > s we have P(X > t | X > s) = P(X > t − s).

Example
Suppose that the number of miles a car can run before its battery wears out is exponentially distributed with an average value of 10,000 miles. A person decides to take a 5,000 mile trip with a battery that has already been in use for some time. What is the probability that the trip will be completed without having to replace the battery?
Solution: By the memoryless property, the remaining battery life has the same distribution as that of a new battery, so P(X > 5) = e^{−5/10} ≈ 0.607.

The Poisson-Exponential Relationship
Proposition
Let X(t) be a Poisson process with rate λ, and let T be the time until the first occurrence. Then T ∼ Exp(λ).

Example
a) If X ∼ U(0, 1), find the distribution of Y = X².
b) If X ∼ U(−1, 1), find the distribution of Y = X².
c) If X ∼ U(0, 1), find the distribution of Y = −log X.

Theorem
Let X be continuous with pdf f_X, and let g(x) be a strictly monotonic and differentiable function. Then Y = g(X) has pdf
f_Y(y) = f_X(g^{−1}(y)) |(d/dy) g^{−1}(y)|
for y in the range of the function g, and zero otherwise.
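Part a) can be checked by simulation. This is a sketch: for Y = X² with X ∼ U(0, 1), the theorem gives f_Y(y) = 1/(2√y) on (0, 1), hence F_Y(y) = √y, and the empirical CDF of simulated values should match it.

```python
import random
from math import sqrt

random.seed(1)
n = 100_000
# Simulate Y = X^2 with X ~ U(0,1) and compare the empirical CDF at y = 0.25
# with the derived F_Y(0.25) = sqrt(0.25) = 0.5.
ys = [random.random() ** 2 for _ in range(n)]
empirical = sum(y <= 0.25 for y in ys) / n

print(round(sqrt(0.25), 3), round(empirical, 2))
```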

Example
1. (The Probability Transformation) Let X be continuous with cumulative distribution function F_X. Then, if g = F_X, Y = g(X) ∼ U(0, 1).
2. (The Quantile Transformation) Let X ∼ U(0, 1) and let F be the cumulative distribution function of a continuous random variable. Then, if g = F^{−1}, Y = g(X) has F_Y = F.
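The quantile transformation is the basis of inverse-transform sampling. A sketch for the exponential case: if F(x) = 1 − e^{−λx}, then F^{−1}(u) = −ln(1 − u)/λ, so applying F^{−1} to uniform draws produces Exp(λ) variates (λ = 2 here is an illustrative choice).

```python
import random
from math import log

random.seed(2)
lam = 2.0       # illustrative rate
n = 200_000
# If U ~ U(0,1) and F(x) = 1 - e^{-lam*x}, then X = F^{-1}(U) = -ln(1 - U)/lam
# has distribution F, i.e. X ~ Exp(lam).
xs = [-log(1.0 - random.random()) / lam for _ in range(n)]
mean = sum(xs) / n

print(round(mean, 2))   # should be close to E(X) = 1/lam = 0.5
```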