Lecture 7: Continuous Random Variables
- Alexander Reed
- 9 years ago
1 Our First Continuous Random Variable

21 September

The back of the lecture hall is roughly 10 meters across. Suppose it were exactly 10 meters, and consider throwing paper airplanes from the front of the room to the back, recording how far they land from the left-hand side of the room. This gives us a continuous random variable X, a real number in the interval [0, 10]. Suppose further that the person throwing the paper airplanes has really terrible aim, and every part of the back wall is as likely as every other, so that X is uniformly distributed.

All of the random variables we've looked at previously have had discrete sample spaces, whether finite (like the Bernoulli and binomial) or infinite (like the Poisson). X has a continuous sample space, but most things work very similarly.

2 The Cumulative Distribution Function

For instance, we can ask about the probability of events. What is the probability that 0 ≤ X ≤ 10, for instance? Clearly, 1. What is the probability that 0 ≤ X ≤ 5? Well, since every point is equally likely, and the interval [0, 5] is half of the total interval, Pr(0 ≤ X ≤ 5) = 5/10 = 1/2. Going through the same reasoning, the probability that X belongs to any interval is just proportional to the length of the interval:

    Pr(a ≤ X ≤ b) = (b − a)/10,  if a ≥ 0 and b ≤ 10.

Let's plot Pr(0 ≤ X ≤ x) versus x. Now, I only plotted this over the interval [0, 10], but we can extend this plot pretty easily, by looking at Pr(X ≤ x) = Pr(−∞ < X ≤ x). If x < 0, this probability is exactly 0, because we know X is non-negative. If x > 10, this probability is exactly 1, because we know X is less than ten. And if x is between 0 and 10, then Pr(X ≤ x) = Pr(0 ≤ X ≤ x). So then we get the plot in Figure 2.

What we have plotted here is the cumulative distribution function (CDF) of X. Formally, the CDF of any continuous random variable X is F(x) = Pr(X ≤ x), where x is any real number. When X is uniform over [0, 10], we have the following CDF:

    F(x) = 0       for x < 0
    F(x) = x/10    for 0 ≤ x ≤ 10
    F(x) = 1       for x > 10
[Figure 1: Probability that X, uniformly distributed over [0, 10], lies in the interval [0, x].]
[Figure 2: Probability that X, uniformly distributed over [0, 10], is less than or equal to x.]
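To make the piecewise CDF concrete, here is a minimal sketch in Python (the function name uniform_cdf is ours, nothing standard):

```python
def uniform_cdf(x, a=0.0, b=10.0):
    """CDF of a random variable uniformly distributed on [a, b]."""
    if x < a:
        return 0.0      # X is never below a
    if x > b:
        return 1.0      # X is never above b
    return (x - a) / (b - a)  # inside [a, b], probability grows linearly with length

# Pr(X <= 5) = 1/2 and Pr(X <= 10) = 1 for X uniform on [0, 10]
print(uniform_cdf(5.0), uniform_cdf(10.0))
```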
For any CDF, we have the following properties:

1. The CDF is non-negative: F(x) ≥ 0. Probabilities are never negative.
2. The CDF goes to zero on the far left: lim_{x→−∞} F(x) = 0. X is never less than −∞.
3. The CDF goes to one on the far right: lim_{x→+∞} F(x) = 1. X is never more than +∞.
4. The CDF is non-decreasing: F(b) ≥ F(a) if b ≥ a. If b ≥ a, then the event (X ≤ a) is a subset of the event (X ≤ b), and subsets never have higher probabilities. (This was a problem in HW2.)

Any function which satisfies these four properties can be used as the CDF of some random variable or other.

The reason we care about the CDF is that if we know it, we can use it to get the probability of any event we care about. For instance, suppose I tell you the CDF of X, and ask for Pr(a ≤ X ≤ b). We can find that from the CDF. Start with Pr(X ≤ b) = F(b). By total probability,

    Pr(X ≤ b) = Pr((X ≤ b) ∩ (X ≤ a)) + Pr((X ≤ b) ∩ (X ≤ a)ᶜ)

Now notice that, since a < b,

    (X ≤ b) ∩ (X ≤ a) = (X ≤ a)
    (X ≤ b) ∩ (X ≤ a)ᶜ = (a < X ≤ b)

So

    Pr(X ≤ b) = Pr(X ≤ a) + Pr(a < X ≤ b)
    F(b) = F(a) + Pr(a < X ≤ b)
    Pr(a < X ≤ b) = F(b) − F(a)

For instance, with our uniform-over-[0, 10] variable X, Pr(5 ≤ X ≤ 6) = F(6) − F(5) = 0.6 − 0.5 = 0.1, as we'd already reasoned, and Pr(5 ≤ X ≤ 12) = F(12) − F(5) = 1 − 0.5 = 0.5.

Say we want to know Pr((0 ≤ X ≤ 2) ∪ (8 ≤ X ≤ 10)). Notice that the two intervals are disjoint events, because they don't overlap. The probabilities of disjoint events add, and we get the sum of the probabilities of the two intervals, which we know how to find. If the intervals overlap, say in Pr((2 ≤ X ≤ 6) ∪ (4 ≤ X ≤ 8)), then we use the rule for unions:

    Pr((2 ≤ X ≤ 6) ∪ (4 ≤ X ≤ 8)) = Pr(2 ≤ X ≤ 6) + Pr(4 ≤ X ≤ 8) − Pr((2 ≤ X ≤ 6) ∩ (4 ≤ X ≤ 8))

Notice that the intersection is just another interval: (2 ≤ X ≤ 6) ∩ (4 ≤ X ≤ 8) = (4 ≤ X ≤ 6). This is always true: two intervals are either disjoint, or their intersection is another interval. Using this, it turns out that pretty much any event for real-valued variables which you can describe can be written as a union of disjoint intervals.¹
Since we can find the probability of any interval from the CDF, we can use the CDF to find the probability of arbitrary events.

¹ The technicalities behind the "pretty much" form the subject of measure theory in mathematics. We won't go there.
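Both rules (subtracting CDF values for an interval, and inclusion-exclusion for overlapping intervals) are easy to check numerically for the uniform-[0, 10] example; a minimal sketch, with helper names of our own choosing:

```python
def F(x):
    # CDF of the uniform distribution on [0, 10]
    return min(max(x / 10.0, 0.0), 1.0)

def interval_prob(a, b):
    # Pr(a <= X <= b) = F(b) - F(a)
    return F(b) - F(a)

# Pr(5 <= X <= 6) = 0.6 - 0.5 = 0.1, as computed in the text
p_simple = interval_prob(5, 6)

# disjoint intervals: probabilities simply add
p_disjoint = interval_prob(0, 2) + interval_prob(8, 10)

# overlapping intervals: add, then subtract the intersection (4 <= X <= 6)
p_overlap = interval_prob(2, 6) + interval_prob(4, 8) - interval_prob(4, 6)

print(p_simple, p_disjoint, p_overlap)
```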
3 The Probability Density Function

Let's try to figure out what the probability of X = 5 is, in our uniform example. We know how to calculate the probability of intervals, so let's try to get it as a limit of intervals around 5:

    Pr(4 ≤ X ≤ 6) = F(6) − F(4) = 0.2
    Pr(4.5 ≤ X ≤ 5.5) = F(5.5) − F(4.5) = 0.1
    Pr(4.95 ≤ X ≤ 5.05) = F(5.05) − F(4.95) = 0.01
    Pr(4.995 ≤ X ≤ 5.005) = F(5.005) − F(4.995) = 0.001

Well, you can see where this is going. As we take smaller and smaller intervals around the point X = 5, we get a smaller and smaller probability, and clearly in the limit that probability will be exactly 0. (This matches what we'd get from just plugging away with the CDF: F(5) − F(5) = 0.)

What does this mean? Remember that probabilities are long-run frequencies: Pr(X = 5) is the fraction of the time we expect to get the value of exactly 5, in infinitely many repetitions of our paper-airplane-throwing experiment. But with a really continuous random variable, we never expect to repeat any particular value. We could come close, but there are uncountably many alternatives, all just as likely as X = 5, so hits on that exact point are vanishingly rare.

Talking about the probabilities of particular points isn't very useful, then, for continuous variables, because the probability of any one point is zero. Instead, we talk about the density of probability. In physics and chemistry, similarly, when we deal with continuous substances, we use the density rather than the mass at particular points. Mass density is defined as mass per unit volume; let's define probability density the same way. More specifically, let's look at the probability of a small interval around 5, say [5 − ε, 5 + ε], and divide that by the length of the interval, so we have probability per unit length:

    Pr(5 − ε ≤ X ≤ 5 + ε) / (2ε) = [F(5 + ε) − F(5 − ε)] / (2ε)
                                 = [(5 + ε) − (5 − ε)] / [(2ε)(10)]
                                 = 2ε / [(2ε)(10)]
                                 = 1/10

Clearly, there was nothing special about the point 5 here; we could have used any point in the interval [0, 10], and we'd have gotten the same answer.
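The shrinking-interval calculation is easy to replicate: the probability of [5 − ε, 5 + ε] goes to zero with ε, but the probability per unit length stays at 1/10. A minimal sketch, where F is the uniform-[0, 10] CDF:

```python
def F(x):
    # CDF of the uniform distribution on [0, 10]
    return min(max(x / 10.0, 0.0), 1.0)

for eps in [1.0, 0.5, 0.05, 0.005]:
    prob = F(5 + eps) - F(5 - eps)   # shrinks toward zero with eps
    density = prob / (2 * eps)       # probability per unit length stays 1/10
    print(eps, prob, density)
```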
Let's formally define the probability density function (pdf) of a random variable X, with cumulative distribution function F(x), as the derivative of the CDF:

    f(x) = dF/dx

To make this concrete, let's calculate the pdf for our paper-airplane example:

    f(x) = 0       for x < 0
    f(x) = 1/10    for 0 ≤ x ≤ 10
    f(x) = 0       for x > 10
[Figure 3: Probability density function of a random variable uniformly distributed over [0, 10].]
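Since the pdf is defined as the derivative of the CDF, a finite-difference approximation to dF/dx should recover 1/10 inside the interval and 0 outside; a sketch (both function names are ours):

```python
def F(x):
    # CDF of the uniform distribution on [0, 10]
    return min(max(x / 10.0, 0.0), 1.0)

def pdf_approx(x, h=1e-6):
    # symmetric finite-difference approximation to f(x) = dF/dx
    return (F(x + h) - F(x - h)) / (2 * h)

# roughly 1/10 inside [0, 10], exactly 0 well outside it
print(pdf_approx(5.0), pdf_approx(-3.0), pdf_approx(12.0))
```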
From the fundamental theorem of calculus,

    F(x) = ∫_{−∞}^{x} f(y) dy

So, for any interval [a, b],

    Pr(a ≤ X ≤ b) = ∫_{a}^{b} f(x) dx

That is, the probability of a region is the area under the curve of the density in that region. For small ε,

    Pr(x ≤ X ≤ x + ε) ≈ ε f(x)

Notice that I write the CDF with an upper-case F, and the pdf with a lower-case f: the density, which is about small regions, gets the small letter.

With discrete variables, we used the probability mass function p(x) to keep track of the probability of individual points. With continuous variables, we'll use the pdf f(x) similarly, to keep track of probability densities. We'll just have to be careful of the fact that it's a probability density and not a probability, so we'll have to do integrals, and not just sums.

From the definition, and the properties of CDFs, we can deduce some properties of pdfs:

- pdfs are non-negative: f(x) ≥ 0. CDFs are non-decreasing, so their derivatives are non-negative.
- pdfs go to zero at the far left and the far right: lim_{x→−∞} f(x) = lim_{x→+∞} f(x) = 0. Because F(x) approaches fixed limits at ±∞, its derivative has to go to zero.
- pdfs integrate to one: ∫_{−∞}^{∞} f(x) dx = 1. The integral is F(+∞), which has to be 1.

Any function which satisfies these properties can be used as a pdf. Suppose we have a function g which satisfies the first two properties of a pdf, but ∫ g(x) dx = C ≠ 1. Then we can normalize g to get a pdf:

    f(x) = g(x)/C = g(x) / ∫ g(y) dy

is a pdf, because it satisfies all three properties.

To see that in action, let's say g(x) = e^{−λx} if x ≥ 0, and g(x) = 0 otherwise. Then

    ∫ g(x) dx = ∫_{0}^{∞} e^{−λx} dx = −(1/λ)[e^{−λx}]_{0}^{∞} = (1 − 0)/λ = 1/λ

so

    f(x) = g(x)/(1/λ) = λe^{−λx}    for x ≥ 0
    f(x) = 0                        for x < 0
[Figure 4: Exponential density with λ = 7.]
is a pdf. This is called the exponential density, and we'll be using it a lot. (See Figure 4.) What is the corresponding CDF?

    F(x) = ∫_{−∞}^{x} f(y) dy
         = ∫_{0}^{x} λe^{−λy} dy       for x ≥ 0 (and 0 for x < 0)
         = −[e^{−λy}]_{0}^{x}
         = 1 − e^{−λx}                 for x ≥ 0 (and 0 for x < 0)

(See Figure 5.) Let's check that this is a valid cumulative distribution function:

1. Non-negative: If x < 0, then F(x) = 0, which is non-negative. If x ≥ 0, then e^{−λx} ≤ 1, so F(x) = 1 − e^{−λx} ≥ 0.
2. Goes to 0 on the left: F(x) = 0 if x < 0, so lim_{x→−∞} F(x) = 0.
3. Goes to 1 on the right: lim_{x→+∞} F(x) = lim_{x→+∞} (1 − e^{−λx}) = 1 − lim_{x→+∞} e^{−λx} = 1.
4. Non-decreasing: If 0 ≤ a < b, then F(b) − F(a) = 1 − e^{−λb} − (1 − e^{−λa}) = e^{−λa} − e^{−λb} > 0. If a < 0 ≤ b, then F(b) ≥ 0 = F(a), and clearly F(b) ≥ F(a).

So the F we obtain by integration makes for a good cumulative distribution function. The next figure summarizes the relationships between cumulative distribution functions, probability density functions, and un-normalized functions like g.

Expectation

I said that for most purposes the probability density function f(x) of a continuous variable works like the probability mass function p(x) of a discrete variable,
[Figure 5: Cumulative distribution function of an exponential random variable with λ = 7.]
[Diagram: a non-negative integrable function g, normalized by its integral, gives the pdf f; integrating f gives the CDF F; differentiating F gives back f.]

and one place that's true is when it comes to defining expectations. Remember that for discrete variables

    E[X] = Σ_x x p(x)

For a continuous variable, we just substitute f(x) for p(x) and an integral for a sum:

    E[X] = ∫ x f(x) dx

All of the rules which we learned for discrete expectations still hold for continuous expectations. Let's see how this works for the uniform-over-[0, 10] example:

    E[X] = ∫ x f(x) dx = ∫_{0}^{10} x · (1/10) dx = (1/10)(1/2)[x²]_{0}^{10} = (100 − 0)/20 = 5

Notice that 5 is the mid-point of the interval [0, 10]. Suppose we had a uniform distribution over another interval, say (to be imaginative) [a, b]. What would the expectation be? First, find the CDF F(x), from the same kind of reasoning we used on the interval [0, 10]: the probability of an interval is its length, divided by the total length. Then, find the pdf, f(x) = dF/dx; finally, get the expectation,
by integrating x f(x):

    F(x) = 0                  for x < a
    F(x) = (x − a)/(b − a)    for a ≤ x ≤ b
    F(x) = 1                  for b < x

    f(x) = 1/(b − a)          for a ≤ x ≤ b
    f(x) = 0                  otherwise

    E[X] = ∫_{a}^{b} x · 1/(b − a) dx
         = 1/(b − a) · (1/2)[x²]_{a}^{b}
         = (b² − a²) / (2(b − a))
         = (b − a)(b + a) / (2(b − a))
         = (b + a)/2

In words, the expectation of a uniform distribution is always the mid-point of its range.

4 Uniform Random Variables in General

We've already seen most of the important results for uniform random variables, but we might as well collect them all in one place. The uniform distribution on the interval [a, b], written U(a, b) or Uni(a, b), has two parameters, namely the end-points of the interval. The CDF is F(x) = (x − a)/(b − a) inside the interval, and the pdf is f(x) = 1/(b − a) there. The expectation E[X] = (b + a)/2 is the mid-point of the interval, and the variance is Var(X) = (b − a)²/12. Notice that if X ~ U(a, b), then cX + d ~ U(ca + d, cb + d) (for c > 0).

4.1 Turning Non-Uniform Random Variables into Uniform Ones

Suppose X is a non-uniform random variable with CDF F. Then F(X) is again a random variable, but it's always uniform on the interval [0, 1], because F(X) ≤ 0.5 exactly half the time, F(X) ≤ 0.25 exactly a quarter of the time, and so on. This is one way to test whether you've guessed the right distribution for a non-uniform variable: if you think the CDF is F, then calculate F(x_i) for all the data points x_i you have, and the result should be very close to uniform on the unit interval. (We will come back to this idea later.)
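We can check the claim that F(X) is uniform by simulation, pushing exponential draws through their own CDF and looking at the quartiles of the result. A minimal sketch; the draws are generated by inverting the CDF, which is the trick described in the next section:

```python
import math
import random

random.seed(0)
lam = 2.0

# exponential draws, generated by inverting the CDF
xs = [-math.log(1 - random.random()) / lam for _ in range(100000)]

# push each draw through its own CDF, F(x) = 1 - e^{-lam*x}
us = [1 - math.exp(-lam * x) for x in xs]

# if F(X) is uniform on [0, 1], about a quarter of the values fall below 0.25, etc.
def frac_below(c):
    return sum(u <= c for u in us) / len(us)

print(frac_below(0.25), frac_below(0.5), frac_below(0.75))
```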
4.2 Turning Uniform Random Variables into Non-Uniform Ones

Computer random number generators typically produce random numbers uniformly distributed between 0 and 1. If we want to simulate a non-uniform random variable, and we know its distribution function, then we can reverse the trick in the last section: if Y ~ U(0, 1), then F⁻¹(Y) has cumulative distribution function F.

We can think of this as follows: our random number generator gives us a number y between 0 and 1. We then slide along the domain of the CDF F, starting at the left and working to the right, until we come to a number x where F(x) = y. Because F is non-decreasing, we'll never have to back-track. This x is F⁻¹(y), by definition. Because, again, F is non-decreasing, we know that F⁻¹(a) ≤ F⁻¹(b) if and only if a ≤ b. If we pick a = y and b = F(x), then this gives us that F⁻¹(y) ≤ x if and only if y ≤ F(x). So, since 0 ≤ F(x) ≤ 1,

    Pr(F⁻¹(Y) ≤ x) = Pr(Y ≤ F(x)) = F(x)

because, remember, Y is uniformly distributed between 0 and 1. Using this trick assumes that we can compute F⁻¹, the inverse of the CDF. This isn't always available in closed form.

5 Exponential Random Variables

Exponential random variables generally arise from decay processes. The idea is that things (radioactive atoms, unstable molecules, etc.) have a certain probability per unit time of decaying, and we ask how long we have to wait before seeing the decay. (The model applies to any kind of event with a constant rate over time.) They also arise in statistical mechanics and physical chemistry, where the probability that a molecule has energy E is proportional to e^{−E/k_B T}, where T is the absolute temperature and k_B is Boltzmann's constant.

There's a constant probability of decay per unit time; call it (with malice aforethought) λ. So let's ask what the probability is that the decay happened by time t, i.e., Pr(0 ≤ T ≤ t). We can imagine dividing that interval up into n equal parts. The probability of decay in each sub-interval is then λt/n.
The probability of no decay by time t is

    (1 − λt/n)ⁿ

because we can't have had a decay event in any of the n sub-intervals. So

    Pr(0 ≤ T ≤ t) = 1 − (1 − λt/n)ⁿ

Since time is continuous, we should really take the limit as n → ∞:

    Pr(0 ≤ T ≤ t) = lim_{n→∞} [1 − (1 − λt/n)ⁿ]
14 ( 1 lim 1 λt ) n n n 1 e λt which is the CDF of the exponential distribution. The expectation of an exponential variable is E [X] 1/λ; its variance, E [X] 1/λ 2. I leave showing both of these facts as exercises; there are several ways to do it, including integration by parts. 6 Discrete vs. Continuous Variables Discrete p.m.f. p(x) Pr (X x) (...) p(x) CDF F (x) Pr (X x) yx E [X] x xp(x) y p(y) Continuous p.d.f. f(x) df dx Pr(x X x+ɛ) ɛ (...) f(x)dx CDF F (x) Pr (X x) x f(y)dy E [X] xf(x)dx 14
1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
BANACH AND HILBERT SPACE REVIEW
BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but
Lesson 20. Probability and Cumulative Distribution Functions
Lesson 20 Probability and Cumulative Distribution Functions Recall If p(x) is a density function for some characteristic of a population, then Recall If p(x) is a density function for some characteristic
THE BANACH CONTRACTION PRINCIPLE. Contents
THE BANACH CONTRACTION PRINCIPLE ALEX PONIECKI Abstract. This paper will study contractions of metric spaces. To do this, we will mainly use tools from topology. We will give some examples of contractions,
Chapter 5. Random variables
Random variables random variable numerical variable whose value is the outcome of some probabilistic experiment; we use uppercase letters, like X, to denote such a variable and lowercase letters, like
Continued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS
RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. DISCRETE RANDOM VARIABLES.. Definition of a Discrete Random Variable. A random variable X is said to be discrete if it can assume only a finite or countable
Constrained optimization.
ams/econ 11b supplementary notes ucsc Constrained optimization. c 2010, Yonatan Katznelson 1. Constraints In many of the optimization problems that arise in economics, there are restrictions on the values
6.4 Logarithmic Equations and Inequalities
6.4 Logarithmic Equations and Inequalities 459 6.4 Logarithmic Equations and Inequalities In Section 6.3 we solved equations and inequalities involving exponential functions using one of two basic strategies.
Lab 11. Simulations. The Concept
Lab 11 Simulations In this lab you ll learn how to create simulations to provide approximate answers to probability questions. We ll make use of a particular kind of structure, called a box model, that
Domain of a Composition
Domain of a Composition Definition Given the function f and g, the composition of f with g is a function defined as (f g)() f(g()). The domain of f g is the set of all real numbers in the domain of g such
Linear Programming Notes V Problem Transformations
Linear Programming Notes V Problem Transformations 1 Introduction Any linear programming problem can be rewritten in either of two standard forms. In the first form, the objective is to maximize, the material
Statistics and Random Variables. Math 425 Introduction to Probability Lecture 14. Finite valued Random Variables. Expectation defined
Expectation Statistics and Random Variables Math 425 Introduction to Probability Lecture 4 Kenneth Harris [email protected] Department of Mathematics University of Michigan February 9, 2009 When a large
M2S1 Lecture Notes. G. A. Young http://www2.imperial.ac.uk/ ayoung
M2S1 Lecture Notes G. A. Young http://www2.imperial.ac.uk/ ayoung September 2011 ii Contents 1 DEFINITIONS, TERMINOLOGY, NOTATION 1 1.1 EVENTS AND THE SAMPLE SPACE......................... 1 1.1.1 OPERATIONS
7.6 Approximation Errors and Simpson's Rule
WileyPLUS: Home Help Contact us Logout Hughes-Hallett, Calculus: Single and Multivariable, 4/e Calculus I, II, and Vector Calculus Reading content Integration 7.1. Integration by Substitution 7.2. Integration
STT315 Chapter 4 Random Variables & Probability Distributions KM. Chapter 4.5, 6, 8 Probability Distributions for Continuous Random Variables
Chapter 4.5, 6, 8 Probability Distributions for Continuous Random Variables Discrete vs. continuous random variables Examples of continuous distributions o Uniform o Exponential o Normal Recall: A random
Generating Random Variables and Stochastic Processes
Monte Carlo Simulation: IEOR E4703 Fall 2004 c 2004 by Martin Haugh Generating Random Variables and Stochastic Processes 1 Generating U(0,1) Random Variables The ability to generate U(0, 1) random variables
Normal distribution. ) 2 /2σ. 2π σ
Normal distribution The normal distribution is the most widely known and used of all distributions. Because the normal distribution approximates many natural phenomena so well, it has developed into a
What is Statistics? Lecture 1. Introduction and probability review. Idea of parametric inference
0. 1. Introduction and probability review 1.1. What is Statistics? What is Statistics? Lecture 1. Introduction and probability review There are many definitions: I will use A set of principle and procedures
Generating Random Variables and Stochastic Processes
Monte Carlo Simulation: IEOR E4703 c 2010 by Martin Haugh Generating Random Variables and Stochastic Processes In these lecture notes we describe the principal methods that are used to generate random
Chapter 4. Probability and Probability Distributions
Chapter 4. robability and robability Distributions Importance of Knowing robability To know whether a sample is not identical to the population from which it was selected, it is necessary to assess the
Eigenvalues, Eigenvectors, and Differential Equations
Eigenvalues, Eigenvectors, and Differential Equations William Cherry April 009 (with a typo correction in November 05) The concepts of eigenvalue and eigenvector occur throughout advanced mathematics They
VISUALIZATION OF DENSITY FUNCTIONS WITH GEOGEBRA
VISUALIZATION OF DENSITY FUNCTIONS WITH GEOGEBRA Csilla Csendes University of Miskolc, Hungary Department of Applied Mathematics ICAM 2010 Probability density functions A random variable X has density
x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1
Implicit Functions Defining Implicit Functions Up until now in this course, we have only talked about functions, which assign to every real number x in their domain exactly one real number f(x). The graphs
Two Fundamental Theorems about the Definite Integral
Two Fundamental Theorems about the Definite Integral These lecture notes develop the theorem Stewart calls The Fundamental Theorem of Calculus in section 5.3. The approach I use is slightly different than
A power series about x = a is the series of the form
POWER SERIES AND THE USES OF POWER SERIES Elizabeth Wood Now we are finally going to start working with a topic that uses all of the information from the previous topics. The topic that we are going to
2.6. Probability. In general the probability density of a random variable satisfies two conditions:
2.6. PROBABILITY 66 2.6. Probability 2.6.. Continuous Random Variables. A random variable a real-valued function defined on some set of possible outcomes of a random experiment; e.g. the number of points
