Section 5.1 Continuous Random Variables: Introduction


Not all random variables are discrete. For example:

1. Waiting times for anything (train, arrival of a customer, production of an mRNA molecule from a gene, etc.).
2. Distance a ball is thrown.
3. Size of an antenna on a bug.

The general idea is that now the sample space is uncountable, so probability mass functions and summation no longer work.

DEFINITION: We say that $X$ is a continuous random variable if the sample space is uncountable and there exists a nonnegative function $f$, defined for all $x \in (-\infty, \infty)$, having the property that for any set $B \subseteq \mathbb{R}$,
$$P\{X \in B\} = \int_B f(x)\,dx$$
The function $f$ is called the probability density function of the random variable $X$, and is (sort of) the analogue of the probability mass function in the discrete case. So probabilities are now found by integration, rather than summation.

REMARK: We must have
$$1 = P\{-\infty < X < \infty\} = \int_{-\infty}^{\infty} f(x)\,dx$$
Also, taking $B = [a, b]$ for $a < b$ we have
$$P\{a \le X \le b\} = \int_a^b f(x)\,dx$$
Note that taking $a = b$ yields the moderately counter-intuitive
$$P\{X = a\} = \int_a^a f(x)\,dx = 0$$
Recall: for discrete random variables, the probability mass function can be reconstructed from the distribution function and vice versa, and jumps in the distribution function corresponded to where the mass of the probability was. We now have
$$F(t) = P\{X \le t\} = \int_{-\infty}^{t} f(x)\,dx$$
and so
$$F'(t) = f(t)$$

agreeing with the previous interpretation. Also,
$$P(X \in (a, b)) = P(X \in [a, b]) = \cdots = \int_a^b f(t)\,dt = F(b) - F(a)$$
The density function does not itself represent a probability. However, its integral gives the probability of being in certain regions of $\mathbb{R}$, and $f(a)$ gives a measure of the likelihood of being around $a$. That is,
$$P\{a - \epsilon/2 < X < a + \epsilon/2\} = \int_{a-\epsilon/2}^{a+\epsilon/2} f(t)\,dt \approx f(a)\,\epsilon$$
when $\epsilon$ is small and $f$ is continuous at $a$. Thus, the probability that $X$ will be in an interval around $a$ of size $\epsilon$ is approximately $\epsilon f(a)$. So, if $f(a) < f(b)$, then
$$P(a - \epsilon/2 < X < a + \epsilon/2) \approx \epsilon f(a) < \epsilon f(b) \approx P(b - \epsilon/2 < X < b + \epsilon/2)$$
In other words, $f(a)$ is a measure of how likely it is that the random variable takes a value near $a$.

EXAMPLES:

1. The amount of time you must wait, in minutes, for the appearance of an mRNA molecule is a continuous random variable with density
$$f(t) = \begin{cases} \lambda e^{-3t}, & t \ge 0\\ 0, & t < 0 \end{cases}$$
What is the probability that

1. you will have to wait between 1 and 2 minutes?
2. you will have to wait longer than 1/2 minute?

Solution: First, we were not given $\lambda$. We need
$$1 = \int_{-\infty}^{\infty} f(t)\,dt = \int_0^{\infty} \lambda e^{-3t}\,dt = -\frac{\lambda}{3} e^{-3t}\Big|_{t=0}^{\infty} = \frac{\lambda}{3}$$
Therefore, $\lambda = 3$, and $f(t) = 3e^{-3t}$ for $t \ge 0$. Thus,
$$P\{1 < X < 2\} = \int_1^2 3e^{-3t}\,dt = e^{-3} - e^{-6}$$
$$P\{X > 1/2\} = \int_{1/2}^{\infty} 3e^{-3t}\,dt = e^{-3/2}$$
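The two waiting-time probabilities above are easy to sanity-check numerically; a minimal sketch, using the rate $\lambda = 3$ found from the normalization:

```python
import math

# Density found above: f(t) = 3*exp(-3t) for t >= 0
lam = 3.0

def F(t):
    """CDF of the Exp(3) waiting time: F(t) = 1 - exp(-3t)."""
    return 1.0 - math.exp(-lam * t)

# P{1 < X < 2} = F(2) - F(1) = e^{-3} - e^{-6}
p_between = F(2) - F(1)
# P{X > 1/2} = 1 - F(1/2) = e^{-3/2}
p_tail = 1 - F(0.5)

# Cross-check P{1 < X < 2} with a crude left-endpoint Riemann sum of the density
dt = 1e-4
riemann = sum(lam * math.exp(-lam * (1 + i * dt)) * dt
              for i in range(int(1 / dt)))
```

The Riemann sum agrees with the closed form to about four decimal places, which is all one should expect from a step size of $10^{-4}$.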

2. If $X$ is a continuous random variable with distribution function $F_X$ and density $f_X$, find the density function of $Y = kX$.

Solution: We will do it two ways (as in the book); assume $k > 0$. We have
$$F_Y(t) = P\{Y \le t\} = P\{kX \le t\} = P\{X \le t/k\} = F_X(t/k)$$
Differentiating with respect to $t$ yields
$$f_Y(t) = \frac{d}{dt} F_X(t/k) = \frac{1}{k} f_X(t/k)$$
Another derivation is the following. We have
$$\epsilon f_Y(a) \approx P\{a - \epsilon/2 \le Y \le a + \epsilon/2\} = P\{a - \epsilon/2 \le kX \le a + \epsilon/2\}$$
$$= P\{a/k - \epsilon/(2k) \le X \le a/k + \epsilon/(2k)\} \approx \frac{\epsilon}{k} f_X(a/k)$$
Dividing by $\epsilon$ yields the same result as before.

Returning to our previous example with $f_X(t) = 3e^{-3t}$: if $Y = 3X$, then
$$f_Y(t) = \frac{1}{3} f_X(t/3) = e^{-t}, \quad t \ge 0.$$
What if $Y = kX + b$? We have
$$F_Y(t) = P\{Y \le t\} = P\{kX + b \le t\} = P\{X \le (t-b)/k\} = F_X\!\left(\frac{t-b}{k}\right)$$
Differentiating with respect to $t$ yields
$$f_Y(t) = \frac{d}{dt} F_X\!\left(\frac{t-b}{k}\right) = \frac{1}{k} f_X\!\left(\frac{t-b}{k}\right)$$
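The scaling rule can be checked by simulation; a sketch: with $X \sim \mathrm{Exp}(3)$ and $Y = 3X$, the computation above says $Y \sim \mathrm{Exp}(1)$, so the sample mean and variance of $Y$ should both be close to 1.

```python
import random

random.seed(0)
n = 200_000
# X ~ Exp(3) via random.expovariate(rate); then Y = 3X should be Exp(1)
ys = [3 * random.expovariate(3.0) for _ in range(n)]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n
```

Both sample moments land within a couple of percent of 1 for this sample size, consistent with $f_Y(t) = e^{-t}$.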

Section 5.2 Expectation and Variance of Continuous Random Variables

DEFINITION: If $X$ is a random variable with density function $f$, then the expected value is
$$E[X] = \int_{-\infty}^{\infty} x f(x)\,dx$$
This is analogous to the discrete case:

1. Discretize $X$ into small ranges $(x_{i-1}, x_i]$ where $x_i - x_{i-1} = h$ is small.
2. Now think of $X$ as discrete with the values $x_i$.
3. Then,
$$E[X] \approx \sum_i x_i\, p(x_i) = \sum_i x_i\, P(x_{i-1} < X \le x_i) \approx \sum_i x_i f(x_i)\, h \approx \int x f(x)\,dx$$

EXAMPLES:

1. Suppose that $X$ has the density function
$$f(x) = \frac{x}{2}, \quad 0 \le x \le 2$$
Find $E[X]$.

Solution: We have
$$E[X] = \int_{-\infty}^{\infty} x f(x)\,dx = \int_0^2 x \cdot \frac{x}{2}\,dx = \frac{1}{6} x^3 \Big|_0^2 = \frac{8}{6} = \frac{4}{3}$$

2. Suppose that $f_X(x) = 1$ for $x \in (0, 1)$. Find $E[e^X]$.

Solution: Let $Y = e^X$. We need to find the density of $Y$; then we can use the definition of $E[Y]$. Because the range of $X$ is $(0,1)$, the range of $Y$ is $(1, e)$. For $1 \le x \le e$ we have
$$F_Y(x) = P\{Y \le x\} = P\{e^X \le x\} = P\{X \le \ln x\} = F_X(\ln x) = \int_0^{\ln x} f_X(t)\,dt = \ln x$$
Differentiating both sides, we get $f_Y(x) = 1/x$ for $1 \le x \le e$ and zero otherwise. Thus,
$$E[e^X] = E[Y] = \int_{-\infty}^{\infty} x f_Y(x)\,dx = \int_1^e 1\,dx = e - 1$$

As in the discrete case, there is a theorem making these computations much easier.

THEOREM: Let $X$ be a continuous random variable with density function $f$. Then for any $g : \mathbb{R} \to \mathbb{R}$ we have
$$E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx$$
Note how easy this makes the previous example:
$$E[e^X] = \int_0^1 e^x\,dx = e - 1$$
To prove the theorem in the special case that $g(x) \ge 0$ we need the following:

LEMMA: For a nonnegative random variable $Y$,
$$E[Y] = \int_0^{\infty} P\{Y > y\}\,dy$$
Proof: Interchanging the order of integration, we have
$$\int_0^{\infty} P\{Y > y\}\,dy = \int_0^{\infty}\!\!\int_y^{\infty} f_Y(x)\,dx\,dy = \int_0^{\infty}\!\!\int_0^x f_Y(x)\,dy\,dx = \int_0^{\infty} x f_Y(x)\,dx = E[Y]$$

Proof of Theorem: For any nonnegative function $g : \mathbb{R} \to \mathbb{R}$ (the general case is similar) we have
$$E[g(X)] = \int_0^{\infty} P\{g(X) > y\}\,dy = \int_0^{\infty}\!\int_{x :\, g(x) > y} f(x)\,dx\,dy$$
$$= \iint_{\{(x,y) :\, g(x) > y\}} f(x)\,dx\,dy = \int_{x :\, g(x) > 0} f(x) \int_0^{g(x)} dy\,dx = \int_{x :\, g(x) > 0} f(x) g(x)\,dx = \int_{-\infty}^{\infty} f(x) g(x)\,dx$$
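The theorem also has a direct Monte Carlo interpretation: averaging $g(X)$ over samples of $X$ estimates $\int g(x) f(x)\,dx$. A sketch for the example above:

```python
import math
import random

random.seed(1)
n = 500_000
# X ~ Uniform(0,1), g(x) = e^x; the theorem gives E[e^X] = e - 1
estimate = sum(math.exp(random.random()) for _ in range(n)) / n
exact = math.e - 1  # = 1.71828...
```

With half a million samples the estimate typically agrees with $e - 1$ to two or three decimal places.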

Of course, it immediately follows from the above theorem that for any constants $a$ and $b$ we have
$$E[aX + b] = aE[X] + b$$
In fact, putting $g(x) = ax + b$ in the previous theorem, we get
$$E[aX + b] = \int_{-\infty}^{\infty} (ax + b) f_X(x)\,dx = a \int_{-\infty}^{\infty} x f_X(x)\,dx + b \int_{-\infty}^{\infty} f_X(x)\,dx = aE[X] + b$$
Thus, expected values inherit their linearity from the linearity of the integral.

DEFINITION: If $X$ is a random variable with mean $\mu$, then the variance and standard deviation are given by
$$\mathrm{Var}(X) = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx, \qquad \sigma_X = \sqrt{\mathrm{Var}(X)}$$
We also still have
$$\mathrm{Var}(X) = E[X^2] - (E[X])^2, \qquad \mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X), \qquad \sigma_{aX+b} = |a|\,\sigma_X$$
The proofs are exactly the same as in the discrete case.

EXAMPLE: Consider again $X$ with the density function
$$f(x) = \frac{x}{2}, \quad 0 \le x \le 2$$
Find $\mathrm{Var}(X)$.

Solution: Recall that $E[X] = 4/3$. We now find the second moment:
$$E[X^2] = \int_{-\infty}^{\infty} x^2 f(x)\,dx = \int_0^2 x^2 \cdot \frac{x}{2}\,dx = \frac{1}{8} x^4 \Big|_0^2 = \frac{16}{8} = 2$$
Therefore,
$$\mathrm{Var}(X) = E[X^2] - E[X]^2 = 2 - (4/3)^2 = \frac{18}{9} - \frac{16}{9} = \frac{2}{9}$$
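These moment integrals are simple enough to confirm with a midpoint-rule quadrature; a sketch:

```python
# Density f(x) = x/2 on [0, 2]; midpoint rule for the first two moments
dx = 1e-4
xs = [(i + 0.5) * dx for i in range(int(2 / dx))]
mean = sum(x * (x / 2) * dx for x in xs)        # E[X]   = 4/3
second = sum(x * x * (x / 2) * dx for x in xs)  # E[X^2] = 2
var = second - mean ** 2                        # Var(X) = 2/9
```

Because the integrands are polynomials, the midpoint rule is accurate to far more digits than we need here.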

Section 5.3 The Uniform Random Variable

We first consider the interval $(0, 1)$ and say a random variable $X$ is uniformly distributed over $(0, 1)$ if its density function is
$$f(x) = \begin{cases} 1, & 0 < x < 1\\ 0, & \text{else} \end{cases}$$
Note that
$$\int_{-\infty}^{\infty} f(x)\,dx = \int_0^1 dx = 1$$
Also, as $f(x) = 0$ outside of $(0, 1)$, $X$ only takes values in $(0, 1)$. As $f$ is constant, $X$ is equally likely to be near each number. That is, each small region of size $\epsilon$ is equally likely to contain $X$ (and has a probability of $\epsilon$ of doing so). Note that for any $0 \le a \le b \le 1$ we have
$$P\{a \le X \le b\} = \int_a^b f(x)\,dx = b - a$$

General Case: Now consider an interval $(a, b)$. We say that $X$ is uniformly distributed on $(a, b)$ if
$$f(t) = \begin{cases} \dfrac{1}{b-a}, & a < t < b\\ 0, & \text{else} \end{cases}
\qquad \text{and} \qquad
F(t) = \begin{cases} 0, & t < a\\ \dfrac{t-a}{b-a}, & a \le t < b\\ 1, & t \ge b \end{cases}$$
We can compute the expected value and variance straightaway:

$$E[X] = \int_a^b \frac{x}{b-a}\,dx = \frac{1}{2(b-a)}(b^2 - a^2) = \frac{b+a}{2}$$
Similarly,
$$E[X^2] = \int_a^b \frac{x^2}{b-a}\,dx = \frac{b^3 - a^3}{3(b-a)} = \frac{b^2 + ab + a^2}{3}$$
Therefore,
$$\mathrm{Var}(X) = E[X^2] - E[X]^2 = \frac{b^2 + ab + a^2}{3} - \left(\frac{b+a}{2}\right)^2 = \frac{(b-a)^2}{12}$$
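The formulas $E[X] = (a+b)/2$ and $\mathrm{Var}(X) = (b-a)^2/12$ can be spot-checked by simulation; a sketch using $a = -1$, $b = 7$ (the density of the example that follows):

```python
import random

random.seed(2)
a, b = -1.0, 7.0
n = 300_000
xs = [random.uniform(a, b) for _ in range(n)]
mean = sum(xs) / n                           # should be near (a+b)/2  = 3
var = sum((x - mean) ** 2 for x in xs) / n   # should be near (b-a)^2/12 = 16/3
p_less_2 = sum(x < 2 for x in xs) / n        # should be near (2-(-1))/8 = 3/8
```

All three sample quantities fall within simulation noise of the exact values.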

EXAMPLE: Consider a uniform random variable with density
$$f(x) = \begin{cases} \dfrac{1}{8}, & x \in (-1, 7)\\ 0, & \text{else} \end{cases}$$
Find $P\{X < 2\}$.

Solution: The range of $X$ is $(-1, 7)$. Thus, we have
$$P\{X < 2\} = \int_{-1}^{2} f(x)\,dx = \int_{-1}^{2} \frac{1}{8}\,dx = \frac{3}{8}$$

Section 5.4 Normal Random Variables

We say that $X$ is a normal random variable, or simply that $X$ is normal, with parameters $\mu$ and $\sigma^2$ if the density is given by
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
This density is a bell-shaped curve, symmetric about $\mu$.

It should NOT be apparent that this is a density. We need to check that it integrates to one. Making the substitution $y = (x - \mu)/\sigma$, we have
$$\int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-y^2/2}\,dy$$

We will show that the integral is equal to $\sqrt{2\pi}$. To do so, we actually compute the square of the integral. Define
$$I = \int_{-\infty}^{\infty} e^{-y^2/2}\,dy$$
Then,
$$I^2 = \int_{-\infty}^{\infty} e^{-y^2/2}\,dy \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy$$
Now we switch to polar coordinates. That is,
$$x = r\cos\theta, \quad y = r\sin\theta, \quad dx\,dy = r\,d\theta\,dr$$
Thus,
$$I^2 = \int_0^{\infty}\!\int_0^{2\pi} e^{-r^2/2}\, r\,d\theta\,dr = \int_0^{\infty} e^{-r^2/2}\, r \int_0^{2\pi} d\theta\,dr = \int_0^{\infty} 2\pi e^{-r^2/2}\, r\,dr = -2\pi e^{-r^2/2}\Big|_{r=0}^{\infty} = 2\pi$$

Scaling a normal random variable gives you another normal random variable.

THEOREM: Let $X$ be Normal$(\mu, \sigma^2)$. Then $Y = aX + b$ is a normal random variable with parameters $a\mu + b$ and $a^2\sigma^2$.

Proof: We let $a > 0$ (the proof for $a < 0$ is similar). We first consider the distribution function of $Y$. We have
$$F_Y(t) = P\{Y \le t\} = P\{aX + b \le t\} = P\{X \le (t-b)/a\} = F_X\!\left(\frac{t-b}{a}\right)$$
Differentiating both sides, we get
$$f_Y(t) = \frac{1}{a} f_X\!\left(\frac{t-b}{a}\right) = \frac{1}{\sigma a \sqrt{2\pi}} \exp\left\{-\frac{\left(\frac{t-b}{a} - \mu\right)^2}{2\sigma^2}\right\} = \frac{1}{\sigma a \sqrt{2\pi}} \exp\left\{-\frac{(t - b - a\mu)^2}{2(a\sigma)^2}\right\}$$
which is the density of a normal random variable with parameters $a\mu + b$ and $a^2\sigma^2$.

From the above theorem it follows that if $X$ is Normal$(\mu, \sigma^2)$, then
$$Z = \frac{X - \mu}{\sigma}$$
is normally distributed with parameters $0$ and $1$. Such a random variable is called a standard normal random variable. Note that its density is simply
$$f_Z(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$$
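The polar-coordinate identity $I = \sqrt{2\pi}$ can be confirmed by a plain midpoint-rule sum; a sketch (the tails beyond $|y| = 10$ contribute on the order of $e^{-50}$ and are negligible):

```python
import math

# Midpoint rule for I = integral of exp(-y^2/2) over [-10, 10]
dy = 1e-3
I = sum(math.exp(-((-10 + (i + 0.5) * dy) ** 2) / 2) * dy
        for i in range(int(20 / dy)))
# I should be very close to sqrt(2*pi) = 2.5066...
```

The numerical value matches $\sqrt{2\pi}$ to six or more decimal places at this step size.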

By the above scaling, we see that we can compute the mean and variance of all normal random variables if we can compute them for the standard normal. In fact, we have
$$E[Z] = \int_{-\infty}^{\infty} x f_Z(x)\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x e^{-x^2/2}\,dx = -\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\Big|_{-\infty}^{\infty} = 0$$
and
$$\mathrm{Var}(Z) = E[Z^2] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^2 e^{-x^2/2}\,dx$$
Integration by parts with $u = x$ and $dv = x e^{-x^2/2}\,dx$ yields $\mathrm{Var}(Z) = 1$. Therefore, we see that if $X = \mu + \sigma Z$, then $X$ is a normal random variable with parameters $\mu, \sigma^2$ and
$$E[X] = \mu, \qquad \mathrm{Var}(X) = \sigma^2\,\mathrm{Var}(Z) = \sigma^2$$

NOTATION: Typically, we denote $\Phi(x) = P\{Z \le x\}$ for a standard normal $Z$. That is,
$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-y^2/2}\,dy$$
Note that by the symmetry of the density we have
$$\Phi(-x) = 1 - \Phi(x)$$
Therefore, it is customary to provide charts with the values of $\Phi(x)$ for $x$ between $0$ and about $3.5$ in small increments (see the table in the book). Note that if $X \sim$ Normal$(\mu, \sigma^2)$, then $\Phi$ can still be used:
$$F_X(a) = P\{X \le a\} = P\{(X - \mu)/\sigma \le (a - \mu)/\sigma\} = \Phi\!\left(\frac{a - \mu}{\sigma}\right)$$
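In code one rarely needs a chart: $\Phi$ can be evaluated from the error function via the identity $\Phi(x) = \tfrac{1}{2}\left(1 + \mathrm{erf}(x/\sqrt{2})\right)$. A sketch:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Symmetry check: Phi(-x) + Phi(x) = 1
check = Phi(-1.3) + Phi(1.3)
# The worked Normal(3, 9) example in this section: P{2 < X < 5}
p = Phi(2 / 3) - Phi(-1 / 3)
```

The value of `p` agrees with the table-based answer $0.3747$ up to the rounding of the table entries.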

EXAMPLE: Let $X \sim$ Normal$(3, 9)$, i.e. $\mu = 3$ and $\sigma^2 = 9$. Find $P\{2 < X < 5\}$.

Solution: We have, where $Z$ is a standard normal,
$$P\{2 < X < 5\} = P\left\{\frac{2-3}{3} < Z < \frac{5-3}{3}\right\} = P\{-1/3 < Z < 2/3\} = \Phi(2/3) - \Phi(-1/3)$$
$$= \Phi(2/3) - (1 - \Phi(1/3)) \approx \Phi(0.66) + \Phi(0.33) - 1 = 0.7454 + 0.6293 - 1 = 0.3747$$

We can use the normal random variable to approximate a binomial. This is a special case of the central limit theorem.

THEOREM (The DeMoivre-Laplace limit theorem): Let $S_n$ denote the number of successes that occur when $n$ independent trials, each with success probability $p$, are performed. Then for any $a < b$ we have, as $n \to \infty$,
$$P\left\{a \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b\right\} \to \Phi(b) - \Phi(a)$$
A rule of thumb is that this is a good approximation when $np(1-p) \ge 10$. Note that there is no assumption of $p$ being small, like in the Poisson approximation.

EXAMPLE: Suppose that a biased coin is flipped 70 times and that the probability of heads is 0.4. Approximate the probability that there are 30 heads. Compare with the actual answer. What can you say about the probability that there are between 20 and 40 heads?

Solution: Let $X$ be the number of times the coin lands on heads, so $np = 28$ and $\sqrt{np(1-p)} = \sqrt{16.8}$. We first note that a normal random variable is continuous, whereas the binomial is discrete. Thus (and this is called the continuity correction), we do the following:
$$P\{X = 30\} = P\{29.5 < X < 30.5\} = P\left\{\frac{29.5 - 28}{\sqrt{16.8}} < \frac{X - 28}{\sqrt{16.8}} < \frac{30.5 - 28}{\sqrt{16.8}}\right\}$$
$$\approx P\{0.366 < Z < 0.61\} \approx \Phi(0.61) - \Phi(0.37) = 0.0848$$
whereas the exact answer is
$$P\{X = 30\} = \binom{70}{30}(0.4)^{30}(0.6)^{40} \approx 0.0853$$
We now evaluate the probability that there are between 20 and 40 heads:
$$P\{20 \le X \le 40\} = P\{19.5 < X < 40.5\} = P\left\{\frac{19.5 - 28}{\sqrt{16.8}} < \frac{X - 28}{\sqrt{16.8}} < \frac{40.5 - 28}{\sqrt{16.8}}\right\}$$
$$\approx P\{-2.07 < Z < 3.05\} \approx \Phi(3.05) - (1 - \Phi(2.07)) \approx 0.9989 - (1 - 0.9808) = 0.9797$$
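The numbers in this example can be reproduced directly; a sketch comparing the exact binomial probability with the continuity-corrected normal approximation:

```python
import math

n, p = 70, 0.4
mu = n * p                            # 28
sigma = math.sqrt(n * p * (1 - p))    # sqrt(16.8)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Exact: P{X = 30} = C(70,30) * 0.4^30 * 0.6^40
exact = math.comb(70, 30) * 0.4 ** 30 * 0.6 ** 40
# Continuity-corrected normal approximation of P{X = 30}
approx = Phi((30.5 - mu) / sigma) - Phi((29.5 - mu) / sigma)
# P{20 <= X <= 40} with the same correction
p_range = Phi((40.5 - mu) / sigma) - Phi((19.5 - mu) / sigma)
```

The exact and approximate values of $P\{X = 30\}$ differ only in the third decimal place, which is what the rule of thumb $np(1-p) = 16.8 \ge 10$ predicts.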

Section 5.5 Exponential Random Variables

Exponential random variables arise as the distribution of the amount of time until some specific event occurs. A continuous random variable with the density function
$$f(x) = \lambda e^{-\lambda x} \text{ for } x \ge 0 \text{ and zero otherwise}$$
for some $\lambda > 0$ is called an exponential random variable with parameter $\lambda$. The distribution function for $t \ge 0$ is
$$F(t) = P\{X \le t\} = \int_0^t \lambda e^{-\lambda x}\,dx = -e^{-\lambda x}\Big|_{x=0}^{t} = 1 - e^{-\lambda t}$$
Computing the moments of an exponential is easy with integration by parts. For $n > 0$:
$$E[X^n] = \int_0^{\infty} x^n \lambda e^{-\lambda x}\,dx = -x^n e^{-\lambda x}\Big|_{x=0}^{\infty} + \int_0^{\infty} n x^{n-1} e^{-\lambda x}\,dx = \frac{n}{\lambda} \int_0^{\infty} x^{n-1} \lambda e^{-\lambda x}\,dx = \frac{n}{\lambda} E[X^{n-1}]$$
Therefore, for $n = 1$ and $n = 2$ we see that
$$E[X] = \frac{1}{\lambda}, \qquad E[X^2] = \frac{2}{\lambda} E[X] = \frac{2}{\lambda^2}, \qquad \ldots, \qquad E[X^n] = \frac{n!}{\lambda^n}$$
and
$$\mathrm{Var}(X) = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}$$

EXAMPLE: Suppose the length of a phone call in minutes is an exponential random variable with parameter $\lambda = 1/10$ (so the average call is 10 minutes). What is the probability that a random call will take (a) more than 8 minutes? (b) between 8 and 22 minutes?

Solution: Let $X$ be the length of the call. We have
$$P\{X > 8\} = 1 - F(8) = 1 - \left(1 - e^{-(1/10)\cdot 8}\right) = e^{-0.8} \approx 0.4493$$
$$P\{8 < X < 22\} = F(22) - F(8) = e^{-0.8} - e^{-2.2} \approx 0.3385$$

Memoryless property: We say that a nonnegative random variable $X$ is memoryless if
$$P\{X > s + t \mid X > t\} = P\{X > s\} \quad \text{for all } s, t \ge 0$$
Note that this property is similar to that of the geometric random variable. Let us show that an exponential random variable satisfies the memoryless property. In fact, we have
$$P\{X > s + t \mid X > t\} = \frac{P\{X > s + t\}}{P\{X > t\}} = \frac{e^{-\lambda(s+t)}}{e^{-\lambda t}} = e^{-\lambda s} = P\{X > s\}$$
One can show that the exponential is the only continuous distribution with this property.
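The memoryless identity can be checked directly from the tail formula $P\{X > t\} = e^{-\lambda t}$; a sketch using the phone-call rate $\lambda = 1/10$:

```python
import math

lam = 1 / 10  # rate: average call length is 10 minutes

def tail(t):
    """P{X > t} for an Exp(lambda) random variable."""
    return math.exp(-lam * t)

# Memoryless: P{X > s + t | X > t} = P{X > s}
s, t = 8.0, 5.0
conditional = tail(s + t) / tail(t)
unconditional = tail(s)  # e^{-0.8}, the 'more than 8 minutes' answer
```

The two quantities agree exactly (up to floating point), and `unconditional` reproduces the $\approx 0.4493$ from the example.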

EXAMPLE: Three people are in a post office. Persons 1 and 2 are at the counter and will each leave after an Exp$(\lambda)$ amount of time. Once one leaves, person 3 will go to the counter and be served after an Exp$(\lambda)$ amount of time. What is the probability that person 3 is the last to leave the post office?

Solution: After one person leaves, person 3 takes their spot. By the memoryless property, the remaining waiting time of the other original person is again Exp$(\lambda)$, just like that of person 3. By symmetry, the probability that person 3 leaves last is then $1/2$.

Hazard Rate Functions: Let $X$ be a positive, continuous random variable that we think of as the lifetime of something. Let it have distribution function $F$ and density $f$. Then the hazard or failure rate function $\lambda(t)$ of the random variable is defined via
$$\lambda(t) = \frac{f(t)}{\bar F(t)}, \qquad \text{where } \bar F = 1 - F$$
What is this about? Note that
$$P\{X \in (t, t + dt) \mid X > t\} = \frac{P\{X \in (t, t + dt),\, X > t\}}{P\{X > t\}} = \frac{P\{X \in (t, t + dt)\}}{P\{X > t\}} \approx \frac{f(t)}{\bar F(t)}\,dt$$
Therefore, $\lambda(t)$ represents the conditional probability intensity that a $t$-unit-old item will fail. Note that for the exponential distribution we have
$$\lambda(t) = \frac{f(t)}{\bar F(t)} = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda$$
The parameter $\lambda$ is usually referred to as the rate of the distribution. It also turns out that the hazard function $\lambda(t)$ uniquely determines the distribution of a random variable. In fact, we have
$$\lambda(t) = \frac{F'(t)}{1 - F(t)} = -\frac{d}{dt} \ln(1 - F(t))$$
Integrating both sides, we get
$$\ln(1 - F(t)) = -\int_0^t \lambda(s)\,ds + k
\qquad \Longrightarrow \qquad
1 - F(t) = e^k \exp\left(-\int_0^t \lambda(s)\,ds\right)$$
Setting $t = 0$ (note that $F(0) = 0$) shows that $k = 0$. Therefore
$$\bar F(t) = 1 - F(t) = \exp\left(-\int_0^t \lambda(s)\,ds\right)$$

EXAMPLE: Suppose that the life distribution of an item has the hazard rate function
$$\lambda(t) = t^3, \quad t > 0$$
What is the probability that

(a) the item survives to age 2?

(b) the item's lifetime is between 0.4 and 1.4?

(c) the item survives to age 1?

(d) the item that survived to age 1 will survive to age 2?

Solution: We are given $\lambda(t)$. We know that
$$\bar F(t) = \exp\left(-\int_0^t \lambda(s)\,ds\right) = \exp\left(-\int_0^t s^3\,ds\right) = \exp\left\{-t^4/4\right\}$$
Therefore, letting $X$ be the lifetime of an item, we get

(a) $P\{X > 2\} = \bar F(2) = \exp\{-2^4/4\} = e^{-4}$

(b) $P\{0.4 < X < 1.4\} = \bar F(0.4) - \bar F(1.4) = \exp\{-(0.4)^4/4\} - \exp\{-(1.4)^4/4\} \approx 0.6109$

(c) $P\{X > 1\} = \bar F(1) = \exp\{-1/4\} \approx 0.7788$

(d) $P\{X > 2 \mid X > 1\} = \dfrac{P\{X > 2\}}{P\{X > 1\}} = \dfrac{\bar F(2)}{\bar F(1)} = \dfrac{e^{-4}}{e^{-1/4}} = e^{-15/4}$

EXAMPLE: A smoker is said to have a rate of death twice that of a non-smoker at every age. What does this mean?

Solution: Let $\lambda_s(t)$ denote the hazard of a smoker and $\lambda_n(t)$ the hazard of a non-smoker. Then
$$\lambda_s(t) = 2\lambda_n(t)$$
Let $X_n$ and $X_s$ denote the lifetimes of a non-smoker and a smoker, respectively. The probability that a non-smoker of age $A$ will reach age $B > A$ is
$$P\{X_n > B \mid X_n > A\} = \frac{\bar F_n(B)}{\bar F_n(A)} = \frac{\exp\left(-\int_0^B \lambda_n(t)\,dt\right)}{\exp\left(-\int_0^A \lambda_n(t)\,dt\right)} = \exp\left(-\int_A^B \lambda_n(t)\,dt\right)$$
whereas the corresponding probability for a smoker is
$$P\{X_s > B \mid X_s > A\} = \exp\left(-\int_A^B \lambda_s(t)\,dt\right) = \exp\left(-2\int_A^B \lambda_n(t)\,dt\right) = \left[P\{X_n > B \mid X_n > A\}\right]^2$$
Thus, if the probability that a 50-year-old non-smoker reaches 60 is 0.7165, the probability for a smoker is $0.7165^2 = 0.5134$. If the probability that a 50-year-old non-smoker reaches 80 is 0.2, the probability for a smoker is $0.2^2 = 0.04$.
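The reconstruction $\bar F(t) = \exp\left(-\int_0^t \lambda(s)\,ds\right)$ can be verified numerically for the hazard $\lambda(t) = t^3$ above; a sketch:

```python
import math

def survival(t, steps=100_000):
    """exp(-integral_0^t s^3 ds) by midpoint rule; closed form is exp(-t^4/4)."""
    h = t / steps
    integral = sum(((i + 0.5) * h) ** 3 * h for i in range(steps))
    return math.exp(-integral)

p_age2 = survival(2.0)    # exp(-4), part (a)
p_age1 = survival(1.0)    # exp(-1/4), part (c)
p_cond = p_age2 / p_age1  # e^{-15/4}, part (d)
```

Each numerically integrated survival probability matches the closed-form answer from the example to within quadrature error.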

Section 5.6 Other Continuous Distributions

The Gamma distribution: A random variable $X$ is said to have a gamma distribution with parameters $(\alpha, \lambda)$, for $\lambda > 0$ and $\alpha > 0$, if its density function is
$$f(x) = \begin{cases} \dfrac{\lambda e^{-\lambda x} (\lambda x)^{\alpha - 1}}{\Gamma(\alpha)}, & x \ge 0\\ 0, & x < 0 \end{cases}$$
where $\Gamma(\alpha)$ is called the gamma function and is defined by
$$\Gamma(\alpha) = \int_0^{\infty} e^{-y} y^{\alpha - 1}\,dy$$
Note that $\Gamma(1) = 1$. For $\alpha > 1$ we use integration by parts with $u = y^{\alpha-1}$ and $dv = e^{-y}\,dy$ to show that
$$\Gamma(\alpha) = -e^{-y} y^{\alpha - 1}\Big|_{y=0}^{\infty} + (\alpha - 1)\int_0^{\infty} e^{-y} y^{\alpha - 2}\,dy = (\alpha - 1)\Gamma(\alpha - 1)$$
Therefore, for integer values of $n$ we see that
$$\Gamma(n) = (n-1)\Gamma(n-1) = (n-1)(n-2)\Gamma(n-2) = \cdots = (n-1)(n-2)\cdots 3\cdot 2\cdot \Gamma(1) = (n-1)!$$
So it is a generalization of the factorial.

REMARK: One can show that
$$\Gamma\!\left(\tfrac12\right) = \sqrt{\pi}, \quad \Gamma\!\left(\tfrac32\right) = \frac{\sqrt{\pi}}{2}, \quad \Gamma\!\left(\tfrac52\right) = \frac{3\sqrt{\pi}}{4}, \quad \ldots, \quad \Gamma\!\left(\tfrac12 + n\right) = \frac{(2n)!}{4^n\, n!}\sqrt{\pi}, \quad n \in \mathbb{Z}_{\ge 0}$$
The gamma distribution comes up a lot as the sum of independent exponential random variables of parameter $\lambda$. That is, if $X_1, \ldots, X_n$ are independent Exp$(\lambda)$, then $T_n = X_1 + \cdots + X_n$ is gamma$(n, \lambda)$. Recall that the time of the $n$th jump in a Poisson process $N(t)$ is the sum of $n$ exponential random variables of parameter $\lambda$. Thus, the time of the $n$th event follows a gamma$(n, \lambda)$ distribution. To prove all this we denote the time of the $n$th jump as $T_n$ and see that
$$P\{T_n \le t\} = P\{N(t) \ge n\} = \sum_{j=n}^{\infty} P\{N(t) = j\} = \sum_{j=n}^{\infty} e^{-\lambda t} \frac{(\lambda t)^j}{j!}$$
Differentiating both sides, we get a telescoping sum:
$$f_{T_n}(t) = \sum_{j=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^{j-1}}{(j-1)!} - \sum_{j=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^j}{j!} = \lambda e^{-\lambda t} \frac{(\lambda t)^{n-1}}{(n-1)!}$$
which is the density of a gamma$(n, \lambda)$ random variable.

REMARK: Note that, as expected, gamma$(1, \lambda)$ is Exp$(\lambda)$.
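The sum-of-exponentials claim can be spot-checked by simulation: a sum of $n$ independent Exp$(\lambda)$ variables should match the gamma$(n, \lambda)$ mean $n/\lambda$ and variance $n/\lambda^2$. A sketch with $n = 5$, $\lambda = 2$:

```python
import random

random.seed(3)
n_exp, lam = 5, 2.0     # gamma(5, 2): mean 5/2, variance 5/4
trials = 100_000
# Each trial: sum of 5 independent Exp(2) variables
sums = [sum(random.expovariate(lam) for _ in range(n_exp))
        for _ in range(trials)]
mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials
```

Both sample moments land within simulation noise of $5/2$ and $5/4$, consistent with the gamma$(n, \lambda)$ moments $E[X] = \alpha/\lambda$ and $\mathrm{Var}(X) = \alpha/\lambda^2$ derived below.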

We have
$$E[X] = \frac{1}{\Gamma(\alpha)} \int_0^{\infty} \lambda x\, e^{-\lambda x} (\lambda x)^{\alpha - 1}\,dx = \frac{1}{\lambda \Gamma(\alpha)} \int_0^{\infty} \lambda e^{-\lambda x} (\lambda x)^{\alpha}\,dx = \frac{\Gamma(\alpha + 1)}{\lambda \Gamma(\alpha)}$$
so
$$E[X] = \frac{\alpha}{\lambda}$$
Recall that the expected value of Exp$(\lambda)$ is $1/\lambda$, and this ties in nicely with the sum of exponentials being a gamma. It can also be shown that
$$\mathrm{Var}(X) = \frac{\alpha}{\lambda^2}$$

DEFINITION: The gamma distribution with $\lambda = 1/2$ and $\alpha = n/2$, where $n$ is a positive integer, is called the $\chi^2_n$ distribution with $n$ degrees of freedom.

Section 5.7 The Distribution of a Function of a Random Variable

We formalize what we have already done.

EXAMPLE: Let $X$ be uniformly distributed over $(0, 1)$. Let $Y = X^n$. For $0 \le y \le 1$ we have
$$F_Y(y) = P\{Y \le y\} = P\{X^n \le y\} = P\{X \le y^{1/n}\} = F_X(y^{1/n}) = y^{1/n}$$
Therefore, for $y \in (0, 1)$ we have
$$f_Y(y) = \frac{1}{n}\, y^{1/n - 1}$$
and zero otherwise.

THEOREM: Let $X$ be a continuous random variable with density $f_X$. Suppose $g(x)$ is strictly monotone and differentiable. Let $Y = g(X)$. Then
$$f_Y(y) = \begin{cases} f_X(g^{-1}(y)) \left|\dfrac{d}{dy} g^{-1}(y)\right|, & \text{if } y = g(x) \text{ for some } x\\[2mm] 0, & \text{if } y \ne g(x) \text{ for all } x \end{cases}$$
Proof: Suppose $g'(x) > 0$ (the proof in the other case is similar). Suppose $y = g(x)$ for some $x$ (i.e. $y$ is in the range of $Y$); then
$$F_Y(y) = P\{g(X) \le y\} = P\{X \le g^{-1}(y)\} = F_X(g^{-1}(y))$$
Differentiating both sides, we get
$$f_Y(y) = f_X(g^{-1}(y))\, \frac{d}{dy} g^{-1}(y)$$
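A quick Monte Carlo check of the transformation result; a sketch with $g(x) = x^2$ and $X \sim$ Uniform$(0,1)$ (the $n = 2$ case above), where $F_Y(y) = \sqrt{y}$:

```python
import math
import random

random.seed(4)
n = 200_000
# Y = X^2 with X ~ Uniform(0,1); its CDF should be F_Y(y) = sqrt(y)
ys = [random.random() ** 2 for _ in range(n)]
# Empirical CDF at y = 0.25 versus the theoretical sqrt(0.25) = 0.5
emp = sum(y <= 0.25 for y in ys) / n
theory = math.sqrt(0.25)
```

The empirical CDF matches $\sqrt{y}$ to within simulation noise, confirming $f_Y(y) = \tfrac{1}{2} y^{-1/2}$ on $(0,1)$.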


1.1 Introduction, and Review of Probability Theory... 3. 1.1.1 Random Variable, Range, Types of Random Variables... 3. 1.1.2 CDF, PDF, Quantiles...

1.1 Introduction, and Review of Probability Theory... 3. 1.1.1 Random Variable, Range, Types of Random Variables... 3. 1.1.2 CDF, PDF, Quantiles... MATH4427 Notebook 1 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 1 MATH4427 Notebook 1 3 1.1 Introduction, and Review of Probability

More information

What is Statistics? Lecture 1. Introduction and probability review. Idea of parametric inference

What is Statistics? Lecture 1. Introduction and probability review. Idea of parametric inference 0. 1. Introduction and probability review 1.1. What is Statistics? What is Statistics? Lecture 1. Introduction and probability review There are many definitions: I will use A set of principle and procedures

More information

Lecture 8: More Continuous Random Variables

Lecture 8: More Continuous Random Variables Lecture 8: More Continuous Random Variables 26 September 2005 Last time: the eponential. Going from saying the density e λ, to f() λe λ, to the CDF F () e λ. Pictures of the pdf and CDF. Today: the Gaussian

More information

FEGYVERNEKI SÁNDOR, PROBABILITY THEORY AND MATHEmATICAL

FEGYVERNEKI SÁNDOR, PROBABILITY THEORY AND MATHEmATICAL FEGYVERNEKI SÁNDOR, PROBABILITY THEORY AND MATHEmATICAL STATIsTICs 4 IV. RANDOm VECTORs 1. JOINTLY DIsTRIBUTED RANDOm VARIABLEs If are two rom variables defined on the same sample space we define the joint

More information

Math 431 An Introduction to Probability. Final Exam Solutions

Math 431 An Introduction to Probability. Final Exam Solutions Math 43 An Introduction to Probability Final Eam Solutions. A continuous random variable X has cdf a for 0, F () = for 0 <

More information

ISyE 6761 Fall 2012 Homework #2 Solutions

ISyE 6761 Fall 2012 Homework #2 Solutions 1 1. The joint p.m.f. of X and Y is (a) Find E[X Y ] for 1, 2, 3. (b) Find E[E[X Y ]]. (c) Are X and Y independent? ISE 6761 Fall 212 Homework #2 Solutions f(x, ) x 1 x 2 x 3 1 1/9 1/3 1/9 2 1/9 1/18 3

More information

Normal distribution. ) 2 /2σ. 2π σ

Normal distribution. ) 2 /2σ. 2π σ Normal distribution The normal distribution is the most widely known and used of all distributions. Because the normal distribution approximates many natural phenomena so well, it has developed into a

More information

4. Continuous Random Variables, the Pareto and Normal Distributions

4. Continuous Random Variables, the Pareto and Normal Distributions 4. Continuous Random Variables, the Pareto and Normal Distributions A continuous random variable X can take any value in a given range (e.g. height, weight, age). The distribution of a continuous random

More information

TOPIC 4: DERIVATIVES

TOPIC 4: DERIVATIVES TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the

More information

STA 256: Statistics and Probability I

STA 256: Statistics and Probability I Al Nosedal. University of Toronto. Fall 2014 1 2 3 4 5 My momma always said: Life was like a box of chocolates. You never know what you re gonna get. Forrest Gump. Experiment, outcome, sample space, and

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Practice with Proofs

Practice with Proofs Practice with Proofs October 6, 2014 Recall the following Definition 0.1. A function f is increasing if for every x, y in the domain of f, x < y = f(x) < f(y) 1. Prove that h(x) = x 3 is increasing, using

More information

Important Probability Distributions OPRE 6301

Important Probability Distributions OPRE 6301 Important Probability Distributions OPRE 6301 Important Distributions... Certain probability distributions occur with such regularity in real-life applications that they have been given their own names.

More information

Probability Theory. Florian Herzog. A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T..

Probability Theory. Florian Herzog. A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T.. Probability Theory A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T.. Florian Herzog 2013 Probability space Probability space A probability space W is a unique triple W = {Ω, F,

More information

Chapter 3 RANDOM VARIATE GENERATION

Chapter 3 RANDOM VARIATE GENERATION Chapter 3 RANDOM VARIATE GENERATION In order to do a Monte Carlo simulation either by hand or by computer, techniques must be developed for generating values of random variables having known distributions.

More information

APPLIED MATHEMATICS ADVANCED LEVEL

APPLIED MATHEMATICS ADVANCED LEVEL APPLIED MATHEMATICS ADVANCED LEVEL INTRODUCTION This syllabus serves to examine candidates knowledge and skills in introductory mathematical and statistical methods, and their applications. For applications

More information

Scalar Valued Functions of Several Variables; the Gradient Vector

Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions vector valued function of n variables: Let us consider a scalar (i.e., numerical, rather than y = φ(x = φ(x 1,

More information

Manual for SOA Exam MLC.

Manual for SOA Exam MLC. Chapter 5. Life annuities. Extract from: Arcones Manual for the SOA Exam MLC. Spring 2010 Edition. available at http://www.actexmadriver.com/ 1/114 Whole life annuity A whole life annuity is a series of

More information

Math 370, Spring 2008 Prof. A.J. Hildebrand. Practice Test 2 Solutions

Math 370, Spring 2008 Prof. A.J. Hildebrand. Practice Test 2 Solutions Math 370, Spring 008 Prof. A.J. Hildebrand Practice Test Solutions About this test. This is a practice test made up of a random collection of 5 problems from past Course /P actuarial exams. Most of the

More information

MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators...

MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators... MATH4427 Notebook 2 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 2 MATH4427 Notebook 2 3 2.1 Definitions and Examples...................................

More information

Practice Problems for Homework #6. Normal distribution and Central Limit Theorem.

Practice Problems for Homework #6. Normal distribution and Central Limit Theorem. Practice Problems for Homework #6. Normal distribution and Central Limit Theorem. 1. Read Section 3.4.6 about the Normal distribution and Section 4.7 about the Central Limit Theorem. 2. Solve the practice

More information

Stirling s formula, n-spheres and the Gamma Function

Stirling s formula, n-spheres and the Gamma Function Stirling s formula, n-spheres and the Gamma Function We start by noticing that and hence x n e x dx lim a 1 ( 1 n n a n n! e ax dx lim a 1 ( 1 n n a n a 1 x n e x dx (1 Let us make a remark in passing.

More information

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August

More information

General Sampling Methods

General Sampling Methods General Sampling Methods Reference: Glasserman, 2.2 and 2.3 Claudio Pacati academic year 2016 17 1 Inverse Transform Method Assume U U(0, 1) and let F be the cumulative distribution function of a distribution

More information

Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page

Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Errata for ASM Exam C/4 Study Manual (Sixteenth Edition) Sorted by Page 1 Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Practice exam 1:9, 1:22, 1:29, 9:5, and 10:8

More information

correct-choice plot f(x) and draw an approximate tangent line at x = a and use geometry to estimate its slope comment The choices were:

correct-choice plot f(x) and draw an approximate tangent line at x = a and use geometry to estimate its slope comment The choices were: Topic 1 2.1 mode MultipleSelection text How can we approximate the slope of the tangent line to f(x) at a point x = a? This is a Multiple selection question, so you need to check all of the answers that

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete

More information

Solutions for Review Problems

Solutions for Review Problems olutions for Review Problems 1. Let be the triangle with vertices A (,, ), B (4,, 1) and C (,, 1). (a) Find the cosine of the angle BAC at vertex A. (b) Find the area of the triangle ABC. (c) Find a vector

More information

MULTIVARIATE PROBABILITY DISTRIBUTIONS

MULTIVARIATE PROBABILITY DISTRIBUTIONS MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined

More information

M2S1 Lecture Notes. G. A. Young http://www2.imperial.ac.uk/ ayoung

M2S1 Lecture Notes. G. A. Young http://www2.imperial.ac.uk/ ayoung M2S1 Lecture Notes G. A. Young http://www2.imperial.ac.uk/ ayoung September 2011 ii Contents 1 DEFINITIONS, TERMINOLOGY, NOTATION 1 1.1 EVENTS AND THE SAMPLE SPACE......................... 1 1.1.1 OPERATIONS

More information

Topic 8. Chi Square Tests

Topic 8. Chi Square Tests BE540W Chi Square Tests Page 1 of 5 Topic 8 Chi Square Tests Topics 1. Introduction to Contingency Tables. Introduction to the Contingency Table Hypothesis Test of No Association.. 3. The Chi Square Test

More information

MATH 425, PRACTICE FINAL EXAM SOLUTIONS.

MATH 425, PRACTICE FINAL EXAM SOLUTIONS. MATH 45, PRACTICE FINAL EXAM SOLUTIONS. Exercise. a Is the operator L defined on smooth functions of x, y by L u := u xx + cosu linear? b Does the answer change if we replace the operator L by the operator

More information

Random variables, probability distributions, binomial random variable

Random variables, probability distributions, binomial random variable Week 4 lecture notes. WEEK 4 page 1 Random variables, probability distributions, binomial random variable Eample 1 : Consider the eperiment of flipping a fair coin three times. The number of tails that

More information

Math 151. Rumbos Spring 2014 1. Solutions to Assignment #22

Math 151. Rumbos Spring 2014 1. Solutions to Assignment #22 Math 151. Rumbos Spring 2014 1 Solutions to Assignment #22 1. An experiment consists of rolling a die 81 times and computing the average of the numbers on the top face of the die. Estimate the probability

More information

2.6. Probability. In general the probability density of a random variable satisfies two conditions:

2.6. Probability. In general the probability density of a random variable satisfies two conditions: 2.6. PROBABILITY 66 2.6. Probability 2.6.. Continuous Random Variables. A random variable a real-valued function defined on some set of possible outcomes of a random experiment; e.g. the number of points

More information

Introduction to Probability

Introduction to Probability Introduction to Probability EE 179, Lecture 15, Handout #24 Probability theory gives a mathematical characterization for experiments with random outcomes. coin toss life of lightbulb binary data sequence

More information

Chapter 2: Binomial Methods and the Black-Scholes Formula

Chapter 2: Binomial Methods and the Black-Scholes Formula Chapter 2: Binomial Methods and the Black-Scholes Formula 2.1 Binomial Trees We consider a financial market consisting of a bond B t = B(t), a stock S t = S(t), and a call-option C t = C(t), where the

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,

More information

Joint Exam 1/P Sample Exam 1

Joint Exam 1/P Sample Exam 1 Joint Exam 1/P Sample Exam 1 Take this practice exam under strict exam conditions: Set a timer for 3 hours; Do not stop the timer for restroom breaks; Do not look at your notes. If you believe a question

More information

Lecture Notes 1. Brief Review of Basic Probability

Lecture Notes 1. Brief Review of Basic Probability Probability Review Lecture Notes Brief Review of Basic Probability I assume you know basic probability. Chapters -3 are a review. I will assume you have read and understood Chapters -3. Here is a very

More information

VISUALIZATION OF DENSITY FUNCTIONS WITH GEOGEBRA

VISUALIZATION OF DENSITY FUNCTIONS WITH GEOGEBRA VISUALIZATION OF DENSITY FUNCTIONS WITH GEOGEBRA Csilla Csendes University of Miskolc, Hungary Department of Applied Mathematics ICAM 2010 Probability density functions A random variable X has density

More information

Chapter 3: DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. Part 3: Discrete Uniform Distribution Binomial Distribution

Chapter 3: DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. Part 3: Discrete Uniform Distribution Binomial Distribution Chapter 3: DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS Part 3: Discrete Uniform Distribution Binomial Distribution Sections 3-5, 3-6 Special discrete random variable distributions we will cover

More information

STT315 Chapter 4 Random Variables & Probability Distributions KM. Chapter 4.5, 6, 8 Probability Distributions for Continuous Random Variables

STT315 Chapter 4 Random Variables & Probability Distributions KM. Chapter 4.5, 6, 8 Probability Distributions for Continuous Random Variables Chapter 4.5, 6, 8 Probability Distributions for Continuous Random Variables Discrete vs. continuous random variables Examples of continuous distributions o Uniform o Exponential o Normal Recall: A random

More information

Modeling and Analysis of Information Technology Systems

Modeling and Analysis of Information Technology Systems Modeling and Analysis of Information Technology Systems Dr. János Sztrik University of Debrecen, Faculty of Informatics Reviewers: Dr. József Bíró Doctor of the Hungarian Academy of Sciences, Full Professor

More information

Determining distribution parameters from quantiles

Determining distribution parameters from quantiles Determining distribution parameters from quantiles John D. Cook Department of Biostatistics The University of Texas M. D. Anderson Cancer Center P. O. Box 301402 Unit 1409 Houston, TX 77230-1402 USA cook@mderson.org

More information

Notes on the Negative Binomial Distribution

Notes on the Negative Binomial Distribution Notes on the Negative Binomial Distribution John D. Cook October 28, 2009 Abstract These notes give several properties of the negative binomial distribution. 1. Parameterizations 2. The connection between

More information

Survival Distributions, Hazard Functions, Cumulative Hazards

Survival Distributions, Hazard Functions, Cumulative Hazards Week 1 Survival Distributions, Hazard Functions, Cumulative Hazards 1.1 Definitions: The goals of this unit are to introduce notation, discuss ways of probabilistically describing the distribution of a

More information

Lecture 13: Martingales

Lecture 13: Martingales Lecture 13: Martingales 1. Definition of a Martingale 1.1 Filtrations 1.2 Definition of a martingale and its basic properties 1.3 Sums of independent random variables and related models 1.4 Products of

More information

Chapter 5. Random variables

Chapter 5. Random variables Random variables random variable numerical variable whose value is the outcome of some probabilistic experiment; we use uppercase letters, like X, to denote such a variable and lowercase letters, like

More information

6 PROBABILITY GENERATING FUNCTIONS

6 PROBABILITY GENERATING FUNCTIONS 6 PROBABILITY GENERATING FUNCTIONS Certain derivations presented in this course have been somewhat heavy on algebra. For example, determining the expectation of the Binomial distribution (page 5.1 turned

More information

M5A42 APPLIED STOCHASTIC PROCESSES PROBLEM SHEET 1 SOLUTIONS Term 1 2010-2011

M5A42 APPLIED STOCHASTIC PROCESSES PROBLEM SHEET 1 SOLUTIONS Term 1 2010-2011 M5A42 APPLIED STOCHASTIC PROCESSES PROBLEM SHEET 1 SOLUTIONS Term 1 21-211 1. Clculte the men, vrince nd chrcteristic function of the following probbility density functions. ) The exponentil distribution

More information

Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

More information

Exact Confidence Intervals

Exact Confidence Intervals Math 541: Statistical Theory II Instructor: Songfeng Zheng Exact Confidence Intervals Confidence intervals provide an alternative to using an estimator ˆθ when we wish to estimate an unknown parameter

More information

Math 432 HW 2.5 Solutions

Math 432 HW 2.5 Solutions Math 432 HW 2.5 Solutions Assigned: 1-10, 12, 13, and 14. Selected for Grading: 1 (for five points), 6 (also for five), 9, 12 Solutions: 1. (2y 3 + 2y 2 ) dx + (3y 2 x + 2xy) dy = 0. M/ y = 6y 2 + 4y N/

More information

Principle of Data Reduction

Principle of Data Reduction Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

More information

1 Prior Probability and Posterior Probability

1 Prior Probability and Posterior Probability Math 541: Statistical Theory II Bayesian Approach to Parameter Estimation Lecturer: Songfeng Zheng 1 Prior Probability and Posterior Probability Consider now a problem of statistical inference in which

More information

Properties of Future Lifetime Distributions and Estimation

Properties of Future Lifetime Distributions and Estimation Properties of Future Lifetime Distributions and Estimation Harmanpreet Singh Kapoor and Kanchan Jain Abstract Distributional properties of continuous future lifetime of an individual aged x have been studied.

More information

Optimization of Business Processes: An Introduction to Applied Stochastic Modeling. Ger Koole Department of Mathematics, VU University Amsterdam

Optimization of Business Processes: An Introduction to Applied Stochastic Modeling. Ger Koole Department of Mathematics, VU University Amsterdam Optimization of Business Processes: An Introduction to Applied Stochastic Modeling Ger Koole Department of Mathematics, VU University Amsterdam Version of March 30, 2010 c Ger Koole, 1998 2010. These lecture

More information