Section 5.1 Continuous Random Variables: Introduction

Not all random variables are discrete. For example:

1. Waiting times for anything (train, arrival of a customer, production of an mRNA molecule from a gene, etc.).
2. The distance a ball is thrown.
3. The size of an antenna on a bug.

The general idea is that now the sample space is uncountable. Probability mass functions and summation no longer work.

DEFINITION: We say that $X$ is a continuous random variable if the sample space is uncountable and there exists a nonnegative function $f$ defined for all $x \in (-\infty, \infty)$ having the property that for any set $B \subseteq \mathbb{R}$,

$$P\{X \in B\} = \int_B f(x)\,dx$$

The function $f$ is called the probability density function of the random variable $X$, and is (sort of) the analogue of the probability mass function in the discrete case. So probabilities are now found by integration, rather than summation.

REMARK: We must have

$$1 = P\{-\infty < X < \infty\} = \int_{-\infty}^{\infty} f(x)\,dx$$

Also, taking $B = [a, b]$ for $a < b$ we have

$$P\{a \le X \le b\} = \int_a^b f(x)\,dx$$

Note that taking $a = b$ yields the moderately counter-intuitive

$$P\{X = a\} = \int_a^a f(x)\,dx = 0$$

Recall: for discrete random variables the probability mass function can be reconstructed from the distribution function and vice versa, and changes in the distribution function corresponded to where the mass of the probability was. We now have

$$F(t) = P\{X \le t\} = \int_{-\infty}^t f(x)\,dx$$

and so

$$F'(t) = f(t)$$
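A quick numerical sanity check of these definitions (not in the original notes; a sketch using SciPy). It uses the density $f(t) = 3e^{-3t}$, $t \ge 0$, from the example in the next part, verifies that it integrates to one, and confirms that an interval of zero width carries zero probability.

```python
# Sanity check: a density integrates to 1, probabilities are integrals,
# and P{X = a} = 0 for a continuous random variable.
import numpy as np
from scipy.integrate import quad

def f(t):
    return 3 * np.exp(-3 * t)  # density of Exp(3); we integrate only over t >= 0

total, _ = quad(f, 0, np.inf)   # integral over the support
p_12, _ = quad(f, 1, 2)         # P{1 <= X <= 2}
p_pt, _ = quad(f, 1, 1)         # P{X = 1}: integral over [1, 1]

print(total)  # ~1.0
print(p_12)   # ~0.0473, which is e^{-3} - e^{-6}
print(p_pt)   # 0.0
```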

agreeing with the previous interpretation. Also,

$$P(X \in (a,b)) = P(X \in [a,b]) = \cdots = \int_a^b f(t)\,dt = F(b) - F(a)$$

The density function does not itself represent a probability. However, its integral gives the probability of being in certain regions of $\mathbb{R}$. Also, $f(a)$ gives a measure of the likelihood of being around $a$. That is,

$$P\{a - \epsilon/2 < X < a + \epsilon/2\} = \int_{a-\epsilon/2}^{a+\epsilon/2} f(t)\,dt \approx f(a)\epsilon$$

when $\epsilon$ is small and $f$ is continuous at $a$. Thus, the probability that $X$ will be in an interval around $a$ of size $\epsilon$ is approximately $\epsilon f(a)$. So, if $f(a) < f(b)$, then

$$P(a - \epsilon/2 < X < a + \epsilon/2) \approx \epsilon f(a) < \epsilon f(b) \approx P(b - \epsilon/2 < X < b + \epsilon/2)$$

In other words, $f(a)$ is a measure of how likely it is that the random variable takes a value near $a$.

EXAMPLES:

1. The amount of time you must wait, in minutes, for the appearance of an mRNA molecule is a continuous random variable with density

$$f(t) = \begin{cases} \lambda e^{-3t}, & t \ge 0 \\ 0, & t < 0 \end{cases}$$

What is the probability that

1. You will have to wait between 1 and 2 minutes?
2. You will have to wait longer than 1/2 minute?

Solution: First, we haven't been given $\lambda$. We need

$$1 = \int_{-\infty}^{\infty} f(t)\,dt = \int_0^{\infty} \lambda e^{-3t}\,dt = -\frac{\lambda}{3} e^{-3t}\Big|_{t=0}^{\infty} = \frac{\lambda}{3}$$

Therefore, $\lambda = 3$, and $f(t) = 3e^{-3t}$ for $t \ge 0$. Thus,

$$P\{1 < X < 2\} = \int_1^2 3e^{-3t}\,dt = e^{-3} - e^{-6}$$

$$P\{X > 1/2\} = \int_{1/2}^{\infty} 3e^{-3t}\,dt = e^{-3/2}$$
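A Monte Carlo cross-check of this example (not in the original notes; a sketch):

```python
# Simulate X ~ Exp(3) and compare empirical frequencies with the
# closed-form answers e^{-3} - e^{-6} and e^{-3/2}.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1/3, size=10**6)  # NumPy parametrizes by the mean 1/lambda

print(np.mean((x > 1) & (x < 2)), np.exp(-3) - np.exp(-6))  # both ~0.0473
print(np.mean(x > 0.5), np.exp(-1.5))                       # both ~0.2231
```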

2. If $X$ is a continuous random variable with distribution function $F_X$ and density $f_X$, find the density function of $Y = kX$.

Solution: We'll do this two ways (as in the book). We have

$$F_Y(t) = P\{Y \le t\} = P\{kX \le t\} = P\{X \le t/k\} = F_X(t/k)$$

Differentiating with respect to $t$ yields

$$f_Y(t) = \frac{d}{dt} F_X(t/k) = \frac{1}{k} f_X(t/k)$$

Another derivation is the following. We have

$$\epsilon f_Y(a) \approx P\{a - \epsilon/2 \le Y \le a + \epsilon/2\} = P\{a - \epsilon/2 \le kX \le a + \epsilon/2\} = P\{a/k - \epsilon/(2k) \le X \le a/k + \epsilon/(2k)\} \approx \frac{\epsilon}{k} f_X(a/k)$$

Dividing by $\epsilon$ yields the same result as before.

Returning to our previous example with $f_X(t) = 3e^{-3t}$: if $Y = 3X$, then

$$f_Y(t) = \frac{1}{3} f_X(t/3) = e^{-t}, \quad t \ge 0.$$

What if $Y = kX + b$? We have

$$F_Y(t) = P\{Y \le t\} = P\{kX + b \le t\} = P\{X \le (t-b)/k\} = F_X\left(\frac{t-b}{k}\right)$$

Differentiating with respect to $t$ yields

$$f_Y(t) = \frac{d}{dt} F_X\left(\frac{t-b}{k}\right) = \frac{1}{k} f_X\left(\frac{t-b}{k}\right)$$
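A simulation check of the scaling formula (not in the original notes; a sketch): with $X \sim \mathrm{Exp}(3)$ and $Y = 3X$, the derived density is $f_Y(t) = e^{-t}$.

```python
# Compare a histogram-based density estimate of Y = 3X against e^{-t}.
import numpy as np

rng = np.random.default_rng(1)
y = 3 * rng.exponential(scale=1/3, size=10**6)

hist, edges = np.histogram(y, bins=50, range=(0, 5), density=True)
mids = (edges[:-1] + edges[1:]) / 2
for t, h in list(zip(mids, hist))[::10]:
    print(f"t={t:.2f}  empirical={h:.4f}  e^(-t)={np.exp(-t):.4f}")
```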

Section 5.2 Expectation and Variance of Continuous Random Variables

DEFINITION: If $X$ is a random variable with density function $f$, then the expected value is

$$E[X] = \int_{-\infty}^{\infty} x f(x)\,dx$$

This is analogous to the discrete case:

1. Discretize $X$ into small ranges $(x_{i-1}, x_i]$ where $x_i - x_{i-1} = h$ is small.
2. Now think of $X$ as discrete with the values $x_i$.
3. Then,

$$E[X] \approx \sum_i x_i\, p(x_i) = \sum_i x_i\, P(x_{i-1} < X \le x_i) \approx \sum_i x_i f(x_i)\, h \approx \int x f(x)\,dx$$

EXAMPLES:

1. Suppose that $X$ has the density function

$$f(x) = \frac{1}{2} x, \quad 0 \le x \le 2$$

and zero otherwise. Find $E[X]$.

Solution: We have

$$E[X] = \int_{-\infty}^{\infty} x f(x)\,dx = \int_0^2 x \cdot \frac{1}{2} x\,dx = \frac{1}{6} x^3 \Big|_0^2 = \frac{8}{6} = \frac{4}{3}$$
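A one-line numerical confirmation of this example (not in the original notes; a sketch):

```python
# E[X] = integral of x * f(x) for f(x) = x/2 on [0, 2]; should be 4/3.
from scipy.integrate import quad

mean, _ = quad(lambda x: x * (x / 2), 0, 2)
print(mean)  # ~1.3333
```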

2. Suppose that $f_X(x) = 1$ for $x \in (0, 1)$. Find $E[e^X]$.

Solution: Let $Y = e^X$. We need to find the density of $Y$; then we can use the definition of expectation. Because the range of $X$ is $(0,1)$, the range of $Y$ is $(1, e)$. For $1 \le x \le e$ we have

$$F_Y(x) = P\{Y \le x\} = P\{e^X \le x\} = P\{X \le \ln x\} = F_X(\ln x) = \int_0^{\ln x} f_X(t)\,dt = \ln x$$

Differentiating both sides, we get $f_Y(x) = 1/x$ for $1 \le x \le e$ and zero otherwise. Thus,

$$E[e^X] = E[Y] = \int_{-\infty}^{\infty} x f_Y(x)\,dx = \int_1^e x \cdot \frac{1}{x}\,dx = e - 1$$

As in the discrete case, there is a theorem making these computations much easier.

THEOREM: Let $X$ be a continuous random variable with density function $f$. Then for any $g: \mathbb{R} \to \mathbb{R}$ we have

$$E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx$$

Note how easy this makes the previous example:

$$E[e^X] = \int_0^1 e^x\,dx = e - 1$$

To prove the theorem in the special case that $g(x) \ge 0$ we need the following:

LEMMA: For a nonnegative random variable $Y$,

$$E[Y] = \int_0^{\infty} P\{Y > y\}\,dy$$

Proof: We have

$$\int_0^{\infty} P\{Y > y\}\,dy = \int_0^{\infty} \int_y^{\infty} f_Y(x)\,dx\,dy = \int_0^{\infty} f_Y(x) \int_0^x dy\,dx = \int_0^{\infty} x f_Y(x)\,dx = E[Y]$$

Proof of Theorem: For any nonnegative function $g: \mathbb{R} \to \mathbb{R}$ (the general case is similar) we have

$$E[g(X)] = \int_0^{\infty} P\{g(X) > y\}\,dy = \int_0^{\infty} \int_{x:\, g(x) > y} f(x)\,dx\,dy = \iint_{\{(x,y):\, g(x) > y\}} f(x)\,dx\,dy$$
$$= \int_{x:\, g(x) > 0} f(x) \int_0^{g(x)} dy\,dx = \int_{x:\, g(x) > 0} f(x) g(x)\,dx = \int_{-\infty}^{\infty} f(x) g(x)\,dx$$
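Both routes to $E[e^X]$ can be checked numerically (not in the original notes; a sketch):

```python
# E[e^X] for X uniform on (0,1): via the density of Y = e^X, and via
# the theorem above (the "law of the unconscious statistician").
import numpy as np
from scipy.integrate import quad

via_density, _ = quad(lambda y: y * (1 / y), 1, np.e)  # integral of y * f_Y(y), f_Y(y) = 1/y
via_lotus, _ = quad(lambda x: np.exp(x), 0, 1)         # integral of e^x * f_X(x)
print(via_density, via_lotus, np.e - 1)                # all ~1.7183
```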

Of course, it immediately follows from the above theorem that for any constants $a$ and $b$ we have

$$E[aX + b] = aE[X] + b$$

In fact, putting $g(x) = ax + b$ in the previous theorem, we get

$$E[aX + b] = \int_{-\infty}^{\infty} (ax + b) f_X(x)\,dx = a \int_{-\infty}^{\infty} x f_X(x)\,dx + b \int_{-\infty}^{\infty} f_X(x)\,dx = aE[X] + b$$

Thus, expected values inherit their linearity from the linearity of the integral.

DEFINITION: If $X$ is a random variable with mean $\mu$, then the variance and standard deviation are given by

$$\text{Var}(X) = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx, \qquad \sigma_X = \sqrt{\text{Var}(X)}$$

We also still have

$$\text{Var}(X) = E[X^2] - (E[X])^2, \qquad \text{Var}(aX + b) = a^2\,\text{Var}(X), \qquad \sigma_{aX+b} = |a|\,\sigma_X$$

The proofs are exactly the same as in the discrete case.

EXAMPLE: Consider again $X$ with the density function

$$f(x) = \frac{1}{2} x, \quad 0 \le x \le 2$$

Find $\text{Var}(X)$.

Solution: Recall that $E[X] = 4/3$. We now find the second moment:

$$E[X^2] = \int_{-\infty}^{\infty} x^2 f(x)\,dx = \int_0^2 x^2 \cdot \frac{1}{2} x\,dx = \frac{1}{8} x^4 \Big|_0^2 = \frac{16}{8} = 2$$

Therefore,

$$\text{Var}(X) = E[X^2] - E[X]^2 = 2 - (4/3)^2 = \frac{18}{9} - \frac{16}{9} = \frac{2}{9}$$
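A Monte Carlo check of this example (not in the original notes; a sketch). Samples from $f(x) = x/2$ on $[0,2]$ can be drawn by inverse transform: $F(x) = x^2/4$, so $X = 2\sqrt{U}$ with $U$ uniform on $(0,1)$.

```python
# Verify E[X] = 4/3 and Var(X) = 2/9 by inverse-transform sampling.
import numpy as np

rng = np.random.default_rng(2)
x = 2 * np.sqrt(rng.uniform(size=10**6))  # F^{-1}(u) = 2 sqrt(u)

print(x.mean(), 4/3)  # ~1.3333
print(x.var(), 2/9)   # ~0.2222
```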

Section 5.3 The Uniform Random Variable

We first consider the interval $(0,1)$ and say that a random variable $X$ is uniformly distributed over $(0,1)$ if its density function is

$$f(x) = \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{else} \end{cases}$$

Note that

$$\int_{-\infty}^{\infty} f(x)\,dx = \int_0^1 dx = 1$$

Also, as $f(x) = 0$ outside of $(0,1)$, $X$ only takes values in $(0,1)$. As $f$ is constant there, $X$ is equally likely to be near each number. That is, each small region of size $\epsilon$ is equally likely to contain $X$ (and has a probability of $\epsilon$ of doing so). Note that for any $0 \le a \le b \le 1$ we have

$$P\{a \le X \le b\} = \int_a^b f(x)\,dx = b - a$$

General Case: Now consider an interval $(a, b)$. We say that $X$ is uniformly distributed on $(a,b)$ if

$$f(t) = \begin{cases} \dfrac{1}{b-a}, & a < t < b \\ 0, & \text{else} \end{cases} \qquad \text{and} \qquad F(t) = \begin{cases} 0, & t < a \\ \dfrac{t-a}{b-a}, & a \le t < b \\ 1, & t \ge b \end{cases}$$

We can compute the expected value and variance straightaway:

$$E[X] = \int_a^b \frac{x}{b-a}\,dx = \frac{1}{2(b-a)}(b^2 - a^2) = \frac{b+a}{2}$$

Similarly,

$$E[X^2] = \int_a^b \frac{x^2}{b-a}\,dx = \frac{b^3 - a^3}{3(b-a)} = \frac{b^2 + ab + a^2}{3}$$

Therefore,

$$\text{Var}(X) = E[X^2] - E[X]^2 = \frac{b^2 + ab + a^2}{3} - \left(\frac{b+a}{2}\right)^2 = \frac{(b-a)^2}{12}$$
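These formulas are easy to confirm by simulation (not in the original notes; a sketch):

```python
# Check E[X] = (a+b)/2 and Var(X) = (b-a)^2/12 for X uniform on (a, b).
import numpy as np

rng = np.random.default_rng(3)
a, b = -1.0, 7.0
x = rng.uniform(a, b, size=10**6)

print(x.mean(), (a + b) / 2)     # ~3.0
print(x.var(), (b - a)**2 / 12)  # ~5.3333
```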

EXAMPLE: Consider a uniform random variable with density

$$f(x) = \begin{cases} \dfrac{1}{8}, & x \in (-1, 7) \\ 0, & \text{else} \end{cases}$$

Find $P\{X < 2\}$.

Solution: The range of $X$ is $(-1, 7)$. Thus, we have

$$P\{X < 2\} = \int_{-\infty}^2 f(x)\,dx = \int_{-1}^2 \frac{1}{8}\,dx = \frac{3}{8}$$

Section 5.4 Normal Random Variables

We say that $X$ is a normal random variable, or simply that $X$ is normal, with parameters $\mu$ and $\sigma^2$ if the density is given by

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

This density is a bell shaped curve and is symmetric around $\mu$.

It should NOT be apparent that this is a density. We need that it integrates to one. Making the substitution $y = (x - \mu)/\sigma$, we have

$$\int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-y^2/2}\,dy$$

We will show that the integral is equal to $\sqrt{2\pi}$. To do so, we actually compute the square of the integral. Define

$$I = \int_{-\infty}^{\infty} e^{-y^2/2}\,dy$$

Then,

$$I^2 = \int_{-\infty}^{\infty} e^{-y^2/2}\,dy \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy$$

Now we switch to polar coordinates. That is,

$$x = r\cos\theta, \qquad y = r\sin\theta, \qquad dx\,dy = r\,d\theta\,dr$$

Thus,

$$I^2 = \int_0^{\infty}\int_0^{2\pi} e^{-r^2/2}\, r\,d\theta\,dr = \int_0^{\infty} 2\pi e^{-r^2/2}\, r\,dr = -2\pi e^{-r^2/2}\Big|_{r=0}^{\infty} = 2\pi$$

Scaling a normal random variable gives you another normal random variable.

THEOREM: Let $X$ be Normal$(\mu, \sigma^2)$. Then $Y = aX + b$ is a normal random variable with parameters $a\mu + b$ and $a^2\sigma^2$.

Proof: We let $a > 0$ (the proof for $a < 0$ is similar). We first consider the distribution function of $Y$. We have

$$F_Y(t) = P\{Y \le t\} = P\{aX + b \le t\} = P\{X \le (t-b)/a\} = F_X\left(\frac{t-b}{a}\right)$$

Differentiating both sides, we get

$$f_Y(t) = \frac{1}{a} f_X\left(\frac{t-b}{a}\right) = \frac{1}{\sigma a \sqrt{2\pi}} \exp\left\{-\frac{\left(\frac{t-b}{a} - \mu\right)^2}{2\sigma^2}\right\} = \frac{1}{a\sigma\sqrt{2\pi}} \exp\left\{-\frac{(t - b - a\mu)^2}{2(a\sigma)^2}\right\}$$

which is the density of a normal random variable with parameters $a\mu + b$ and $a^2\sigma^2$.

From the above theorem it follows that if $X$ is Normal$(\mu, \sigma^2)$, then

$$Z = \frac{X - \mu}{\sigma}$$

is normally distributed with parameters $0$ and $1$. Such a random variable is called a standard normal random variable. Note that its density is simply

$$f_Z(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$$
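A quick numerical confirmation of the Gaussian integral (not in the original notes; a sketch):

```python
# The integral of e^{-y^2/2} over the real line should equal sqrt(2*pi).
import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda y: np.exp(-y**2 / 2), -np.inf, np.inf)
print(I, np.sqrt(2 * np.pi))  # both ~2.5066
```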

By the above scaling, we see that we can compute the mean and variance of all normal random variables if we can compute them for the standard normal. In fact, we have

$$E[Z] = \int_{-\infty}^{\infty} x f_Z(x)\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x e^{-x^2/2}\,dx = -\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\Big|_{-\infty}^{\infty} = 0$$

and

$$\text{Var}(Z) = E[Z^2] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^2 e^{-x^2/2}\,dx$$

Integration by parts with $u = x$ and $dv = x e^{-x^2/2}\,dx$ yields $\text{Var}(Z) = 1$.

Therefore, we see that if $X = \mu + \sigma Z$, then $X$ is a normal random variable with parameters $\mu$, $\sigma^2$ and

$$E[X] = \mu, \qquad \text{Var}(X) = \sigma^2\,\text{Var}(Z) = \sigma^2$$

NOTATION: Typically, we denote $\Phi(x) = P\{Z \le x\}$ for a standard normal $Z$. That is,

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-y^2/2}\,dy$$

Note that by the symmetry of the density we have

$$\Phi(-x) = 1 - \Phi(x)$$

Therefore, it is customary to provide charts with the values of $\Phi(x)$ for $x$ in $(0, 3.5)$ in increments of 0.01 (see the chart in the book). Note that if $X \sim$ Normal$(\mu, \sigma^2)$, then $\Phi$ can still be used:

$$F_X(a) = P\{X \le a\} = P\{(X - \mu)/\sigma \le (a - \mu)/\sigma\} = \Phi\left(\frac{a - \mu}{\sigma}\right)$$
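In software the chart is unnecessary (not in the original notes; a sketch using SciPy, where `norm.cdf` plays the role of $\Phi$):

```python
# Phi(-x) = 1 - Phi(x), and F_X(a) = Phi((a - mu)/sigma) for X ~ N(mu, sigma^2).
from scipy.stats import norm

print(norm.cdf(-0.5), 1 - norm.cdf(0.5))  # both ~0.3085

mu, sigma = 3, 3                          # i.e. X ~ Normal(3, 9)
print(norm.cdf((5 - mu) / sigma))         # Phi(2/3) ~0.7475
```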

EXAMPLE: Let $X \sim$ Normal$(3, 9)$, i.e. $\mu = 3$ and $\sigma^2 = 9$. Find $P\{2 < X < 5\}$.

Solution: We have, where $Z$ is a standard normal,

$$P\{2 < X < 5\} = P\left\{\frac{2-3}{3} < Z < \frac{5-3}{3}\right\} = P\{-1/3 < Z < 2/3\} = \Phi(2/3) - \Phi(-1/3)$$
$$= \Phi(2/3) - (1 - \Phi(1/3)) \approx \Phi(0.66) + \Phi(0.33) - 1 = 0.7454 + 0.6293 - 1 = 0.3747$$

We can use the normal random variable to approximate a binomial. This is a special case of the Central Limit Theorem.

THEOREM (The DeMoivre-Laplace limit theorem): Let $S_n$ denote the number of successes that occur when $n$ independent trials, each with probability $p$ of success, are performed. Then for any $a < b$ we have

$$P\left\{a \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b\right\} \to \Phi(b) - \Phi(a)$$

A rule of thumb is that this is a good approximation when $np(1-p) \ge 10$. Note that there is no assumption of $p$ being small, unlike in the Poisson approximation.

EXAMPLE: Suppose that a biased coin is flipped 70 times and that the probability of heads is 0.4. Approximate the probability that there are 30 heads. Compare with the actual answer. What can you say about the probability that there are between 20 and 40 heads?

Solution: Let $X$ be the number of times the coin lands on heads. We first note that a normal random variable is continuous, whereas the binomial is discrete. Thus (and this is called the continuity correction), we do the following:

$$P\{X = 30\} = P\{29.5 < X < 30.5\} = P\left\{\frac{29.5 - 28}{\sqrt{16.8}} < \frac{X - 28}{\sqrt{16.8}} < \frac{30.5 - 28}{\sqrt{16.8}}\right\}$$
$$\approx P\{0.366 < Z < 0.61\} \approx \Phi(0.61) - \Phi(0.37) = 0.0848$$

whereas

$$P\{X = 30\} = \binom{70}{30}(0.4)^{30}(0.6)^{40} \approx 0.0853$$

We now evaluate the probability that there are between 20 and 40 heads:

$$P\{20 \le X \le 40\} = P\{19.5 < X < 40.5\} \approx P\{-2.07 < Z < 3.05\} \approx \Phi(3.05) - \Phi(-2.07)$$
$$= \Phi(3.05) - (1 - \Phi(2.07)) \approx 0.9989 - (1 - 0.9808) = 0.9797$$
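The same computation with exact binomial probabilities (not in the original notes; a sketch):

```python
# Exact binomial vs. normal approximation with continuity correction.
import numpy as np
from scipy.stats import binom, norm

n, p = 70, 0.4
mu, sd = n * p, np.sqrt(n * p * (1 - p))  # 28, sqrt(16.8)

print(binom.pmf(30, n, p))                                      # ~0.0853
print(norm.cdf((30.5 - mu) / sd) - norm.cdf((29.5 - mu) / sd))  # ~0.0863 (the notes round the z-values and get 0.0848)

print(binom.cdf(40, n, p) - binom.cdf(19, n, p))                # ~0.98
print(norm.cdf((40.5 - mu) / sd) - norm.cdf((19.5 - mu) / sd))  # ~0.98
```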

Section 5.5 Exponential Random Variables

Exponential random variables arise as the distribution of the amount of time until some specific event occurs. A continuous random variable with the density function

$$f(x) = \lambda e^{-\lambda x} \text{ for } x \ge 0 \text{ and zero otherwise,}$$

for some $\lambda > 0$, is called an exponential random variable with parameter $\lambda$. The distribution function for $t \ge 0$ is

$$F(t) = P\{X \le t\} = \int_0^t \lambda e^{-\lambda x}\,dx = -e^{-\lambda x}\Big|_{x=0}^t = 1 - e^{-\lambda t}$$

Computing the moments of an exponential is easy with integration by parts. For $n > 0$:

$$E[X^n] = \int_0^{\infty} x^n \lambda e^{-\lambda x}\,dx = -x^n e^{-\lambda x}\Big|_{x=0}^{\infty} + \int_0^{\infty} n x^{n-1} e^{-\lambda x}\,dx = \frac{n}{\lambda} \int_0^{\infty} x^{n-1} \lambda e^{-\lambda x}\,dx = \frac{n}{\lambda} E[X^{n-1}]$$

Therefore, for $n = 1$ and $n = 2$ we see that

$$E[X] = \frac{1}{\lambda}, \qquad E[X^2] = \frac{2}{\lambda} E[X] = \frac{2}{\lambda^2}, \qquad \ldots, \qquad E[X^n] = \frac{n!}{\lambda^n}$$

and

$$\text{Var}(X) = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}$$

EXAMPLE: Suppose the length of a phone call in minutes is an exponential random variable with parameter $\lambda = 1/10$ (so the average call is 10 minutes). What is the probability that a random call will take
(a) more than 8 minutes?
(b) between 8 and 22 minutes?

Solution: Let $X$ be the length of the call. We have

$$P\{X > 8\} = 1 - F(8) = 1 - \left(1 - e^{-(1/10)\cdot 8}\right) = e^{-0.8} \approx 0.4493$$

$$P\{8 < X < 22\} = F(22) - F(8) = e^{-0.8} - e^{-2.2} \approx 0.3385$$

Memoryless property: We say that a nonnegative random variable $X$ is memoryless if

$$P\{X > s + t \mid X > t\} = P\{X > s\} \quad \text{for all } s, t \ge 0$$

Note that this property is similar to the one for the geometric random variable. Let us show that an exponential random variable satisfies the memoryless property. In fact, we have

$$P\{X > s + t \mid X > t\} = \frac{P\{X > s + t\}}{P\{X > t\}} = \frac{e^{-\lambda(s+t)}}{e^{-\lambda t}} = e^{-\lambda s} = P\{X > s\}$$

One can show that the exponential is the only continuous distribution with this property.
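A simulation check of the memoryless property (not in the original notes; a sketch):

```python
# For X ~ Exp(1/10): P{X > s+t | X > t} should match P{X > s} = e^{-s/10}.
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=10, size=10**6)  # lambda = 1/10, mean 10

s, t = 8, 5
cond = np.mean(x[x > t] > s + t)   # empirical P{X > s+t | X > t}
uncond = np.mean(x > s)            # empirical P{X > s}
print(cond, uncond, np.exp(-0.8))  # all ~0.4493
```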

EXAMPLE: Three people are in a post office. Persons 1 and 2 are at the counter and will each leave after an Exp$(\lambda)$ amount of time. Once one leaves, person 3 will go to the counter and be served after an Exp$(\lambda)$ amount of time. What is the probability that person 3 is the last to leave the post office?

Solution: After one person leaves, person 3 takes his spot. Next, by the memoryless property, the waiting time for the remaining person is again Exp$(\lambda)$, just like person 3's. By symmetry, the probability is then 1/2.

Hazard Rate Functions: Let $X$ be a positive, continuous random variable that we think of as the lifetime of something. Let it have distribution function $F$ and density $f$. Then the hazard or failure rate function $\lambda(t)$ of the random variable is defined via

$$\lambda(t) = \frac{f(t)}{\bar{F}(t)}, \qquad \text{where } \bar{F} = 1 - F$$

What is this about? Note that

$$P\{X \in (t, t + dt) \mid X > t\} = \frac{P\{X \in (t, t+dt),\, X > t\}}{P\{X > t\}} = \frac{P\{X \in (t, t+dt)\}}{P\{X > t\}} \approx \frac{f(t)}{\bar{F}(t)}\,dt$$

Therefore, $\lambda(t)$ represents the conditional probability intensity that a $t$-unit-old item will fail. Note that for the exponential distribution we have

$$\lambda(t) = \frac{f(t)}{\bar{F}(t)} = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda$$

The parameter $\lambda$ is usually referred to as the rate of the distribution. It also turns out that the hazard function $\lambda(t)$ uniquely determines the distribution of a random variable. In fact, we have

$$\lambda(t) = \frac{\frac{d}{dt}F(t)}{1 - F(t)}$$

Integrating both sides, we get

$$\ln(1 - F(t)) = -\int_0^t \lambda(s)\,ds + k$$

Thus,

$$1 - F(t) = e^k \exp\left\{-\int_0^t \lambda(s)\,ds\right\}$$

Setting $t = 0$ (note that $F(0) = 0$) shows that $k = 0$. Therefore,

$$\bar{F}(t) = 1 - F(t) = \exp\left\{-\int_0^t \lambda(s)\,ds\right\}$$

EXAMPLE: Suppose that the life distribution of an item has the hazard rate function

$$\lambda(t) = t^3, \quad t > 0$$

What is the probability that
(a) the item survives to age 2?
(b) the item's lifetime is between 0.4 and 1.4?
(c) the item survives to age 1?
(d) an item that survived to age 1 will survive to age 2?

Solution: We are given $\lambda(t)$. We know that

$$\bar{F}(t) = \exp\left\{-\int_0^t \lambda(s)\,ds\right\} = \exp\left\{-\int_0^t s^3\,ds\right\} = \exp\left\{-t^4/4\right\}$$

Therefore, letting $X$ be the lifetime of the item, we get

(a) $P\{X > 2\} = \bar{F}(2) = \exp\{-2^4/4\} = e^{-4}$

(b) $P\{0.4 < X < 1.4\} = \bar{F}(0.4) - \bar{F}(1.4) = \exp\{-(0.4)^4/4\} - \exp\{-(1.4)^4/4\} \approx 0.6109$

(c) $P\{X > 1\} = \bar{F}(1) = \exp\{-1/4\} \approx 0.7788$

(d) $P\{X > 2 \mid X > 1\} = \dfrac{P\{X > 2\}}{P\{X > 1\}} = \dfrac{\bar{F}(2)}{\bar{F}(1)} = \dfrac{e^{-4}}{e^{-1/4}} = e^{-15/4}$

EXAMPLE: A smoker is said to have a rate of death twice that of a non-smoker at every age. What does this mean?

Solution: Let $\lambda_s(t)$ denote the hazard of a smoker and $\lambda_n(t)$ the hazard of a non-smoker. Then

$$\lambda_s(t) = 2\lambda_n(t)$$

Let $X_n$ and $X_s$ denote the lifetimes of a non-smoker and a smoker, respectively. The probability that a non-smoker of age $A$ will reach age $B > A$ is

$$P\{X_n > B \mid X_n > A\} = \frac{\bar{F}_n(B)}{\bar{F}_n(A)} = \frac{\exp\left\{-\int_0^B \lambda_n(t)\,dt\right\}}{\exp\left\{-\int_0^A \lambda_n(t)\,dt\right\}} = \exp\left\{-\int_A^B \lambda_n(t)\,dt\right\}$$

whereas the corresponding probability for a smoker is

$$P\{X_s > B \mid X_s > A\} = \exp\left\{-\int_A^B \lambda_s(t)\,dt\right\} = \exp\left\{-2\int_A^B \lambda_n(t)\,dt\right\} = \left[\exp\left\{-\int_A^B \lambda_n(t)\,dt\right\}\right]^2 = \left[P\{X_n > B \mid X_n > A\}\right]^2$$

Thus, if the probability that a 50 year old non-smoker reaches 60 is 0.7165, the probability for a smoker is $0.7165^2 \approx 0.5134$. If the probability that a 50 year old non-smoker reaches 80 is 0.2, the probability for a smoker is $0.2^2 = 0.04$.
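A numerical re-check of the $\lambda(t) = t^3$ example above (not in the original notes; a sketch):

```python
# Survival function from the hazard lambda(t) = t^3: Fbar(t) = exp(-t^4/4).
import numpy as np
from scipy.integrate import quad

def survival(t):
    integral, _ = quad(lambda s: s**3, 0, t)  # integral of the hazard over [0, t]
    return np.exp(-integral)

print(survival(2), np.exp(-4))                   # (a) ~0.0183
print(survival(0.4) - survival(1.4))             # (b) ~0.6109
print(survival(1))                               # (c) ~0.7788
print(survival(2) / survival(1), np.exp(-15/4))  # (d) ~0.0235
```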

Section 5.6 Other Continuous Distributions

The Gamma distribution: A random variable $X$ is said to have a gamma distribution with parameters $(\alpha, \lambda)$, for $\lambda > 0$ and $\alpha > 0$, if its density function is

$$f(x) = \begin{cases} \dfrac{\lambda e^{-\lambda x}(\lambda x)^{\alpha - 1}}{\Gamma(\alpha)}, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

where $\Gamma(x)$ is called the gamma function and is defined by

$$\Gamma(x) = \int_0^{\infty} e^{-y} y^{x-1}\,dy$$

Note that $\Gamma(1) = 1$. For $x > 1$ we use integration by parts with $u = y^{x-1}$ and $dv = e^{-y}\,dy$ to show that

$$\Gamma(x) = -e^{-y} y^{x-1}\Big|_{y=0}^{\infty} + (x-1)\int_0^{\infty} e^{-y} y^{x-2}\,dy = (x-1)\Gamma(x-1)$$

Therefore, for integer values of $n$ we see that

$$\Gamma(n) = (n-1)\Gamma(n-1) = (n-1)(n-2)\Gamma(n-2) = \cdots = (n-1)(n-2)\cdots 3\cdot 2\cdot\Gamma(1) = (n-1)!$$

So it is a generalization of the factorial.

REMARK: One can show that

$$\Gamma\left(\tfrac{1}{2}\right) = \sqrt{\pi}, \quad \Gamma\left(\tfrac{3}{2}\right) = \tfrac{1}{2}\sqrt{\pi}, \quad \Gamma\left(\tfrac{5}{2}\right) = \tfrac{3}{4}\sqrt{\pi}, \quad \ldots, \quad \Gamma\left(\tfrac{1}{2} + n\right) = \frac{(2n)!}{4^n n!}\sqrt{\pi}, \quad n \in \mathbb{Z}_{\ge 0}$$

The gamma distribution comes up a lot as the sum of independent exponential random variables with parameter $\lambda$. That is, if $X_1, \ldots, X_n$ are independent Exp$(\lambda)$, then $T_n = X_1 + \cdots + X_n$ is gamma$(n, \lambda)$.

Recall that the time of the $n$th jump in a Poisson process $N(t)$ is the sum of $n$ exponential random variables with parameter $\lambda$. Thus, the time of the $n$th event follows a gamma$(n, \lambda)$ distribution. To prove all this we denote the time of the $n$th jump by $T_n$ and see that

$$P\{T_n \le t\} = P\{N(t) \ge n\} = \sum_{j=n}^{\infty} P\{N(t) = j\} = \sum_{j=n}^{\infty} e^{-\lambda t} \frac{(\lambda t)^j}{j!}$$

Differentiating both sides, we get

$$f_{T_n}(t) = \sum_{j=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^{j-1}}{(j-1)!} - \sum_{j=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^j}{j!} = \lambda e^{-\lambda t} \frac{(\lambda t)^{n-1}}{(n-1)!}$$

which is the density of a gamma$(n, \lambda)$ random variable.

REMARK: Note that, as expected, gamma$(1, \lambda)$ is Exp$(\lambda)$.
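A Monte Carlo illustration of the sum-of-exponentials fact (not in the original notes; a sketch):

```python
# The sum of n independent Exp(lam) variables should have the
# gamma(n, lam) density lam * exp(-lam*t) * (lam*t)^(n-1) / (n-1)!.
import numpy as np
from math import factorial

rng = np.random.default_rng(5)
n, lam = 3, 2.0
t_n = rng.exponential(scale=1/lam, size=(10**6, n)).sum(axis=1)

def gamma_pdf(t):
    return lam * np.exp(-lam * t) * (lam * t)**(n - 1) / factorial(n - 1)

hist, edges = np.histogram(t_n, bins=40, range=(0, 5), density=True)
mids = (edges[:-1] + edges[1:]) / 2
for t, h in list(zip(mids, hist))[::8]:
    print(f"t={t:.2f}  empirical={h:.3f}  gamma pdf={gamma_pdf(t):.3f}")
```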

We have

$$E[X] = \frac{1}{\Gamma(\alpha)} \int_0^{\infty} \lambda x e^{-\lambda x} (\lambda x)^{\alpha-1}\,dx = \frac{1}{\lambda\Gamma(\alpha)} \int_0^{\infty} \lambda e^{-\lambda x} (\lambda x)^{\alpha}\,dx = \frac{\Gamma(\alpha+1)}{\lambda\Gamma(\alpha)}$$

so

$$E[X] = \frac{\alpha}{\lambda}$$

Recall that the expected value of Exp$(\lambda)$ is $1/\lambda$, and this ties in nicely with the sum of exponentials being a gamma. It can also be shown that

$$\text{Var}(X) = \frac{\alpha}{\lambda^2}$$

DEFINITION: The gamma distribution with $\lambda = 1/2$ and $\alpha = n/2$, where $n$ is a positive integer, is called the $\chi^2_n$ distribution with $n$ degrees of freedom.

Section 5.7 The Distribution of a Function of a Random Variable

We formalize what we've already done.

EXAMPLE: Let $X$ be uniformly distributed over $(0,1)$. Let $Y = X^n$. For $0 \le y \le 1$ we have

$$F_Y(y) = P\{Y \le y\} = P\{X^n \le y\} = P\{X \le y^{1/n}\} = F_X(y^{1/n}) = y^{1/n}$$

Therefore, for $y \in (0,1)$ we have

$$f_Y(y) = \frac{1}{n}\, y^{1/n - 1}$$

and zero otherwise.

THEOREM: Let $X$ be a continuous random variable with density $f_X$. Suppose $g(x)$ is strictly monotone and differentiable. Let $Y = g(X)$. Then

$$f_Y(y) = \begin{cases} f_X(g^{-1}(y)) \left|\dfrac{d}{dy}\, g^{-1}(y)\right|, & \text{if } y = g(x) \text{ for some } x \\ 0, & \text{if } y \ne g(x) \text{ for all } x \end{cases}$$

Proof: Suppose $g'(x) > 0$ (the proof in the other case is similar). Suppose $y = g(x)$ for some $x$ (i.e. $y$ is in the range of $Y$). Then

$$F_Y(y) = P\{g(X) \le y\} = P\{X \le g^{-1}(y)\} = F_X(g^{-1}(y))$$

Differentiating both sides, we get

$$f_Y(y) = f_X(g^{-1}(y))\, \frac{d}{dy}\, g^{-1}(y)$$
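A simulation check of the $Y = X^n$ example (not in the original notes; a sketch; the comparison is restricted to $y > 0.1$ because the density blows up near 0):

```python
# For X uniform on (0,1) and Y = X^n, the density is f_Y(y) = (1/n) y^{1/n - 1}.
import numpy as np

rng = np.random.default_rng(6)
n = 3
y = rng.uniform(size=10**6) ** n

hist, edges = np.histogram(y, bins=18, range=(0.1, 1.0), density=False)
widths = np.diff(edges)
dens = hist / (widths * len(y))  # empirical density on each bin
mids = (edges[:-1] + edges[1:]) / 2
for t, h in list(zip(mids, dens))[::4]:
    print(f"y={t:.3f}  empirical={h:.3f}  formula={(1/n) * t**(1/n - 1):.3f}")
```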
