Lecture 13: Some Important Continuous Probability Distributions (Part 2)


Kateřina Staňková, Statistics (MAT1003), May 14, 2012

Outline
1 Erlang distribution: Formulation; Application of Erlang distribution; Gamma distribution
2 Various Exercises
3 Chi-squared distribution: Basics; Applications; Examples
Book: Sections …, 6.6

1 Erlang distribution

Formulation
What is the Erlang PDF? Let X_1, X_2, ..., X_n be IID exponentially distributed RVs with parameter λ. Then the RV X := X_1 + X_2 + ... + X_n has an Erlang distribution with parameters n and λ, written X ~ Erl(n, λ), with PDF

f(x) = λ^n x^(n−1) exp(−λx) / (n−1)!  for x > 0,
f(x) = 0 elsewhere.
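As a quick sanity check, this PDF can be evaluated directly from the formula; a minimal Python sketch (the function name `erlang_pdf` is ours, not from the book):

```python
import math

def erlang_pdf(x, n, lam):
    """Erl(n, lam) density: lam^n * x^(n-1) * exp(-lam*x) / (n-1)! for x > 0."""
    if x <= 0:
        return 0.0
    return lam**n * x**(n - 1) * math.exp(-lam * x) / math.factorial(n - 1)

# For n = 1 the Erlang density reduces to the exponential density lam*exp(-lam*x)
x, lam = 0.7, 2.0
print(erlang_pdf(x, 1, lam), lam * math.exp(-lam * x))  # the two values agree
```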

Application of Erlang distribution
In a Poisson process, the sum of n interarrival times has an Erlang distribution with parameters n and λ.

Example 5(c) (from before)
Suppose on average 6 people call some service number per minute. What is the probability that it takes at least one minute for 3 people to call?

Z = sum of 3 interarrival times, Z ~ Erl(3, 18) (λ = 18 for a 3-minute time unit), so we want P(Z ≥ 1/3):

P(Z ≥ 1/3) = ∫_{1/3}^∞ λ^n x^(n−1) exp(−λx) / (n−1)! dx = ∫_{1/3}^∞ (18³/2!) x² exp(−18x) dx

= [−162 x² exp(−18x)]_{x=1/3}^∞ + ∫_{1/3}^∞ 324 x exp(−18x) dx

= 18 exp(−6) + [−18 x exp(−18x)]_{x=1/3}^∞ + ∫_{1/3}^∞ 18 exp(−18x) dx

= 18 exp(−6) + 6 exp(−6) + [−exp(−18x)]_{x=1/3}^∞ = 25 exp(−6) ≈ 0.062
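The integration by parts can be cross-checked with the equivalent Poisson-count identity: the n-th arrival occurs after time t exactly when fewer than n arrivals fall in [0, t]. A small Python sketch (function name ours):

```python
import math

def erlang_tail(n, lam, t):
    """P(Z >= t) for Z ~ Erl(n, lam), via the Poisson-count identity:
    the n-th arrival occurs after t iff fewer than n arrivals land in [0, t]."""
    return sum(math.exp(-lam * t) * (lam * t)**k / math.factorial(k) for k in range(n))

print(erlang_tail(3, 18, 1/3))  # 25*exp(-6) ≈ 0.0620, matching the integral
```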

Gamma distribution
Derivation: One can prove that

(n−1)! = ∫_0^∞ t^(n−1) exp(−t) dt

Define the so-called Gamma function:

Γ(α) := ∫_0^∞ t^(α−1) exp(−t) dt  for α > 0

Then f(x) = λ^α x^(α−1) exp(−λx) / Γ(α) for x > 0 (and 0 elsewhere) is a PDF. An RV with such a PDF has a so-called Gamma distribution with parameters α and λ, X ~ Gamma(α, λ).

(Plug in n = 1 or α = 1 to obtain the exponential distribution.)
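The factorial identity can be checked numerically with Python's standard-library gamma function:

```python
import math

# The Gamma function interpolates the factorial: Gamma(n) = (n-1)! for integer n >= 1
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

print(math.gamma(5))  # 24.0, i.e. 4!
```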

2 Various Exercises

Example 6 (from last Thursday)
The net weight of a pack of coffee (500 grams) is a normally distributed RV X with parameters µ = 505 grams and σ = 5 grams; the packaging weight is Y ~ N(10, 1).

(a) What is the probability that the net weight of a pack is at least 500 grams?

P(X ≥ 500) = P(Z ≥ (500 − µ)/σ) = P(Z ≥ (500 − 505)/5) = P(Z ≥ −1)
= 1 − P(Z ≤ −1) = 1 − 0.1587 = 0.8413

(b) What is the probability that the gross weight of a pack is between 510 and 520 grams?

A: gross weight of a pack, A = X + Y ~ N(µ_X + µ_Y, √(σ_X² + σ_Y²)) = N(505 + 10, √(25 + 1)) = N(515, √26)

Z = (A − 515)/√26 ~ N(0, 1), so
P(510 ≤ A ≤ 520) = P((510 − 515)/√26 ≤ Z ≤ (520 − 515)/√26) = P(−0.98 ≤ Z ≤ 0.98)
= P(Z ≤ 0.98) − P(Z ≤ −0.98) = 0.8365 − 0.1635 = 0.6730
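Both probabilities can be reproduced without tables using the error function, since Φ(z) = (1 + erf(z/√2))/2; a short Python sketch (the helper name `phi` is ours):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# (a) X ~ N(505, 5): P(X >= 500)
print(1 - phi((500 - 505) / 5))                     # ≈ 0.8413

# (b) A ~ N(515, sqrt(26)): P(510 <= A <= 520)
s = math.sqrt(26)
print(phi((520 - 515) / s) - phi((510 - 515) / s))  # ≈ 0.673 with unrounded z
```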

Example 6, continued
(c) What is the probability that the gross weight of 10 packs is between 5120 and 5170 grams?

B: gross weight of 10 packs, B = Σ_{i=1}^{10} (X_i + Y_i), with X_i ~ N(505, 5), Y_i ~ N(10, 1)
B ~ N(10µ_X + 10µ_Y, √(10σ_X² + 10σ_Y²)) = N(5150, √260)

Z = (B − 5150)/√260 ~ N(0, 1)
P(5120 ≤ B ≤ 5170) = P((5120 − 5150)/√260 ≤ Z ≤ (5170 − 5150)/√260) = P(−1.86 ≤ Z ≤ 1.24)
= P(Z ≤ 1.24) − P(Z ≤ −1.86) = 0.8925 − 0.0314 = 0.8611
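The same answer can be checked by simulating 10 packs directly (a Monte Carlo sketch; the seed and trial count are arbitrary choices of ours):

```python
import random

random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    # gross weight of 10 packs: each pack is net weight N(505, 5) plus packaging N(10, 1)
    b = sum(random.gauss(505, 5) + random.gauss(10, 1) for _ in range(10))
    hits += 5120 <= b <= 5170
print(hits / trials)  # ≈ 0.861
```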

Example 6, continued
(d) 80% of the packs have a gross weight heavier than a particular weight. What is that weight?

A ~ N(515, √26). We want a such that P(A ≥ a) = 0.8.
Consider Z = (A − 515)/√26 ~ N(0, 1):
P(A ≥ a) = P(Z ≥ (a − 515)/√26) = 0.8, i.e. P(Z ≤ (a − 515)/√26) = 0.2
But P(Z ≤ z) = 0.2 for z = −0.84
Apparently (a − 515)/√26 = −0.84, so a = 515 − 0.84·√26 ≈ 510.7

In general: if X ~ N(µ, σ) and P(X ≤ x) = p, then P(Z ≤ z) = p with z = (x − µ)/σ. Look up z in Table A3 and then calculate x = σz + µ.
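Table A3's lookup can be replaced by inverting Φ numerically, for example by bisection; a Python sketch (helper names ours):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def phi_inv(p):
    """Standard normal quantile by bisection on [-10, 10]; phi is increasing."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z = phi_inv(0.2)             # ≈ -0.8416 (the table's -0.84)
a = 515 + z * math.sqrt(26)
print(a)                     # ≈ 510.71
```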

Example 6, continued
(e) What is the probability that the difference of the net weights of two packs is at most two grams?

X_1, X_2 ~ N(505, 5). Question: P(|X_1 − X_2| ≤ 2) = P(−2 ≤ X_1 − X_2 ≤ 2)
X_1 − X_2 ~ N(µ_{X_1} − µ_{X_2}, σ_{X_1 − X_2}) with σ_{X_1 − X_2} = √V(X_1 − X_2)
V(X_1 − X_2) = V(X_1) + V(−X_2) = 1²·V(X_1) + (−1)²·V(X_2) = V(X_1) + V(X_2)
Hence X_1 − X_2 ~ N(0, √(σ_{X_1}² + σ_{X_2}²)) = N(0, √(25 + 25)) = N(0, √50)

Z = ((X_1 − X_2) − 0)/√50 ~ N(0, 1)
P(−2 ≤ X_1 − X_2 ≤ 2) = P(X_1 − X_2 ≤ 2) − P(X_1 − X_2 ≤ −2)
= P(Z ≤ 2/√50) − P(Z ≤ −2/√50) = P(Z ≤ 0.28) − P(Z ≤ −0.28)
= 0.6103 − 0.3897 = 0.2206
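With the exact z = 2/√50 instead of the rounded 0.28, the probability collapses to a single error-function evaluation, since for D ~ N(0, √50) we have P(|D| ≤ 2) = erf(2/(√50·√2)):

```python
import math

# D = X1 - X2 ~ N(0, sqrt(50)); P(|D| <= 2) = erf(2 / (sqrt(50) * sqrt(2))) = erf(0.2)
print(math.erf(2 / math.sqrt(100)))  # ≈ 0.2227 (rounding z to 0.28 gives the 0.2206 above)
```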

Observation
Let Z ~ N(0, 1). For the cumulative distribution function of the RV X = Z² we have, for x > 0:

F(x) = P(X ≤ x) = P(Z² ≤ x) = P(|Z| ≤ √x) = P(−√x ≤ Z ≤ √x) = G(√x) − G(−√x)

where G is the cumulative distribution function of Z. So

f(x) = F'(x) = (1/(2√x)) G'(√x) + (1/(2√x)) G'(−√x)
= (1/(2√x)) (1/√(2π)) exp(−x/2) + (1/(2√x)) (1/√(2π)) exp(−x/2)
= x^(−1/2) exp(−x/2) / √(2π)

Apparently Γ(1/2) = √π, and X = Z² has a Gamma distribution with parameters λ = 1/2 (β = 2) and α = 1/2.
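A pointwise Python check of this conclusion (helper names ours): Γ(1/2) = √π, and the derived density coincides with the Gamma density for α = 1/2, λ = 1/2:

```python
import math

assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))

def gamma_pdf(x, alpha, lam):
    """Gamma(alpha, lam) density for x > 0."""
    return lam**alpha * x**(alpha - 1) * math.exp(-lam * x) / math.gamma(alpha)

def z_squared_pdf(x):
    # density of Z^2 derived above: x^(-1/2) * exp(-x/2) / sqrt(2*pi)
    return math.exp(-x / 2) / math.sqrt(2 * math.pi * x)

for x in (0.1, 1.0, 3.7):
    assert math.isclose(gamma_pdf(x, 0.5, 0.5), z_squared_pdf(x))
print("Z^2 has the Gamma(alpha=1/2, lambda=1/2) density")
```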

3 Chi-squared distribution

Basics
Formulation: If X_1, X_2, ..., X_n are IID with X_i ~ N(0, 1), then

χ² := X_1² + X_2² + ... + X_n²

has a Gamma distribution with parameters α = n/2 and λ = 1/2 (β = 2). This distribution is also called a Chi-squared distribution with n degrees of freedom. Notation: χ² ~ χ²_n.

Book, Table A5: lists x such that P(χ² ≥ x) = α for several values of v (degrees of freedom) and α.
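A χ²_n variable should therefore have mean n and variance 2n (the moments of Gamma(α = n/2, λ = 1/2)); a quick Monte Carlo check in Python (seed and sample size are arbitrary choices of ours):

```python
import random
import statistics

random.seed(0)
n, trials = 5, 100_000
# chi-squared with n degrees of freedom: sum of n squared standard normals
samples = [sum(random.gauss(0, 1)**2 for _ in range(n)) for _ in range(trials)]
print(statistics.fmean(samples))     # ≈ 5  (mean n)
print(statistics.variance(samples))  # ≈ 10 (variance 2n)
```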

Applications
Estimation of σ: Let X_1, X_2, ..., X_n ~ N(µ, σ) be IID with known µ and unknown σ.
Then Z_i = (X_i − µ)/σ ~ N(0, 1), so Z_i² = (X_i − µ)²/σ² ~ χ²_1, and

χ² := Σ_{i=1}^n Z_i² = (1/σ²) Σ_{i=1}^n (X_i − µ)² ~ χ²_n

112 Examples Example 7 Let X 1, X 2,..., X n N (µ, σ) IDD with µ = 5 and unknown σ. The realizations for X 1,..., X 5 are 3.5, 5.7, 1.2, 6.8, and 7.1. What values of σ are reasonable? Let χ 2 def = 1 5 σ 2 i=1 (X i µ) 2 = 1 (( 1.5) 2 + (0.7) 2 + ( 3.8) 2 + (1.8) 2 + (2.1) 2 ) = σ 2 σ 2 Table A5: P(χ ) = and P(χ ) = So (fill the realization for χ 2 :) P( σ ) = = 0.95 P(σ 2 [1.935, 29.88]) = 0.95 P(σ [1.391, 5.466]) = 0.95 Apparently reasonable values for σ are between and We call [1.391, 5.466] a 95 % confidence interval for σ Similarly, [1.935, 29.88] is a 95 % confidence interval for σ 2
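The interval above can be reproduced programmatically (a sketch assuming SciPy is available; the Table A5 quantiles are obtained from `chi2.ppf` instead of the book's table):

```python
from scipy.stats import chi2

mu = 5.0
data = [3.5, 5.7, 1.2, 6.8, 7.1]
n = len(data)

ss = sum((x - mu) ** 2 for x in data)  # 24.83, as on the slide

# 95% CI for sigma^2 via the pivot ss/sigma^2 ~ chi-squared with n df
lo, hi = chi2.ppf(0.975, df=n), chi2.ppf(0.025, df=n)  # ~ 12.833 and 0.831
ci_var = (ss / lo, ss / hi)
ci_sigma = tuple(v ** 0.5 for v in ci_var)

print(ci_var)    # close to the slide's [1.935, 29.88]
print(ci_sigma)  # close to the slide's [1.391, 5.466]
```

Tiny differences from the slide come from rounding the table values to three decimals.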

113–125 Examples Example 8: What if µ is unknown as well?

Data: X₁, X₂, ..., Xₙ ∼ N(µ, σ) IID with µ, σ unknown.

Some calculation shows:

Σᵢ₌₁ⁿ (xᵢ − µ)² = Σᵢ₌₁ⁿ ((xᵢ − x̄) + (x̄ − µ))²
= Σᵢ₌₁ⁿ (xᵢ − x̄)² + 2 Σᵢ₌₁ⁿ (xᵢ − x̄)(x̄ − µ) + Σᵢ₌₁ⁿ (x̄ − µ)²
= Σᵢ₌₁ⁿ (xᵢ − x̄)² + n(x̄ − µ)²,

since the cross term vanishes (Σᵢ (xᵢ − x̄) = 0). Now, since s² = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)², dividing by σ² gives

(1/σ²) Σᵢ₌₁ⁿ (xᵢ − µ)² = (n−1)s²/σ² + (x̄ − µ)²/(σ²/n),

where the left-hand side is ∼ χ²ₙ. Moreover, X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ ∼ N(µ, σ/√n), so

(X̄ − µ)²/(σ²/n) = ((X̄ − µ)/(σ/√n))² ∼ χ²₁

Hence the statistic

χ² = (n−1)s²/σ² = Σᵢ₌₁ⁿ (xᵢ − x̄)²/σ² ∼ χ²ₙ₋₁
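The sum-of-squares decomposition above is an algebraic identity, so it holds for every sample, not just in expectation. A minimal numerical check (assuming NumPy; the sample values and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 5.0
x = rng.normal(mu, 2.0, size=10)  # any sample works

xbar = x.mean()
n = len(x)

lhs = ((x - mu) ** 2).sum()
rhs = ((x - xbar) ** 2).sum() + n * (xbar - mu) ** 2

# The cross term 2*sum((x_i - xbar)*(xbar - mu)) vanishes because
# sum(x_i - xbar) = 0, so both sides agree exactly (up to float error).
print(np.isclose(lhs, rhs))  # True
```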

126–138 Examples Example 7 (b)

Let X₁, X₂, ..., Xₙ ∼ N(µ, σ) be IID with µ = 5 and unknown σ. The realizations for X₁, ..., X₅ are 3.5, 5.7, 1.2, 6.8, and 7.1. What values of σ are reasonable?

Let χ² def= (n−1)s²/σ² ∼ χ²ₙ₋₁ = χ²₄

x̄ = (3.5 + 5.7 + 1.2 + 6.8 + 7.1)/5 = 4.86

(n−1)s² = Σᵢ₌₁⁵ (xᵢ − x̄)² = 24.732

χ² = 24.732/σ²

Construct a 95% confidence interval for σ:

Table A5: P(χ² ≤ 0.484) = 0.025 and P(χ² ≤ 11.143) = 0.975
P(0.484 ≤ 24.732/σ² ≤ 11.143) = 0.95
P(24.732/11.143 ≤ σ² ≤ 24.732/0.484) = 0.95
95% CI for σ²: [24.732/11.143, 24.732/0.484] = [2.220, 51.099]
95% CI for σ: [1.490, 7.148]

Important!: These calculations use the fact that the Xᵢ are normally distributed. If that is not the case, we cannot use χ²-distributions!
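This version of the interval, based on the χ²ₙ₋₁ pivot with the sample mean, can be reproduced in the same way (a sketch assuming SciPy; `chi2.ppf` replaces the Table A5 lookup):

```python
from scipy.stats import chi2

data = [3.5, 5.7, 1.2, 6.8, 7.1]
n = len(data)
xbar = sum(data) / n                     # 4.86

ss = sum((x - xbar) ** 2 for x in data)  # (n-1)*s^2 = 24.732

# Pivot: ss/sigma^2 ~ chi-squared with n-1 = 4 degrees of freedom
lo, hi = chi2.ppf(0.975, df=n - 1), chi2.ppf(0.025, df=n - 1)  # ~ 11.143 and 0.484
ci_var = (ss / lo, ss / hi)
ci_sigma = tuple(v ** 0.5 for v in ci_var)

print(ci_var)    # close to the slide's [2.220, 51.099]
print(ci_sigma)  # close to the slide's [1.490, 7.148]
```

Note the interval is wider than in Example 7: one degree of freedom is spent estimating µ by x̄.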


More information

Generating Random Variates

Generating Random Variates CHAPTER 8 Generating Random Variates 8.1 Introduction...2 8.2 General Approaches to Generating Random Variates...3 8.2.1 Inverse Transform...3 8.2.2 Composition...11 8.2.3 Convolution...16 8.2.4 Acceptance-Rejection...18

More information

Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations

Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations 56 Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations Florin-Cătălin ENACHE

More information

A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA

A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA REVSTAT Statistical Journal Volume 4, Number 2, June 2006, 131 142 A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA Authors: Daiane Aparecida Zuanetti Departamento de Estatística, Universidade Federal de São

More information

Continuous Random Variables

Continuous Random Variables Chapter 5 Continuous Random Variables 5.1 Continuous Random Variables 1 5.1.1 Student Learning Objectives By the end of this chapter, the student should be able to: Recognize and understand continuous

More information

STAT 830 Convergence in Distribution

STAT 830 Convergence in Distribution STAT 830 Convergence in Distribution Richard Lockhart Simon Fraser University STAT 830 Fall 2011 Richard Lockhart (Simon Fraser University) STAT 830 Convergence in Distribution STAT 830 Fall 2011 1 / 31

More information

Lecture 7: Continuous Random Variables

Lecture 7: Continuous Random Variables Lecture 7: Continuous Random Variables 21 September 2005 1 Our First Continuous Random Variable The back of the lecture hall is roughly 10 meters across. Suppose it were exactly 10 meters, and consider

More information

Math 370/408, Spring 2008 Prof. A.J. Hildebrand. Actuarial Exam Practice Problem Set 5 Solutions

Math 370/408, Spring 2008 Prof. A.J. Hildebrand. Actuarial Exam Practice Problem Set 5 Solutions Math 370/408, Spring 2008 Prof. A.J. Hildebrand Actuarial Exam Practice Problem Set 5 Solutions About this problem set: These are problems from Course 1/P actuarial exams that I have collected over the

More information

Part 2: One-parameter models

Part 2: One-parameter models Part 2: One-parameter models Bernoilli/binomial models Return to iid Y 1,...,Y n Bin(1, θ). The sampling model/likelihood is p(y 1,...,y n θ) =θ P y i (1 θ) n P y i When combined with a prior p(θ), Bayes

More information

Survival Distributions, Hazard Functions, Cumulative Hazards

Survival Distributions, Hazard Functions, Cumulative Hazards Week 1 Survival Distributions, Hazard Functions, Cumulative Hazards 1.1 Definitions: The goals of this unit are to introduce notation, discuss ways of probabilistically describing the distribution of a

More information

Wes, Delaram, and Emily MA751. Exercise 4.5. 1 p(x; β) = [1 p(xi ; β)] = 1 p(x. y i [βx i ] log [1 + exp {βx i }].

Wes, Delaram, and Emily MA751. Exercise 4.5. 1 p(x; β) = [1 p(xi ; β)] = 1 p(x. y i [βx i ] log [1 + exp {βx i }]. Wes, Delaram, and Emily MA75 Exercise 4.5 Consider a two-class logistic regression problem with x R. Characterize the maximum-likelihood estimates of the slope and intercept parameter if the sample for

More information

STAT 3502. x 0 < x < 1

STAT 3502. x 0 < x < 1 Solution - Assignment # STAT 350 Total mark=100 1. A large industrial firm purchases several new word processors at the end of each year, the exact number depending on the frequency of repairs in the previous

More information

Optimization of Business Processes: An Introduction to Applied Stochastic Modeling. Ger Koole Department of Mathematics, VU University Amsterdam

Optimization of Business Processes: An Introduction to Applied Stochastic Modeling. Ger Koole Department of Mathematics, VU University Amsterdam Optimization of Business Processes: An Introduction to Applied Stochastic Modeling Ger Koole Department of Mathematics, VU University Amsterdam Version of March 30, 2010 c Ger Koole, 1998 2010. These lecture

More information

Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics

Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics For 2015 Examinations Aim The aim of the Probability and Mathematical Statistics subject is to provide a grounding in

More information

Principle of Data Reduction

Principle of Data Reduction Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

More information

Sums of Independent Random Variables

Sums of Independent Random Variables Chapter 7 Sums of Independent Random Variables 7.1 Sums of Discrete Random Variables In this chapter we turn to the important question of determining the distribution of a sum of independent random variables

More information

Random variables, probability distributions, binomial random variable

Random variables, probability distributions, binomial random variable Week 4 lecture notes. WEEK 4 page 1 Random variables, probability distributions, binomial random variable Eample 1 : Consider the eperiment of flipping a fair coin three times. The number of tails that

More information

36 CHAPTER 1. LIMITS AND CONTINUITY. Figure 1.17: At which points is f not continuous?

36 CHAPTER 1. LIMITS AND CONTINUITY. Figure 1.17: At which points is f not continuous? 36 CHAPTER 1. LIMITS AND CONTINUITY 1.3 Continuity Before Calculus became clearly de ned, continuity meant that one could draw the graph of a function without having to lift the pen and pencil. While this

More information

Statistical Functions in Excel

Statistical Functions in Excel Statistical Functions in Excel There are many statistical functions in Excel. Moreover, there are other functions that are not specified as statistical functions that are helpful in some statistical analyses.

More information

Lecture. S t = S t δ[s t ].

Lecture. S t = S t δ[s t ]. Lecture In real life the vast majority of all traded options are written on stocks having at least one dividend left before the date of expiration of the option. Thus the study of dividends is important

More information

Math 425 (Fall 08) Solutions Midterm 2 November 6, 2008

Math 425 (Fall 08) Solutions Midterm 2 November 6, 2008 Math 425 (Fall 8) Solutions Midterm 2 November 6, 28 (5 pts) Compute E[X] and Var[X] for i) X a random variable that takes the values, 2, 3 with probabilities.2,.5,.3; ii) X a random variable with the

More information

Random variables P(X = 3) = P(X = 3) = 1 8, P(X = 1) = P(X = 1) = 3 8.

Random variables P(X = 3) = P(X = 3) = 1 8, P(X = 1) = P(X = 1) = 3 8. Random variables Remark on Notations 1. When X is a number chosen uniformly from a data set, What I call P(X = k) is called Freq[k, X] in the courseware. 2. When X is a random variable, what I call F ()

More information

On Compulsory Per-Claim Deductibles in Automobile Insurance

On Compulsory Per-Claim Deductibles in Automobile Insurance The Geneva Papers on Risk and Insurance Theory, 28: 25 32, 2003 c 2003 The Geneva Association On Compulsory Per-Claim Deductibles in Automobile Insurance CHU-SHIU LI Department of Economics, Feng Chia

More information

Notes on Continuous Random Variables

Notes on Continuous Random Variables Notes on Continuous Random Variables Continuous random variables are random quantities that are measured on a continuous scale. They can usually take on any value over some interval, which distinguishes

More information

A Tutorial on Probability Theory

A Tutorial on Probability Theory Paola Sebastiani Department of Mathematics and Statistics University of Massachusetts at Amherst Corresponding Author: Paola Sebastiani. Department of Mathematics and Statistics, University of Massachusetts,

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

CONTINUOUS COUNTERPARTS OF POISSON AND BINOMIAL DISTRIBUTIONS AND THEIR PROPERTIES

CONTINUOUS COUNTERPARTS OF POISSON AND BINOMIAL DISTRIBUTIONS AND THEIR PROPERTIES Annales Univ. Sci. Budapest., Sect. Comp. 39 213 137 147 CONTINUOUS COUNTERPARTS OF POISSON AND BINOMIAL DISTRIBUTIONS AND THEIR PROPERTIES Andrii Ilienko Kiev, Ukraine Dedicated to the 7 th anniversary

More information

MATH2740: Environmental Statistics

MATH2740: Environmental Statistics MATH2740: Environmental Statistics Lecture 6: Distance Methods I February 10, 2016 Table of contents 1 Introduction Problem with quadrat data Distance methods 2 Point-object distances Poisson process case

More information

Generating Random Variables and Stochastic Processes

Generating Random Variables and Stochastic Processes Monte Carlo Simulation: IEOR E4703 Fall 2004 c 2004 by Martin Haugh Generating Random Variables and Stochastic Processes 1 Generating U(0,1) Random Variables The ability to generate U(0, 1) random variables

More information

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding

More information

Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091)

Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091) Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091) Magnus Wiktorsson Centre for Mathematical Sciences Lund University, Sweden Lecture 5 Sequential Monte Carlo methods I February

More information

WEEK #22: PDFs and CDFs, Measures of Center and Spread

WEEK #22: PDFs and CDFs, Measures of Center and Spread WEEK #22: PDFs and CDFs, Measures of Center and Spread Goals: Explore the effect of independent events in probability calculations. Present a number of ways to represent probability distributions. Textbook

More information