
Chapter 4. Markov Chains

4.1 Stochastic Processes

Often, we need to estimate probabilities using a collection of random variables. For example, an actuary may be interested in estimating the probability that he is able to buy a house in The Hamptons before his company goes bankrupt. To study this problem, the actuary needs to estimate the evolution of the reserves of his company over a period of time. For example, consider the following insurance model:

$U_t$ is the surplus at time $t$.
$P_t$ is the earned premium up to time $t$.
$S_t$ is the paid losses up to time $t$.

The basic equation is $U_t = U_0 + P_t - S_t$. We may be interested in the probability of ruin, i.e. the probability that at some moment $U_t < 0$. The continuous-time infinite-horizon survival probability is
$$\Pr\{U_t \ge 0 \text{ for all } t \ge 0 \mid U_0 = u\}.$$
The continuous-time finite-horizon survival probability is
$$\Pr\{U_t \ge 0 \text{ for all } 0 \le t \le \tau \mid U_0 = u\}, \quad \tau > 0.$$
Probabilities like the ones above are called ruin probabilities. We are also interested in the distribution of the time of ruin.

Another model is the one used for day traders. Suppose that a day trader decides to make a series of investments. Wisely, he wants to cut his losses, so he decides to sell when the price of the stock is either up 50% or down 10%. Let $P(k)$ be the price of the stock $k$ days after being bought. For each nonnegative integer $k$, $P(k)$ is a r.v. To study the considered problem, we need to consider the collection of r.v.'s $\{P(k) : k \in \{0, 1, 2, \dots\}\}$. We may be interested in the probability that the stock goes up 50% before going down 10%, or in the amount of time it takes the stock to go either 50% up or 10% down. To estimate these probabilities, we need to consider probabilities which depend on the whole collection $\{P(k) : k \in \{0, 1, 2, \dots\}\}$.

Recall that $\Omega$ is the sample space, consisting of the outcomes of a possible experiment. A probability $P$ is defined on the events (subsets) of $\Omega$. A r.v. $X$ is a function from $\Omega$ into $\mathbb{R}$.
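A first feel for hitting probabilities like these can be obtained by simulation once a model for the prices is fixed. The following sketch is ours, not part of the original notes: purely for concreteness it assumes i.i.d., equally likely multiplicative price steps of a fixed size, and estimates the chance of being up 50% before being down 10%.

```python
import random

def up_before_down(up=1.5, down=0.9, step=0.01, trials=10_000, seed=1):
    """Estimate P(price is up 50% before it is down 10%) under a toy
    model: i.i.d. equally likely multiplicative steps of size `step`.
    The model and all parameters here are illustrative assumptions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        price = 1.0
        while down < price < up:
            price *= (1 + step) if rng.random() < 0.5 else 1 / (1 + step)
        hits += price >= up
    return hits / trials

print(up_before_down())  # roughly 0.21 under this toy model
```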

Definition 4.1. A stochastic process is a collection of r.v.'s $\{X(t) : t \in T\}$ defined on the same probability space. $T$ is a parameter set, usually called the time set. If $T$ is discrete, $\{X(t) : t \in T\}$ is called a discrete-time process; usually $T = \{0, 1, 2, \dots\}$. If $T$ is an interval, $\{X(t) : t \in T\}$ is called a continuous-time process; usually $T = [0, \infty)$.

Let $\mathbb{R}^T$ be the collection of functions from $T$ into $\mathbb{R}$. A stochastic process $\{X(t) : t \in T\}$ defines a function from $\Omega$ into $\mathbb{R}^T$ as follows: $\omega \in \Omega \mapsto X(\cdot)(\omega) \in \mathbb{R}^T$. In some sense, a r.v. associates a number to each outcome, while a stochastic process associates a function to each outcome. A stochastic process is a way to assign probabilities to the set of functions $\mathbb{R}^T$. To find the probability of ruin, we need to study stochastic processes.

A stochastic process can be used to model: (a) the sequence of daily prices of a stock; (b) the surplus of an insurance company; (c) the price of a stock over time; (d) the evolution of investments; (e) the sequence of scores in a football game; (f) the sequence of failure times of a machine; (g) the sequence of hourly traffic loads at a node of a communication network; (h) the sequence of radar measurements of the position of an airplane.

4.2 Random Walk

Let $\{\epsilon_j\}_{j=1}^{\infty}$ be a sequence of i.i.d. r.v.'s with $P(\epsilon_j = 1) = p$ and $P(\epsilon_j = -1) = 1 - p$, where $0 < p < 1$. Let $X_0 = 0$ and let $X_n = \sum_{j=1}^{n} \epsilon_j$ for $n \ge 1$. The stochastic process $\{X_n : n \ge 0\}$ is called a random walk. Imagine a drunkard coming back home at night. We assume that the drunkard walks along a straight line, but he does not know which direction to take. After taking one step in some direction, the drunkard ponders which way home is and decides randomly which direction to take. The drunkard's path is a random walk.

Let $s_j = (\epsilon_j + 1)/2$. Then $P(s_j = 1) = p$ and $P(s_j = 0) = 1 - p = q$, and $S_n = \sum_{j=1}^{n} s_j$ has a binomial distribution with parameters $n$ and $p$. We have that, for each $m \le n$:
$P(S_n = k) = \binom{n}{k} p^k (1-p)^{n-k}$, for $0 \le k \le n$;
$E[S_n] = np$, $\mathrm{Var}(S_n) = np(1-p)$;
$S_m$ and $S_n - S_m$ are independent;
$\mathrm{Cov}(S_m, S_n - S_m) = 0$, $\mathrm{Cov}(S_m, S_n) = \mathrm{Cov}(S_m, S_m) = \mathrm{Var}(S_m) = mp(1-p)$;
$S_n - S_m$ has the distribution of $S_{n-m}$.

Since $S_n = (X_n + n)/2$ and $X_n = 2S_n - n$, we have, for each $m \le n$:
$P(X_n = j) = P\big(S_n = \tfrac{n+j}{2}\big) = \binom{n}{\frac{n+j}{2}} p^{\frac{n+j}{2}} (1-p)^{\frac{n-j}{2}}$, for $-n \le j \le n$ with $\tfrac{n+j}{2}$ an integer;
$E[X_n] = E[2S_n - n] = 2E[S_n] - n = n(2p - 1)$, $\mathrm{Var}(X_n) = \mathrm{Var}(2S_n - n) = 4np(1-p)$;
$X_m$ and $X_n - X_m$ are independent;
$\mathrm{Cov}(X_m, X_n - X_m) = 0$, $\mathrm{Cov}(X_m, X_n) = \mathrm{Cov}(X_m, X_m) = \mathrm{Var}(X_m) = 4mp(1-p)$;
$X_n - X_m$ has the distribution of $X_{n-m}$.
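The identities above are easy to sanity-check numerically. A small sketch of ours (not from the notes) evaluates the exact pmf of $X_n$ through the binomial identity and recovers the mean and variance:

```python
from math import comb

def pmf_X(n, j, p):
    """P(X_n = j) for the simple random walk via P(X_n = j) = P(S_n = (n+j)/2)."""
    if (n + j) % 2 or abs(j) > n:
        return 0.0
    k = (n + j) // 2
    return comb(n, k) * p**k * (1 - p)**(n - k)

p, n = 0.6, 10
mean = sum(j * pmf_X(n, j, p) for j in range(-n, n + 1))
var = sum(j**2 * pmf_X(n, j, p) for j in range(-n, n + 1)) - mean**2
print(mean, n * (2*p - 1))        # both ~ 2.0
print(var, 4 * n * p * (1 - p))   # both ~ 9.6
```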

In general, a random walk can start anywhere; i.e., we may assume that $X_0 = i$ and $X_n = i + \sum_{j=1}^{n} \epsilon_j$.

Example 4.1. Suppose that $\{X_n : n \ge 0\}$ is a random walk with $X_0 = 0$ and probability $p$ of a step to the right. Find: (i) $P\{X_4 = -2\}$; (ii) $P\{X_3 = -1, X_6 = 2\}$; (iii) $P\{X_5 = 1, X_{10} = 4, X_{16} = 2\}$.

Solution: (i)
$$P\{X_4 = -2\} = P\{\mathrm{Binom}(4, p) = (4 + (-2))/2\} = \binom{4}{1} p q^3 = 4pq^3.$$
(ii)
$$P\{X_3 = -1, X_6 = 2\} = P\{X_3 = -1, X_6 - X_3 = 3\} = P\{X_3 = -1\}P\{X_6 - X_3 = 3\} = \binom{3}{1} p q^2 \binom{3}{3} p^3 q^0 = 3p^4 q^2.$$
(iii)
$$P\{X_5 = 1, X_{10} = 4, X_{16} = 2\} = P\{X_5 = 1\} P\{X_{10} - X_5 = 3\} P\{X_{16} - X_{10} = -2\} = \binom{5}{3} p^3 q^2 \binom{5}{4} p^4 q \binom{6}{2} p^2 q^4 = 750 p^9 q^7.$$

Example 4.2. Suppose that $\{X_n : n \ge 0\}$ is a random walk with $X_0 = 0$ and probability $p$ of a step to the right. Show that: (i) if $m \le n$, then $\mathrm{Cov}(X_m, X_n) = 4mp(1-p)$; (ii) if $m \le n$, then $\mathrm{Var}(X_n - X_m) = 4(n-m)p(1-p)$; (iii) if $m \le n$, then $\mathrm{Var}(aX_m + bX_n) = (a+b)^2\, 4mp(1-p) + b^2\, 4(n-m)p(1-p)$.

Solution: (i) Since $X_m$ and $X_n - X_m$ are independent r.v.'s,
$$\mathrm{Cov}(X_m, X_n) = \mathrm{Cov}(X_m, X_m + X_n - X_m) = \mathrm{Cov}(X_m, X_m) + \mathrm{Cov}(X_m, X_n - X_m) = \mathrm{Var}(X_m) = 4mp(1-p).$$
(ii) Since $X_n - X_m$ has the distribution of $X_{n-m}$,
$$\mathrm{Var}(X_n - X_m) = \mathrm{Var}(X_{n-m}) = 4(n-m)p(1-p).$$
(iii) Since $X_m$ and $X_n - X_m$ are independent r.v.'s,
$$\mathrm{Var}(aX_m + bX_n) = \mathrm{Var}((a+b)X_m + b(X_n - X_m)) = \mathrm{Var}((a+b)X_m) + \mathrm{Var}(b(X_n - X_m)) = (a+b)^2\, 4mp(1-p) + b^2\, 4(n-m)p(1-p).$$

Example 4.3. Suppose that $\{X_n : n \ge 0\}$ is a random walk with $X_0 = 0$ and probability $p$ of a step to the right. Find: (i) $\mathrm{Var}(-3 + 2X_4)$; (ii) $\mathrm{Var}(-2 + 3X_2 - 2X_5)$; (iii) $\mathrm{Var}(3X_4 - 2X_5 + 4X_{10})$.

Solution: (i)
$$\mathrm{Var}(-3 + 2X_4) = (2)^2 \mathrm{Var}(X_4) = (2)^2(4)(4)p(1-p) = 64p(1-p).$$
(ii)
$$\mathrm{Var}(-2 + 3X_2 - 2X_5) = \mathrm{Var}((3-2)X_2 - 2(X_5 - X_2)) = \mathrm{Var}(X_2) + (-2)^2 \mathrm{Var}(X_5 - X_2) = (2)(4)p(1-p) + (-2)^2(3)(4)p(1-p) = 56p(1-p).$$
(iii)
$$\mathrm{Var}(3X_4 - 2X_5 + 4X_{10}) = \mathrm{Var}((3-2+4)X_4 + (-2+4)(X_5 - X_4) + 4(X_{10} - X_5)) = (5)^2(4)(4)p(1-p) + (2)^2(1)(4)p(1-p) + (4)^2(5)(4)p(1-p) = 736p(1-p).$$

Exercise 4.1. Let $\{S_n : n \ge 0\}$ be a random walk with $S_0 = 0$, $P[S_{n+1} = i+1 \mid S_n = i] = p$ and $P[S_{n+1} = i \mid S_n = i] = 1-p$. Find: (i) $P[S_{10} = 4]$; (ii) $P[S_4 = 1, S_{10} = 2]$; (iii) $P[S_3 = 1, S_9 = 3, S_{14} = 6]$; (iv) $\mathrm{Var}(S_7)$; (v) $\mathrm{Var}(5 - S_3 - S_{10} + 3S_{20})$; (vi) $\mathrm{Cov}(-S_3 + 5S_{10},\ 5 - S_4 + S_7)$.

Given $k > 0$, let $T_k$ be the first time that $S_n = k$, i.e. $T_k = \inf\{n \ge 0 : S_n = k\}$. Then $T_k$ has a negative binomial distribution,
$$P(T_k = n) = \binom{n-1}{k-1} p^k (1-p)^{n-k}, \quad n \ge k.$$
We have that $E[T_k] = \frac{k}{p}$ and $\mathrm{Var}(T_k) = \frac{kq}{p^2}$.

Exercise 4.2. The probability that a given driver stops to pick up a hitchhiker is $p = 0.004$; different drivers, of course, make their decisions to stop or not independently of each other. Given that our hitchhiker counted 30 cars passing her without stopping, what is the probability that she will be picked up by the 37th car or before?

Solution: You need to find the probability that any of the next seven cars stops. This is the complement of the probability that none of those cars stops. So it is $1 - (1 - 0.004)^7 = 0.0277$.

Gambler's ruin problem. Imagine a game played by two players. Player A starts with $k and player B starts with $(N - k). They play successive games until one of them is ruined. In every game they bet $1; the probability that A wins a game is p, and the probability that A loses

is $1 - p$. Assume that the outcomes in different games are independent. Let $X_n$ be player A's money after $n$ games. After one of the players is ruined, no more wagers are made: if $X_n = N$, then $X_m = N$ for each $m \ge n$; if $X_n = 0$, then $X_m = 0$ for each $m \ge n$. For $1 \le k \le N - 1$,
$$P(X_{n+1} = k+1 \mid X_n = k) = p, \qquad P(X_{n+1} = k-1 \mid X_n = k) = 1 - p.$$
Let $P_k$ be the probability that player A wins (player B goes broke). We will prove that
$$P_k = \frac{1 - (q/p)^k}{1 - (q/p)^N} \text{ if } p \ne \tfrac12, \qquad P_k = \frac{k}{N} \text{ if } p = \tfrac12.$$
$P_k$ is the probability that a random walk with $X_0 = k$ reaches $N$ before reaching 0; equivalently, the probability that a random walk goes up $N - k$ before it goes down $k$. If $N \to \infty$,
$$P_k = 1 - (q/p)^k \text{ if } p > \tfrac12, \qquad P_k = 0 \text{ if } p \le \tfrac12.$$
Now $P_k$ is the probability that a random walk with $X_0 = k$, where $k > 0$, never reaches 0.

Exercise 4.3. Two gamblers, A and B, make a series of $1 wagers where B has a 0.55 chance of winning and A has a 0.45 chance of winning each wager. What is the probability that B wins $10 before A wins $5?

Solution: From B's point of view, $p = 0.55$, $k = 5$, $N - k = 10$. So the probability that B wins $10 before A wins $5 is
$$\frac{1 - (q/p)^k}{1 - (q/p)^N} = \frac{1 - (0.45/0.55)^5}{1 - (0.45/0.55)^{15}} = 0.666.$$

Exercise 4.4. Suppose that on each play of a game a gambler either wins $1 with probability $p$ or loses $1 with probability $1 - p$, where $0 < p < 1$. The gambler continues betting until she or he is either up $n or down $m. What is the probability that the gambler quits a winner?

Solution: We have to find the probability that a random walk goes up $n$ before it goes down $m$. So $N - k = n$ and $k = m$, and the probability is
$$\frac{1 - (q/p)^m}{1 - (q/p)^{m+n}} \text{ if } p \ne \tfrac12, \qquad \frac{m}{m+n} \text{ if } p = \tfrac12.$$

Problem 4.1 (# 7, November 2001). For an allosaur with 10,000 calories stored at the start of a day:
(i) The allosaur uses calories uniformly at a rate of 5,000 per day. If his stored calories reach 0, he dies.
(ii) Each day, the allosaur eats 1 scientist (10,000 calories) with probability 0.45 and no

scientist with probability 0.55.
(iii) The allosaur eats only scientists.
(iv) The allosaur can store calories without limit until needed.
Calculate the probability that the allosaur ever has 15,000 or more calories stored.
(A) 0.54 (B) 0.57 (C) 0.60 (D) 0.63 (E) 0.66

Solution: This is the gambler's ruin problem with $p = 0.45$, $q = 0.55$ and, measuring in units of 5,000 calories, $X_0 = 2$. We need the probability that the walk goes up 1 before going down 2:
$$P_2 = \frac{1 - (q/p)^2}{1 - (q/p)^3} = \frac{1 - (11/9)^2}{1 - (11/9)^3} = 0.598.$$
The answer is (C).

4.3 Markov Chains

Definition 4.3. A Markov chain $\{X_n : n = 0, 1, 2, \dots\}$ is a stochastic process with countably many states such that for all states $i_0, i_1, \dots, i_{n-1}, i_n, j$,
$$P(X_{n+1} = j \mid X_0 = i_0, X_1 = i_1, \dots, X_{n-1} = i_{n-1}, X_n = i_n) = P(X_{n+1} = j \mid X_n = i_n).$$
In words, for a Markov chain the conditional distribution of any future state $X_{n+1}$, given the past states $X_0, \dots, X_{n-1}$ and the present state $X_n$, is independent of the past values and depends only on the present state. Having observed the process until time $n$, the distribution of the process from time $n + 1$ on depends only on the value of the process at time $n$.

Let $E$ be the set consisting of the possible states of the Markov chain. Usually $E = \{0, 1, 2, \dots\}$ or $E = \{1, 2, \dots, m\}$. We will assume that $E = \{0, 1, 2, \dots\}$. We define $P_{ij} = P(X_{n+1} = j \mid X_n = i)$; note that this probability does not depend on $n$. $P_{ij}$ is called the one-step transition probability from state $i$ into state $j$. We define $P^{(k)}_{ij} = P(X_{n+k} = j \mid X_n = i)$; $P^{(k)}_{ij}$ is called the $k$-step transition probability from state $i$ into state $j$.

We denote by $P = (P_{ij})_{i,j}$ the matrix consisting of the one-step transition probabilities,
$$P = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ P_{20} & P_{21} & P_{22} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$
and by $P^{(k)} = (P^{(k)}_{ij})_{i,j}$ the matrix consisting of the $k$-step transition probabilities. We have that $P^{(1)} = P$. Every row of the matrix $P^{(k)}$ is a conditional probability mass function, so the elements in each row of $P^{(k)}$ add up to one: for each $i$, $\sum_{j=0}^{\infty} P^{(k)}_{ij} = 1$.
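The ruin formula is one line of code, and a simulation confirms it. The sketch below (our own helper names, not from the notes) reproduces the allosaur computation with $k = 2$, $N = 3$, $p = 0.45$:

```python
import random

def win_prob(k, N, p):
    """P(a +/-1 random walk started at k reaches N before 0)."""
    if p == 0.5:
        return k / N
    r = (1 - p) / p
    return (1 - r**k) / (1 - r**N)

def win_prob_mc(k, N, p, trials=100_000, seed=1):
    """Monte Carlo check of win_prob."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = k
        while 0 < x < N:
            x += 1 if rng.random() < p else -1
        wins += (x == N)
    return wins / trials

print(win_prob(2, 3, 0.45))     # 0.598..., answer (C)
print(win_prob_mc(2, 3, 0.45))  # close to 0.598
```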

Chapman–Kolmogorov equations. Let $\alpha_i = P(X_0 = i)$. Then:
(i) $P(X_0 = i_0, X_1 = i_1, \dots, X_{n-1} = i_{n-1}, X_n = i_n) = \alpha_{i_0} P_{i_0 i_1} P_{i_1 i_2} \cdots P_{i_{n-1} i_n}$.
(ii) $P^{(2)}_{ij} = \sum_{k=0}^{\infty} P_{ik} P_{kj}$.
(iii) $P^{(n)}_{ij} = \sum_{k=0}^{\infty} P^{(n-1)}_{ik} P_{kj}$.
(iv) $P^{(n+m)}_{ij} = \sum_{k=0}^{\infty} P^{(n)}_{ik} P^{(m)}_{kj}$, i.e. $P^{(n+m)} = P^{(n)} P^{(m)}$ and $P^{(n)} = P^n$.
(v) $P(X_n = j) = \sum_{i=0}^{\infty} \alpha_i P^{(n)}_{ij}$.

Proof. Conditioning successively and using the Markov property,
$$P(X_0 = i_0, X_1 = i_1, \dots, X_n = i_n) = P(X_0 = i_0) P(X_1 = i_1 \mid X_0 = i_0) \cdots P(X_n = i_n \mid X_0 = i_0, \dots, X_{n-1} = i_{n-1})$$
$$= P(X_0 = i_0) P(X_1 = i_1 \mid X_0 = i_0) P(X_2 = i_2 \mid X_1 = i_1) \cdots P(X_n = i_n \mid X_{n-1} = i_{n-1}) = \alpha_{i_0} P_{i_0 i_1} P_{i_1 i_2} \cdots P_{i_{n-1} i_n}.$$
For $0 < n < m$,
$$P(X_0 = i, X_n = j, X_m = k) = P(X_0 = i) P(X_n = j \mid X_0 = i) P(X_m = k \mid X_n = j) = \alpha_i P^{(n)}_{ij} P^{(m-n)}_{jk}.$$
Hence,
$$P^{(n+m)}_{ij} = \frac{P(X_0 = i, X_{n+m} = j)}{P(X_0 = i)} = \sum_{k=0}^{\infty} \frac{P(X_0 = i, X_n = k, X_{n+m} = j)}{P(X_0 = i)} = \sum_{k=0}^{\infty} \frac{\alpha_i P^{(n)}_{ik} P^{(m)}_{kj}}{\alpha_i} = \sum_{k=0}^{\infty} P^{(n)}_{ik} P^{(m)}_{kj}.$$
Finally,
$$P(X_n = j) = \sum_{i=0}^{\infty} P(X_0 = i) P(X_n = j \mid X_0 = i) = \sum_{i=0}^{\infty} \alpha_i P^{(n)}_{ij}. \quad \text{Q.E.D.}$$

Exercise 4.5. Let $E = \{1, 2, 3\}$, with transition matrix
$$P = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}$$
and $\alpha = (1/2, 1/3, 1/6)$. Find: (i) $P^2$; (ii) $P^3$; (iii) $P(X_1 = 2)$; (iv) $P(X_0 = 1, X_3 = 3)$; (v) $P(X_0 = 1, X_1 = 2, X_2 = 3, X_3 = 1)$; (vi) $P(X_2 = 3 \mid X_1 = 3)$; (vii) $P(X_{12} = 2 \mid X_5 = 3, X_{10} = 1)$; (viii) $P(X_3 = 3, X_5 = 1 \mid X_0 = 1)$; (ix) $P(X_3 = 3 \mid X_0 = 1)$.
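Since $P^{(n)} = P^n$, every quantity in Exercise 4.5 reduces to a few matrix products, so the hand computations in the solution below can be checked with a short numpy sketch (ours, not from the notes):

```python
import numpy as np

P = np.array([[1/3, 2/3, 0],
              [1/2, 0, 1/2],
              [1/4, 1/4, 1/2]])
alpha = np.array([1/2, 1/3, 1/6])

P2, P3 = P @ P, P @ P @ P
print(alpha @ P)    # distribution of X_1: [0.375 0.375 0.25]
print(P3[0, 2])     # P(X_3 = 3 | X_0 = 1) = 0.2777...
```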

Solution: (i)
$$P^2 = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}^2 = \begin{pmatrix} 4/9 & 2/9 & 1/3 \\ 7/24 & 11/24 & 1/4 \\ 1/3 & 7/24 & 3/8 \end{pmatrix}$$
(ii)
$$P^3 = P^2 P = \begin{pmatrix} 37/108 & 41/108 & 30/108 \\ 56/144 & 37/144 & 51/144 \\ 101/288 & 91/288 & 96/288 \end{pmatrix}$$
(iii) $\alpha P = (1/2, 1/3, 1/6)P = (0.375, 0.375, 0.25)$. So $P(X_1 = 2) = 0.375$.
(iv) $P(X_0 = 1, X_3 = 3) = P(X_0 = 1) P^{(3)}_{1,3} = (0.5)(0.27778) = 0.13889$.
(v) $P(X_0 = 1, X_1 = 2, X_2 = 3, X_3 = 1) = P(X_0 = 1) P_{1,2} P_{2,3} P_{3,1} = (1/2)(2/3)(1/2)(1/4) = 1/24$.
(vi) $P(X_2 = 3 \mid X_1 = 3) = P_{3,3} = 1/2$.
(vii) $P(X_{12} = 2 \mid X_5 = 3, X_{10} = 1) = P(X_{12} = 2 \mid X_{10} = 1) = P^{(2)}_{1,2} = 2/9$.
(viii) $P(X_3 = 3, X_5 = 1 \mid X_0 = 1) = P^{(3)}_{1,3} P^{(2)}_{3,1} = (0.27778)(0.33333) = 0.09259$.
(ix) $P(X_3 = 3 \mid X_0 = 1) = P^{(3)}_{1,3} = 0.27778$.

Exercise 4.6. Let $\{X_n\}$ be a three-state Markov chain with a given transition matrix $P$.

Suppose that the chain starts at time 0 in state 1.
(a) Find the probability that at time 3 the chain is in state 1.
(b) Find the probability that the first time the chain is in state 2 is time 3.
(c) Find the probability that during times 1, 2, 3 the chain is ever in state 2.

Exercise 4.7. Suppose that 3 white balls and 5 black balls are distributed in two urns in such a way that each urn contains 4 balls. We say that the system is in state $i$ if the first urn contains $i$ white balls, $i = 0, 1, 2, 3$. At each stage, one ball is drawn at random from each urn; the ball drawn from the first urn is placed in the second and, conversely, the ball drawn from the second urn is placed in the first. Let $X_n$ denote the state of the system after the $n$-th stage. Prove that $\{X_n\}_{n \ge 1}$ is a Markov chain and find the matrix of transition probabilities.

Exercise 4.8. Let $E = \{1, 2\}$, with transition matrix
$$P = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix}.$$
Prove by induction that
$$P^n = \begin{pmatrix} \frac{1}{2} + \frac{1}{2}(2p-1)^n & \frac{1}{2} - \frac{1}{2}(2p-1)^n \\ \frac{1}{2} - \frac{1}{2}(2p-1)^n & \frac{1}{2} + \frac{1}{2}(2p-1)^n \end{pmatrix}.$$

Problem 4.2 (# 38, May 2000). For Shoestring Swim Club, with three possible financial states at the end of each year:
(i) State 0 means cash of 1500. If in state 0, aggregate member charges for the next year are set equal to operating expenses.
(ii) State 1 means cash of 500. If in state 1, aggregate member charges for the next year are set equal to operating expenses plus 1000, hoping to return the club to state 0.
(iii) State 2 means cash less than 0. If in state 2, the club is bankrupt and remains in state 2.
(iv) The club is subject to four risks each year. These risks are independent. Each of the four risks occurs at most once per year, but may recur in a subsequent year.
(v) Three of the four risks each have a cost of 1000 and a probability of occurrence 0.25 per year.
(vi) The fourth risk has a cost of 2000 and a probability of occurrence 0.10 per year.
(vii) Aggregate member charges are received at the beginning of the year.
(viii) $i = 0$.
Calculate the probability that the club is in state 2 at the end of three years, given that it is in state 0 at time 0.
(A) 0.24 (B) 0.27 (C) 0.30 (D) 0.37 (E) 0.56

Solution: We may join states 0 and 1 into a single (solvent) state. Then
$$P(\text{solvent} \to \text{solvent}) = P(\text{no big risk and at most one small risk}) = (0.9)(0.75)^3 + (0.9)(3)(0.25)(0.75)^2 = 0.759375.$$

So the probability of remaining solvent for three years is $(0.759375)^3 = 0.4379$, and the probability of being in state 2 at the end of three years is $1 - (0.759375)^3 = 0.56$. The answer is (E).

Problem 4.3 (# 6, Sample Test). You are given:
The Driveco Insurance Company classifies all of its auto customers into two classes: preferred, with annual expected losses of 400, and standard, with annual expected losses of 900. There will be no change in the expected losses for either class over the next three years. The one-year transition matrix between driver classes, with states $E = \{\text{preferred}, \text{standard}\}$, is
$$T = \begin{pmatrix} 0.40 & 0.60 \\ 0.15 & 0.85 \end{pmatrix}.$$
$i = 0.05$. Losses are paid at the end of each year. There are no expenses. All drivers insured with Driveco at the start of the period will remain insured for three years.
Calculate the 3-year term insurance single benefit premium for a standard driver.

Solution: Squaring the transition matrix,
$$T^2 = \begin{pmatrix} 0.25 & 0.75 \\ 0.1875 & 0.8125 \end{pmatrix}.$$
Hence the 3-year term insurance single benefit premium for a standard driver is
$$(1)(900)(1.05)^{-1} + (0)(400)(1.05)^{-1} + (0.85)(900)(1.05)^{-2} + (0.15)(400)(1.05)^{-2} + (0.8125)(900)(1.05)^{-3} + (0.1875)(400)(1.05)^{-3} = 2301.91.$$

Problem 4.4 (May 2001). The Simple Insurance Company starts at time $t = 0$ with a surplus of $S = 3$. At the beginning of every year, it collects a premium of $P = 2$. Every year, it pays a random claim amount:

Claim Amount   Probability of Claim Amount
0              0.15
1              0.25
2              0.50
4              0.10

Claim amounts are mutually independent. If, at the end of the year, Simple's surplus is more than 3, it pays a dividend equal to the amount of surplus in excess of 3. If Simple is unable to pay its claims, or if its surplus drops to 0, it goes out of business. Simple has no administrative expenses and its interest income is 0. Calculate the expected dividend at the end of the third year.
(A) 0.115 (B) 0.350 (C) 0.414 (D) 0.458 (E) 0.550

Solution: The Markov chain has states $E = \{0, 1, 2, 3\}$ (the surplus, 0 being ruin) and transition matrix
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.10 & 0.50 & 0.25 & 0.15 \\ 0.10 & 0 & 0.50 & 0.40 \\ 0 & 0.10 & 0 & 0.90 \end{pmatrix}.$$
We have $\alpha = (0, 0, 0, 1)$, $\alpha P = (0, 0.10, 0, 0.90)$ and $\alpha P^2 = (0.01, 0.14, 0.025, 0.825)$. The expected dividend at the end of the third year is
$$(0.025)(1)(0.15) + (0.825)(2)(0.15) + (0.825)(1)(0.25) = 0.4575.$$
The answer is (D).

Problem 4.5 (# 3, November 2001). The Simple Insurance Company starts at time 0 with a surplus of 3. At the beginning of every year, it collects a premium of 2. Every year, it pays a random claim amount as shown:

Claim Amount   Probability of Claim Amount
0              0.15
1              0.25
2              0.40
4              0.20

If, at the end of the year, Simple's surplus is more than 3, it pays a dividend equal to the amount of surplus in excess of 3. If Simple is unable to pay its claims, or if its surplus drops to 0, it goes out of business. Simple has no administrative expenses and interest is equal to 0. Calculate the probability that Simple will be in business at the end of three years.
(A) 0.00 (B) 0.49 (C) 0.59 (D) 0.90 (E) 1.00

Solution: Let states 0, 1, 2, 3 correspond to surplus of 0, 1, 2, 3. The probability that Simple will be in business at the end of three years is $P[X_3 \ne 0 \mid X_0 = 3]$. The one-year transition matrix is
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.20 & 0.40 & 0.25 & 0.15 \\ 0.20 & 0 & 0.40 & 0.40 \\ 0 & 0.20 & 0 & 0.80 \end{pmatrix}.$$
We have $\alpha = (0, 0, 0, 1)$, $\alpha P = (0, 0.20, 0, 0.80)$, $\alpha P^2 = (0.04, 0.24, 0.05, 0.67)$ and $\alpha P^3 = (0.098, 0.230, 0.080, 0.592)$. So $P[X_3 \ne 0 \mid X_0 = 3] = 1 - 0.098 = 0.902$. The answer is (D).

Problem 4.6 (November 2001). An insurance company is established on January 1.
(i) The initial surplus is 2.

(ii) On the 5th of every month a premium of 1 is collected.
(iii) In the middle of every month the company pays a random claim amount with distribution as follows:

x   p(x)
1   0.90
2   0.09
3   0.01

(iv) The company goes out of business if its surplus is 0 or less at any time.
(v) $i = 0$.
Calculate the largest number $m$ such that the probability that the company will be out of business at the end of $m$ complete months is less than 5%.
(A) 1 (B) 2 (C) 3 (D) 4 (E) 5

Solution: Let states 0, 1, 2 correspond to surplus of 0, 1, 2. The one-month transition matrix is
$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0.10 & 0.90 & 0 \\ 0.01 & 0.09 & 0.90 \end{pmatrix}.$$
We have $\alpha = (0, 0, 1)$, $\alpha P = (0.01, 0.09, 0.90)$, $\alpha P^2 = (0.028, 0.162, 0.81)$ and $\alpha P^3 = (0.0523, 0.2187, 0.729)$. So the answer is $m = 2$, choice (B).

Problem 4.7 (# 9, November 2001). A machine is in one of four states (F, G, H, I) and migrates annually among them according to a Markov process with transition matrix:

      F     G     H     I
F   0.20  0.80  0.00  0.00
G   0.50  0.00  0.50  0.00
H   0.75  0.00  0.00  0.25
I   1.00  0.00  0.00  0.00

At time 0, the machine is in state F. A salvage company will pay 500 at the end of 3 years if the machine is in state F. Assuming $\nu = 0.90$, calculate the actuarial present value at time 0 of this payment.
(A) 150 (B) 155 (C) 160 (D) 165 (E) 170

Solution: We need to find $P^{(3)}_{F,F}$. We have $\alpha = (1, 0, 0, 0)$. So $\alpha P = (0.2, 0.8, 0, 0)$, $\alpha P^2 = (0.44, 0.16, 0.40, 0)$, and $\alpha P^3 = (0.468, 0.352, 0.080, 0.100)$. Hence $P^{(3)}_{F,F} = 0.468$ and the actuarial present value at time 0 of the payment is $(0.468)(500)(0.9)^3 = 170.59$. The answer is (E).
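All of these exam solutions follow the same recipe: write down the one-step matrix and push the initial distribution through it. A numpy sketch of ours for the chain of Problem 4.5:

```python
import numpy as np

# Surplus chain from Problem 4.5 (state = surplus; 0 is absorbing ruin).
P = np.array([[1.00, 0.00, 0.00, 0.00],
              [0.20, 0.40, 0.25, 0.15],
              [0.20, 0.00, 0.40, 0.40],
              [0.00, 0.20, 0.00, 0.80]])
alpha = np.array([0.0, 0.0, 0.0, 1.0])   # start with surplus 3

for n in range(3):
    alpha = alpha @ P
    print(n + 1, alpha)

print("P(in business after 3 years) =", 1 - alpha[0])  # 0.902
```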

Random walk: the random walk is the Markov chain with states $E = \{0, \pm 1, \pm 2, \dots\}$ and one-step transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$.

Absorbing random walk with one barrier: the Markov chain with states $E = \{0, 1, 2, \dots\}$ and one-step transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$ for $i \ge 1$, and $P_{0,0} = 1$.

Absorbing random walk with two barriers: the Markov chain with states $E = \{0, 1, 2, \dots, N\}$ and one-step transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$ for $1 \le i \le N - 1$, and $P_{0,0} = P_{N,N} = 1$.

Reflecting random walk with one barrier: the Markov chain with states $E = \{0, 1, 2, \dots\}$ and one-step transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$ for $i \ge 1$, and $P_{0,1} = 1$.

4.4 Classification of States

4.4.1 Accessibility, communication

State $j$ is said to be accessible from state $i$ if $P^{(n)}_{ij} > 0$ for some $n \ge 0$. This is equivalent to $P((X_n)_{n \ge 0} \text{ ever enters } j \mid X_0 = i) > 0$. If this happens, we write $i \to j$. We say that $i \leftrightarrow j$ ($i$ and $j$ communicate) if $i \to j$ and $j \to i$. Since $P^{(0)}_{ii} = 1$, $i \leftrightarrow i$. We can put the information about accessibility in a graph: we draw an arrow from $i$ to $j$ if $P_{ij} > 0$. Usually, we do not draw the arrows from an element to itself.

Exercise 4.9. Let $E = \{1, 2, 3\}$ and let $P$ be a given transition matrix whose entries are zeros and ones. Represent, using a graph, the one-step accessibility of the states.

Exercise 4.10. Let $E = \{1, 2, 3\}$ and let the transition matrix be
$$P = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}.$$
Represent, using a graph, the one-step accessibility of the states.

The relation $i \leftrightarrow j$ is an equivalence relation, i.e. it satisfies:
(i) for each $i$, $i \leftrightarrow i$;
(ii) for each $i, j$: $i \leftrightarrow j$ if and only if $j \leftrightarrow i$;
(iii) for each $i, j, k$: if $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$.
So, we may decompose $E$ into disjoint equivalence classes. A class consists of elements which communicate among themselves and with no element from outside. A Markov chain is said to be irreducible if there is only one class, that is, if all states communicate with each other.
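For a finite chain, the communicating classes can be computed mechanically from the transitive closure of the one-step accessibility graph. A sketch of our own (states are indexed from 0):

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of a finite chain into communicating classes.
    reach[i, j] is True iff j is accessible from i."""
    n = len(P)
    reach = np.eye(n, dtype=bool) | (np.asarray(P) > 0)
    for k in range(n):                        # Floyd-Warshall closure
        reach |= reach[:, k:k+1] & reach[k:k+1, :]
    classes = []
    for i in range(n):
        cls = frozenset(j for j in range(n) if reach[i, j] and reach[j, i])
        if cls not in classes:
            classes.append(cls)
    return classes

P = [[1/3, 2/3, 0], [1/2, 0, 1/2], [1/4, 1/4, 1/2]]
print(communicating_classes(P))  # one class {0, 1, 2}: irreducible
```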

Exercise 4.11. Prove that if the number of states in a Markov chain is $M$, where $M$ is a positive integer, and if state $j$ can be reached from state $i$, where $i \ne j$, then it can be reached in $M$ steps or less.

Exercise 4.12. Consider the Markov chain with states $E = \{1, 2, 3, 4, 5\}$ and a given transition matrix $P$. Represent, using a graph, the one-step accessibility of the states. Identify the communicating classes.

Exercise 4.13. Let $\{X_n\}$ be a Markov chain with states $E = \{1, 2, 3, 4, 5, 6, 7, 8\}$ and a given one-step transition probability matrix $P$. Represent, using a graph, the one-step accessibility of the states. Find the communicating classes.

4.4.2 Recurrent and transient states

Let $f_i$ denote the probability that, starting in state $i$, the process ever reenters state $i$, i.e.
$$f_i = P(X_n = i \text{ for some } n \ge 1 \mid X_0 = i).$$
State $i$ is said to be recurrent if $f_i = 1$; state $i$ is said to be transient if $f_i < 1$. A state $i$ is absorbing if $P_{ii} = 1$; an absorbing state is recurrent.

If state $i$ is recurrent, $P(X_n = i \text{ for infinitely many } n \mid X_0 = i) = 1$. If state $i$ is transient,
$$P((X_n)_{n \ge 1} \text{ never enters } i \mid X_0 = i) = 1 - f_i,$$
$$P((X_n)_{n \ge 1} \text{ enters } i \text{ exactly } k \text{ times} \mid X_0 = i) = f_i^k (1 - f_i), \quad k \ge 0.$$

Let $N_i$ be the number of times the Markov chain visits state $i$, i.e. $N_i = \sum_{n=0}^{\infty} I(X_n = i)$. Then,

(i) if state $i$ is recurrent,
$$P(N_i = \infty \mid X_0 = i) = 1, \qquad E[N_i \mid X_0 = i] = \infty;$$
(ii) if state $i$ is transient,
$$P(N_i = k \mid X_0 = i) = f_i^{k-1}(1 - f_i), \; k \ge 1, \qquad E[N_i \mid X_0 = i] = \frac{1}{1 - f_i}.$$
We have that
$$E[N_i \mid X_0 = i] = E\Big[\sum_{n=0}^{\infty} I(X_n = i) \,\Big|\, X_0 = i\Big] = \sum_{n=0}^{\infty} P^{(n)}_{ii}.$$

Theorem 4.1. State $i$ is recurrent if $\sum_{n=0}^{\infty} P^{(n)}_{ii} = \infty$. State $i$ is transient if $\sum_{n=0}^{\infty} P^{(n)}_{ii} < \infty$. If state $i$ is transient, the average number of times spent at $i$ is
$$\frac{1}{1 - f_i} = \sum_{n=0}^{\infty} P^{(n)}_{ii}.$$

Corollary 4.1. If state $i$ is recurrent and states $i$ and $j$ communicate, then state $j$ is recurrent.

Proof. If states $i$ and $j$ communicate, there are integers $k$ and $m$ such that $P^{(m)}_{ji} P^{(k)}_{ij} > 0$. For each integer $n$,
$$P^{(m+n+k)}_{jj} \ge P^{(m)}_{ji} P^{(n)}_{ii} P^{(k)}_{ij},$$
so
$$\sum_{n=0}^{\infty} P^{(m+n+k)}_{jj} \ge P^{(m)}_{ji} P^{(k)}_{ij} \sum_{n=0}^{\infty} P^{(n)}_{ii} = \infty. \quad \text{Q.E.D.}$$

All states in the same communicating class are of the same type: either all are recurrent or all are transient.

Theorem 4.2. A finite-state Markov chain has at least one recurrent class.

Theorem 4.3 (definition). A communicating class $C$ is closed if the chain cannot leave $C$, i.e. for each $i \in C$, each $j \notin C$ and each $n \ge 0$, $P^{(n)}_{ij} = 0$.

Theorem 4.4. A recurrent class is closed.

We may have elements $i$ of a transient class $C$ such that $i \to j$ for some $j \notin C$. We may have elements $i$ of a recurrent class $C$ such that $j \to i$ for some $j \notin C$.

Theorem 4.5. A closed finite class is recurrent.

So, if $E$ has finitely many states, a class $C$ is recurrent if and only if it is closed.
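Combining the class computation with the closedness criterion of Theorem 4.5 gives a mechanical classifier for finite chains; the sketch below is our own and can be used to check exercises of this kind.

```python
import numpy as np

def classify_states(P):
    """For a finite chain: a communicating class is recurrent iff it is
    closed (no one-step transition out of the class)."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    reach = np.eye(n, dtype=bool) | (P > 0)
    for k in range(n):
        reach |= reach[:, k:k+1] & reach[k:k+1, :]
    out = {}
    for i in range(n):
        cls = [j for j in range(n) if reach[i, j] and reach[j, i]]
        closed = all(P[a, b] == 0 for a in cls for b in range(n) if b not in cls)
        out[i] = "recurrent" if closed else "transient"
    return out

P = [[0.5, 0.5, 0], [0.5, 0.5, 0], [0.25, 0.25, 0.5]]
print(classify_states(P))  # {0: 'recurrent', 1: 'recurrent', 2: 'transient'}
```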

Exercise 4.14. Let $\{X_n\}$ be a Markov chain with states $E = \{0, 1, 2, 3, 4\}$ and a given one-step transition probability matrix $P$. Find the communicating classes. Classify the states as transient or recurrent.

Exercise 4.15. Let $\{X_n\}$ be a Markov chain with states $E = \{1, 2, 3\}$ and a given one-step transition probability matrix $P$. Find the communicating classes. Classify the states as transient or recurrent.

Exercise 4.16. Let $\{X_n\}$ be a Markov chain with states $E = \{1, 2, 3, 4, 5\}$ and a given one-step transition probability matrix $P$. Find the communicating classes. Classify the states as transient or recurrent.

Exercise 4.17. Let $\{X_n\}$ be a Markov chain with states $E = \{1, 2, 3, 4, 5, 6, 7\}$ and a given one-step transition probability matrix $P$. Find the communicating classes. Classify the states as transient or recurrent.

Exercise 4.18. Let $\{X_n\}$ be a Markov chain with a given one-step transition probability matrix $P$

and states $E = \{1, 2, 3, 4, 5\}$. Find the communicating classes. Classify the states as transient or recurrent.

Exercise 4.19. Let $\{X_n\}$ be a Markov chain with states $E = \{1, 2, 3, 4, 5, 6, 7, 8\}$ and a given one-step transition probability matrix $P$. Find the communicating classes. Classify the states as transient or recurrent.

Exercise 4.20. Let $\{X_n\}$ be a Markov chain with states $E = \{1, 2, 3, 4, 5, 6, 7\}$ and a given one-step transition probability matrix $P$. Find the communicating classes. Classify the states as transient or recurrent.

Problem 4.8 (May 2001). The Simple Insurance Company starts at time $t = 0$ with a surplus of $S = 3$. At the beginning of every year, it collects a premium of $P = 2$. Every year, it pays a random claim amount:

Claim Amount   Probability of Claim Amount
0              0.15
1              0.25
2              0.50
4              0.10

Claim amounts are mutually independent. If, at the end of the year, Simple's surplus is more than 3, it pays a dividend equal to the amount of surplus in excess of 3. If Simple is unable to pay its claims, or if its surplus drops to 0, it goes out of business. Simple has no administrative expenses and its interest income is 0. Determine the probability that Simple will ultimately go out of business.
(A) 0.00 (B) 0.01 (C) 0.44 (D) 0.56 (E) 1.00

Solution: The Markov chain has states $E = \{0, 1, 2, 3\}$ and transition matrix
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.10 & 0.50 & 0.25 & 0.15 \\ 0.10 & 0 & 0.50 & 0.40 \\ 0 & 0.10 & 0 & 0.90 \end{pmatrix}.$$
State 0 is absorbing and is accessible from every other state, so $\{0\}$ is the only recurrent class and states 1, 2, 3 are transient. Hence the probability that Simple will ultimately go out of business is 1. The answer is (E).

Random walk: the random walk is the Markov chain with state space $\{0, \pm 1, \pm 2, \dots\}$ and transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$. The Markov chain is irreducible. By the law of large numbers, given $X_0 = i$, with probability one $\lim_n \frac{X_n}{n} = 2p - 1$. So, if $p > 1/2$, with probability one $\lim_n X_n = \infty$; if $p < 1/2$, with probability one $\lim_n X_n = -\infty$. So, if $p \ne 1/2$, the Markov chain is transient.

We have that
$$P^{(2n)}_{00} = \binom{2n}{n} p^n (1-p)^n.$$
By the Stirling formula, $n! \sim n^{n + 1/2} e^{-n} \sqrt{2\pi}$. So
$$P^{(2n)}_{00} \sim \frac{(4p(1-p))^n}{\sqrt{\pi n}}.$$
Hence, for $p = 1/2$, $P^{(2n)}_{00} \sim \frac{1}{\sqrt{\pi n}}$ and $\sum_{n=0}^{\infty} P^{(2n)}_{00} = \infty$. The random walk is recurrent for $p = 1/2$.

4.4.3 Periodicity

State $i$ has period $d$ if $P^{(n)}_{ii} = 0$ whenever $n$ is not divisible by $d$, and $d$ is the largest integer with this property. Starting in $i$, the Markov chain can return to $i$ only at the times $d, 2d, 3d, \dots$, and $d$ is the largest integer satisfying this property. A state with period 1 is said to be aperiodic.

Theorem 4.6. If $i$ has period $d$ and $i \leftrightarrow j$, then $j$ has period $d$.

Example 4.4. Consider the Markov chain with state space $E = \{1, 2, 3\}$ and a given transition matrix $P$. Find the communicating classes. Classify the communicating classes as transient or recurrent. Find the period $d$ of each communicating class.
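The period of a state can also be computed mechanically, as the gcd of the return times that occur with positive probability. A small sketch of ours, adequate for small chains:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, horizon=50):
    """Period of state i: gcd of the n with P^n[i, i] > 0.
    Scanning up to a finite horizon suffices for small chains."""
    P = np.asarray(P, dtype=float)
    Q = np.eye(len(P))
    times = []
    for n in range(1, horizon + 1):
        Q = Q @ P
        if Q[i, i] > 1e-12:
            times.append(n)
    return reduce(gcd, times) if times else 0

P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # deterministic 3-cycle
print(period(P, 0))  # 3
```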

Example 4.5. Consider a random walk on $E = \{0, 1, 2, 3, 4, 5, 6\}$ with a given transition matrix $P$. Find the communicating classes. Classify the communicating classes as transient or recurrent. Find the period $d$ of each communicating class.

Example 4.6. Consider another random walk on $E = \{0, 1, 2, 3, 4, 5, 6\}$ with a given transition matrix $P$. Find the communicating classes. Classify the communicating classes as transient or recurrent. Find the period $d$ of each communicating class.

Example 4.7. Consider the Markov chain with state space $E = \{1, 2, 3, 4\}$ and a given transition matrix $P$. Find the communicating classes. Classify the communicating classes as transient or recurrent. Find the period $d$ of each communicating class.

4.5 Limiting Probabilities

A recurrent state $i$ is said to be positive recurrent if, starting at $i$, the expected time until the process returns to state $i$ is finite. A recurrent state $i$ is said to be null recurrent if, starting at $i$, the expected time until the process returns to state $i$ is infinite. A positive recurrent aperiodic state is called ergodic.

Theorem 4.7. In a finite-state Markov chain all recurrent states are positive recurrent.

Theorem 4.8. For an irreducible ergodic Markov chain, $\lim_n P^{(n)}_{ij}$ exists and is independent of $i$. Furthermore, letting
$$\pi_j = \lim_n P^{(n)}_{ij}, \quad j \ge 0,$$
$(\pi_j)$ is the unique nonnegative solution of
$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \; j \ge 0, \qquad 1 = \sum_{i=0}^{\infty} \pi_i.$$
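For a finite irreducible chain, the system of Theorem 4.8 is a plain linear solve. A sketch of ours; the example is the two-state rain chain that appears in the exam problems later in this section:

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum(pi) = 1 as a linear system."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # stationarity + normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# P(rain tomorrow | rain today) = 0.5, P(rain | no rain) = 0.3:
print(stationary([[0.5, 0.5], [0.3, 0.7]]))  # [0.375 0.625] = (3/8, 5/8)
```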

The equations of Theorem 4.8 can be written in matrix form as
$$(\pi_0, \pi_1, \pi_2, \dots) = (\pi_0, \pi_1, \pi_2, \dots) \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ P_{20} & P_{21} & P_{22} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
$\pi_i$ is called the long-run proportion of time spent at state $i$. Let $a_j(N) = \sum_{n=1}^{N} I(X_n = j)$ be the amount of time the Markov chain spends in state $j$ during the periods $1, \dots, N$. Then, with probability one,
$$\lim_N \frac{1}{N} \sum_{n=1}^{N} I(X_n = j) = \pi_j.$$

Let $T_i = \inf\{n \ge 1 : X_n = i\}$. Given $X_0 = i$, $T_i$ is the time of the first return to state $i$. We have that $f_i = P(T_i < \infty \mid X_0 = i)$. A state is recurrent if $f_i = 1$; a recurrent state is positive recurrent if $E[T_i \mid X_0 = i] < \infty$ and null recurrent if $E[T_i \mid X_0 = i] = \infty$. We have that
$$\pi_i = \frac{1}{E[T_i \mid X_0 = i]},$$
so $1/\pi_i$ is the average time to return to state $i$.

Definition 4.5. A distribution $(\alpha_0, \alpha_1, \alpha_2, \dots)$ of initial probabilities is called a stationary distribution if, given that $X_0$ has this distribution, $P(X_n = j) = \alpha_j$ for each $n \ge 1$ and each $j \ge 0$. $(\alpha_0, \alpha_1, \alpha_2, \dots)$ is a stationary distribution if and only if
$$(\alpha_0, \alpha_1, \alpha_2, \dots) = (\alpha_0, \alpha_1, \alpha_2, \dots) P.$$
The long-run probabilities $(\pi_0, \pi_1, \pi_2, \dots)$ determine a stationary distribution.

Theorem 4.9. Let $\{X_n : n \ge 0\}$ be an irreducible Markov chain with long-run probabilities $(\pi_0, \pi_1, \pi_2, \dots)$, and let $r$ be a bounded function on the state space. Then, with probability one,
$$\lim_N \frac{1}{N} \sum_{n=1}^{N} r(X_n) = \sum_{j=0}^{\infty} r(j) \pi_j.$$

Exercise 4.21. Consider the Markov chain with state space $E = \{1, 2, 3, 4, 5, 6\}$ and a given transition matrix $P$.

Compute the matrix $F$. Compute $\lim_n P^n$. Compute
$$\lim_n \frac{1}{n+1} \sum_{m=1}^{n} E[r(X_m) \mid X_0 = i],$$
where $r = (1, 2, 3, 4, 5, 6)$.

Exercise 4.22. Let $E = \{1, 2\}$ and let
$$P = \begin{pmatrix} 1 - \alpha & \alpha \\ \beta & 1 - \beta \end{pmatrix}.$$
Show that if $0 < \alpha, \beta < 1$, the Markov chain is irreducible and ergodic. Find the limiting probabilities.

Exercise 4.23. Find the following limit, where $0 < a, b < 1$:
$$\lim_n \begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix}^n.$$

Exercise 4.24. Consider the Markov chain with state space $E = \{1, 2, 3\}$ and a given transition matrix $P$. Show that the Markov chain is irreducible and aperiodic. Find the limiting probabilities.

Exercise 4.25. A particle moves on a circle through points which have been marked 0, 1, 2, 3, 4 (in clockwise order). At each step it has probability $p$ of moving one point clockwise and $1 - p$ of moving one point counterclockwise. Let $X_n$ denote its location on the circle after the $n$-th step. The process $\{X_n : n \ge 0\}$ is a Markov chain. (a) Find the transition probability matrix. (b) Calculate the limiting probabilities.

Exercise 4.26. Automobile buyers of brands A, B and C stay with or change brands according to a given transition matrix. After several years, what share of the market does brand C have?

Exercise 4.27. Harry, the semipro. Our hero, Happy Harry, used to play semipro basketball, where he was a defensive specialist. His scoring productivity per game fluctuated among three states: 1 (scored 0 or 1 points), 2 (scored between 2 and 5 points), 3 (scored more than 5 points). Inevitably, if Harry scored a lot of points in one game, his jealous teammates

refused to pass him the ball in the next game, so his productivity in the next game was nil. The team statistician, Mrs. Doc, upon observing the transitions between states, concluded that these transitions could be modeled by a Markov chain with a given transition matrix $P$. What is the long-run proportion of games in which our hero had a high-scoring game? Check that the Markov chain is irreducible and ergodic. Find the long-run proportion vector $(\pi_1, \pi_2, \pi_3)$.

Exercise 4.28 (continuation of Exercise 4.27). The salary structure in the semipro leagues includes incentives for scoring: Harry was paid $40/game for a high-scoring performance, $30/game when he scored between 2 and 5 points, and only $20/game when he scored 1 point or less. What is the long-run proportion of games in which our hero had a high-scoring game? What was the long-run earning rate of our hero?

Exercise 4.29. Suppose that whether or not it rains tomorrow depends on past weather conditions only through the last two days. Specifically, suppose that if it has rained yesterday and today, then it will rain tomorrow with probability 0.8; if it rained yesterday but not today, then it will rain tomorrow with probability 0.3; if it rained today but not yesterday, then it will rain tomorrow with probability 0.4; and if it has rained neither yesterday nor today, then it will rain tomorrow with probability 0.2. What proportion of days does it rain?

Exercise 4.30. Repeat Exercise 4.28 for another given transition matrix $P$.

Exercise 4.31. At the start of a business day, an individual worker's computer is in one of three states: (1) operating, (2) down for the day, (3) down for the second consecutive day. Changes in status take place at the end of each day. If the computer is operating at the beginning of a day, there is a 5% chance it will be down at the start of the next day. If the computer has been down for one day at the start of a day, there is a 90% chance that it will be operable at the start of the next day. If the computer has been down for two consecutive days, it is automatically replaced by a new machine for the following day. Calculate the limiting probability that a worker will have an operable computer at the start of a day. Assume the state transitions satisfy the Markov chain assumptions.

Problem 4.9 (# 33, May 2000). In the state of Elbonia all adults are drivers. It is illegal to drive drunk. If you are caught, your driver's license is suspended for the following year. Driver's licenses are suspended only for drunk driving. If you are caught driving with a suspended license, your license is revoked and you are imprisoned for one year. Licenses are reinstated upon release from prison. Every year, 5% of adults with an active license have their license suspended for drunk driving. Every year, 40% of drivers with suspended licenses are caught driving. Assume that all changes in driving status take place on January 1, all drivers act independently, and the adult population does not change. Calculate the limiting probability of an Elbonian adult having a suspended license.
(A) 0.019 (B) 0.020 (C) 0.028 (D) 0.036 (E) 0.047

Solution: We have a Markov chain with states $E = \{1, 2, 3\}$, where 1 = {valid license}, 2 = {suspended license}, and 3 = {in prison for one year}. The transition matrix is
$$P = \begin{pmatrix} 0.95 & 0.05 & 0 \\ 0.60 & 0 & 0.40 \\ 1 & 0 & 0 \end{pmatrix}.$$
To find the limiting probabilities $\pi_1, \pi_2, \pi_3$, we solve
$$\pi_1 = 0.95\pi_1 + 0.6\pi_2 + \pi_3, \qquad \pi_2 = 0.05\pi_1, \qquad 1 = \pi_1 + \pi_2 + \pi_3.$$
We get $\pi_1 = \frac{100}{107}$, $\pi_2 = \frac{5}{107}$, $\pi_3 = \frac{2}{107}$. So the answer is $\pi_2 = \frac{5}{107} = 0.0467$, choice (E).

Problem 4.10 (# 40, May 2000). Rain is modeled as a Markov process with two states:
(i) If it rains today, the probability that it rains tomorrow is 0.50.
(ii) If it does not rain today, the probability that it rains tomorrow is 0.30.
Calculate the limiting probability that it rains on two consecutive days.
(A) 0.12 (B) 0.14 (C) 0.16 (D) 0.19 (E) 0.22

Solution: We have a Markov chain with states $E = \{1, 2\}$, where 1 = {rain} and 2 = {no rain}. The transition matrix is
$$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.3 & 0.7 \end{pmatrix}.$$
To find the limiting probabilities $\pi_1, \pi_2$, we solve
$$\pi_1 = 0.5\pi_1 + 0.3\pi_2, \qquad 1 = \pi_1 + \pi_2.$$
We get $\pi_1 = \frac{3}{8}$, $\pi_2 = \frac{5}{8}$. So the answer is
$$\lim_n P[X_n = 1, X_{n+1} = 1] = \lim_n P[X_n = 1]\, P_{11} = \frac{3}{8} \cdot \frac{1}{2} = \frac{3}{16} = 0.1875,$$
choice (D).

Problem 4.11 (# 34, November 2000). The Town Council purchases weather insurance for their annual July 4 picnic.
(i) For each of the next three years, the insurance pays 1000 on July 4 if it rains on that day.
(ii) Weather is modeled by a Markov chain, with two states, as follows: the probability that it rains on a day is 0.50 if it rained on the prior day; the probability that it rains on a day is 0.20 if it did not rain on the prior day.
(iii) $i = 0.10$.
Calculate the single benefit premium for this insurance, purchased one year before the first scheduled picnic.
(A) 340 (B) 420 (C) 540 (D) 710 (E) 760

Solution: We have a Markov chain with states $E = \{1, 2\}$, where 1 = {rain} and 2 = {no rain}. The transition matrix is
$$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.2 & 0.8 \end{pmatrix}.$$
To find the limiting probabilities $\pi_1, \pi_2$, we solve
$$\pi_1 = 0.5\pi_1 + 0.2\pi_2, \qquad 1 = \pi_1 + \pi_2.$$
We get $\pi_1 = \frac{2}{7}$, $\pi_2 = \frac{5}{7}$. So the benefit premium is
$$\frac{2}{7}(1000)(1.1)^{-1} + \frac{2}{7}(1000)(1.1)^{-2} + \frac{2}{7}(1000)(1.1)^{-3} = 710.53.$$
The answer is (D).

Problem 4.12 (# 7, May 2001). A coach can give two types of training, light or heavy, to his sports team before a game. If the team wins the prior game, the next training is equally

likely to be light or heavy. But if the team loses the prior game, the next training is always heavy. The probability that the team will win the game is 0.4 after light training and 0.8 after heavy training. Calculate the long-run proportion of time that the coach will give heavy training to the team.
(A) 0.61 (B) 0.64 (C) 0.67 (D) 0.70 (E) 0.73

Solution: We have the following possibilities:

Option                  Probability
light, win, light       (0.4)(0.5) = 0.2
light, win, heavy       (0.4)(0.5) = 0.2
light, lose, heavy      (0.6)(1) = 0.6
heavy, win, light       (0.8)(0.5) = 0.4
heavy, win, heavy       (0.8)(0.5) = 0.4
heavy, lose, heavy      (0.2)(1) = 0.2

We have a Markov chain with states $E = \{1, 2\}$, where 1 = {light} and 2 = {heavy}. The transition matrix is
$$P = \begin{pmatrix} 0.2 & 0.8 \\ 0.4 & 0.6 \end{pmatrix}.$$
To find the limiting probabilities $\pi_1, \pi_2$, we solve
$$\pi_1 = 0.2\pi_1 + 0.4\pi_2, \qquad \pi_1 + \pi_2 = 1.$$
We get $\pi_1 = \frac{1}{3}$, $\pi_2 = \frac{2}{3}$. The answer is (C).

Problem 4.13 (November 2001). A taxi driver provides service in city R and city S only. If the taxi driver is in city R, the probability that he has to drive passengers to city S is 0.8. If he is in city S, the probability that he has to drive passengers to city R is 0.3. The expected profit for each trip is as follows: a trip within city R: 1.00; a trip within city S: 1.20; a trip between city R and city S: 2.00. Calculate the long-run expected profit per trip for this taxi driver.
(A) 1.44 (B) 1.54 (C) 1.58 (D) 1.70 (E) 1.80

Solution: We have a Markov chain with states $E = \{R, S\}$, the state being the city where the taxi currently is. The transition matrix is
$$P = \begin{pmatrix} 0.2 & 0.8 \\ 0.3 & 0.7 \end{pmatrix}.$$
To find the limiting probabilities $\pi_1, \pi_2$, we solve
$$\pi_1 + \pi_2 = 1, \qquad \pi_1 = 0.2\pi_1 + 0.3\pi_2, \qquad \pi_2 = 0.8\pi_1 + 0.7\pi_2.$$

We get $\pi_1 = \frac{3}{11}$, $\pi_2 = \frac{8}{11}$. The long-run expected profit per trip is
$$\frac{3}{11}(0.2)(1.00) + \frac{3}{11}(0.8)(2.00) + \frac{8}{11}(0.3)(2.00) + \frac{8}{11}(0.7)(1.20) = 1.54.$$
The answer is (B).

Problem 4.14 (# 9, November 2001). Homerecker Insurance Company classifies its insureds based on each insured's credit rating, as one of Preferred, Standard or Poor. Individual transitions between classes are modeled as a discrete Markov process with a given transition matrix over the states {Preferred, Standard, Poor}. Calculate the percentage of insureds in the Preferred class in the long run.
(A) 33% (B) 50% (C) 69% (D) 75% (E) 92%

Solution: We have a Markov chain with states $E = \{\text{Preferred}, \text{Standard}, \text{Poor}\}$. Solving the stationary equations $\pi = \pi P$ together with $\pi_1 + \pi_2 + \pi_3 = 1$, we get $\pi_1 = \frac{75}{108}$, $\pi_2 = \frac{25}{108}$, $\pi_3 = \frac{8}{108}$. The percentage of insureds in the Preferred class in the long run is $\pi_1 = \frac{75}{108} = 69.44\%$. The answer is (C).

Example 4.8. Let $X$ be an irreducible aperiodic Markov chain with $m < \infty$ states, and suppose its transition matrix $P$ is doubly stochastic (each column, as well as each row, adds up to one). Show that $\pi(i) = \frac{1}{m}$, $i \in E$, is the limiting distribution.

4.6 Hitting Probabilities

Given a set $A \subset E$, let
$$h_i = P((X_n)_{n \ge 0} \text{ is ever in } A \mid X_0 = i);$$
$h_i$ is the probability that the process hits $A$ given that it starts at $i$.

Theorem 4.10. $(h_0, h_1, \dots)$ is the minimal nonnegative solution to the system of linear equations
$$h_i = 1 \text{ for } i \in A, \qquad h_i = \sum_{j=0}^{\infty} P_{ij} h_j \text{ for } i \notin A.$$

(Minimality means that if $(x_0, x_1, \dots)$ is another nonnegative solution, then $x_i \ge h_i$ for each $i$.)

Given a set $A \subset E$, let $H = \inf\{n \ge 0 : X_n \in A\}$; $H$ is the hitting time of $A$, i.e. the first time the Markov chain is in $A$. We have that $h_i = P(H < \infty \mid X_0 = i)$. Let
$$k_i = E[H \mid X_0 = i] = \sum_{n=1}^{\infty} n P(H = n \mid X_0 = i) + \infty \cdot P(H = \infty \mid X_0 = i);$$
$k_i$ is the mean hitting time of $A$.

Theorem 4.11. $(k_0, k_1, \dots)$ is the minimal nonnegative solution to the system of linear equations
$$k_i = 0 \text{ for } i \in A, \qquad k_i = 1 + \sum_{j \notin A} P_{ij} k_j \text{ for } i \notin A.$$

We may also consider hitting times in which a hit only counts at times greater than or equal to 1. Given a set $A \subset E$, let $\tilde{H} = \inf\{n \ge 1 : X_n \in A\}$ and let
$$\tilde{k}_i = E[\tilde{H} \mid X_0 = i] = \sum_{n=1}^{\infty} n P(\tilde{H} = n \mid X_0 = i) + \infty \cdot P(\tilde{H} = \infty \mid X_0 = i).$$
We can find $\tilde{k}_i$ using the formula
$$\tilde{k}_i = \sum_{j \in A} P_{ij} + \sum_{j \notin A} P_{ij} (1 + k_j),$$
where $k_j = E[H \mid X_0 = j]$ and $H = \inf\{n \ge 0 : X_n \in A\}$. If the Markov chain is irreducible and ergodic and $A = \{i\}$, then $\tilde{k}_i = \frac{1}{\pi_i}$.

Exercise 4.32. For the random walk, i.e. the Markov chain with states $E = \{0, \pm 1, \pm 2, \dots\}$, the transition probabilities are $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$. The Markov chain is irreducible. If $p \ne 1/2$, the chain is transient. If $p = 1/2$, the chain is null recurrent. Using the previous theorem, it is possible to prove that for $p = 1/2$ the mean hitting time of $\{0\}$ is $\infty$. Note that there is no finite solution to
$$k_0 = 0, \qquad k_i = 1 + \sum_{j \ne 0} P_{ij} k_j \text{ for } i \ne 0.$$
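For a finite chain, the two systems of Theorems 4.10 and 4.11 are again linear solves. The helper below is our own sketch (see the caveat in the docstring); it is checked on the fair gambler's ruin, where the mean absorption time is $k_i = i(N - i)$.

```python
import numpy as np

def hitting(P, A):
    """Hitting probabilities h and mean hitting times k of the set A,
    from the linear systems of Theorems 4.10-4.11. A sketch: it assumes
    I - Q is invertible (e.g. no absorbing states outside A); in general
    the *minimal* nonnegative solution is required."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    A = sorted(set(A))
    B = [i for i in range(n) if i not in A]
    Q = P[np.ix_(B, B)]
    I = np.eye(len(B))
    h, k = np.ones(n), np.zeros(n)
    h[B] = np.linalg.solve(I - Q, P[np.ix_(B, A)].sum(axis=1))
    k[B] = np.linalg.solve(I - Q, np.ones(len(B)))
    return h, k

# Fair gambler's ruin on {0,...,10}, absorbed at 0 or 10.
N = 10
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5
h, k = hitting(P, {0, 10})
print(k[1:N])  # [9. 16. 21. 24. 25. 24. 21. 16. 9.] = i*(10 - i)
```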

Exercise 4.33. For the random walk with a reflecting barrier, i.e. the Markov chain with states $E = \{0, 1, 2, \dots\}$ and transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$ for $i \ge 1$, and $P_{0,1} = 1$: the Markov chain is irreducible, and for $A = \{0\}$ and $i > 0$,
$$h_i = (q/p)^i \text{ if } p > \tfrac12, \qquad h_i = 1 \text{ if } p \le \tfrac12.$$
It follows from this that, for the random walk on all the integers,
$$f(0,0) = P((X_n)_{n \ge 1} \text{ ever returns to } 0 \mid X_0 = 0) = 2(1 - p) \text{ if } p > \tfrac12, \qquad = 2p \text{ if } p \le \tfrac12.$$

Exercise 4.34. A mouse is in a 2 x 2 maze. From a room, the mouse moves to any available adjacent room (up, down, left or right) at random. There is a cat in room 4; if the mouse gets to room 4, it stays there forever. Find the average life of the mouse, assuming that it starts at room $i$.

1  2
3  4 (CAT)

Solution: The states are $E = \{1, 2, 3, 4\}$. The transition matrix is
$$P = \begin{pmatrix} 0 & 1/2 & 1/2 & 0 \\ 1/2 & 0 & 0 & 1/2 \\ 1/2 & 0 & 0 & 1/2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
For $A = \{4\}$, we need to find the $k_i$. We set up the equations
$$k_4 = 0, \qquad k_1 = 1 + \tfrac{1}{2}k_2 + \tfrac{1}{2}k_3, \qquad k_2 = 1 + \tfrac{1}{2}k_1, \qquad k_3 = 1 + \tfrac{1}{2}k_1.$$
The solutions are $k_2 = k_3 = 3$ and $k_1 = 4$.

Exercise 4.35. A mouse is in a 2 x 3 maze. From a room, the mouse moves to any available adjacent room (up, down, left or right) at random. Room 6 is full of cheese and there is a cat in room 3; if the mouse gets to room 3 or room 6, it stays there forever. Find the probability that the mouse ends up in room 6, assuming that it starts at room $i$.

1  2  3 (CAT)
4  5  6 (CHEESE)

Solution: The states are $E = \{1, 2, 3, 4, 5, 6\}$. The transition matrix is
$$P = \begin{pmatrix} 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ 1/3 & 0 & 1/3 & 0 & 1/3 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 1/2 & 0 & 0 & 0 & 1/2 & 0 \\ 0 & 1/3 & 0 & 1/3 & 0 & 1/3 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
For $A = \{6\}$, we need to find the $h_i$. We set up the equations
$$h_6 = 1, \quad h_3 = 0, \quad h_1 = \tfrac{1}{2}h_2 + \tfrac{1}{2}h_4, \quad h_2 = \tfrac{1}{3}h_1 + \tfrac{1}{3}h_5, \quad h_4 = \tfrac{1}{2}h_1 + \tfrac{1}{2}h_5, \quad h_5 = \tfrac{1}{3}h_2 + \tfrac{1}{3}h_4 + \tfrac{1}{3}.$$
The solutions are $h_1 = \frac{5}{11}$, $h_2 = \frac{4}{11}$, $h_4 = \frac{6}{11}$, $h_5 = \frac{7}{11}$.

Exercise 4.36. A mouse is in a 3 x 3 maze. From a room, the mouse moves to any available adjacent room (up, down, left or right) at random. Room 9 is full of cheese and there is a cat in room 3; if the mouse gets to room 3 or room 9, it stays there forever. Find the probability that the mouse ends up in room 3, assuming that it starts at room $i$. Hint: by symmetry, $h_4 = h_5 = h_6 = \frac{1}{2}$.

1  2  3 (CAT)
4  5  6
7  8  9 (CHEESE)

Exercise 4.37. A student goes to an Atlantic City casino during spring break to play at a roulette table. The student begins with $50 and wants to simply win $150 (or go broke) and go home. The student bets on red each play; in this case, the probability of winning her bet (and doubling her ante) at each play is 18/38. To achieve her goal of reaching a total of $200 of wealth quickly, the student bets her entire fortune at every play until she gets to $200. (a) What is the probability the student will go home a winner? (b) How many plays will be needed, on average, to reach a closure at the roulette table?

Solution: $E = \{0, 50, 100, 200\}$ and
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 10/19 & 0 & 9/19 & 0 \\ 10/19 & 0 & 0 & 9/19 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
(a) With $A = \{200\}$, we need to find $h_{50}$. We have $h_0 = 0$, $h_{200} = 1$, $h_{50} = \frac{9}{19}h_{100}$ and $h_{100} = \frac{9}{19}h_{200}$.

So $h_{50} = \left(\frac{9}{19}\right)^2 = 0.224$.
(b) With $A = \{0, 200\}$, we need to find $k_{50}$. The equations to solve are $k_0 = 0$, $k_{200} = 0$, $k_{50} = 1 + \frac{9}{19}k_{100}$ and $k_{100} = 1$. So $k_{50} = \frac{28}{19} = 1.47$.

Exercise 4.38. In a certain state, a voter is allowed to change his or her party affiliation (for primary elections) only by abstaining from the next primary election. Experience shows that, in the next primary, a former Democrat will abstain 1/3 of the time, whereas a Republican will abstain 1/5 of the time. A voter who abstained in a primary election is equally likely to vote for either party in the next election (regardless of past party affiliation).
(i) Model this process as a Markov chain by writing and labeling the transition probability matrix.
(ii) Find the percentage of voters who vote Democrat in the primary elections. What percentage vote Republican? Abstain?
For the remaining questions, suppose Eileen Tudor-Wright is a Republican.
(iii) What is the probability that, three elections from now, Eileen votes Republican?
(iv) How many primaries, on average, does it take before Eileen will again vote Republican? Until she abstains?

Solution: (i) We have a Markov chain with state space $E = \{1, 2, 3\}$, where 1, 2, 3 correspond to Democrat, Republican and Independent, respectively. The transition matrix is
$$P = \begin{pmatrix} 2/3 & 0 & 1/3 \\ 0 & 4/5 & 1/5 \\ 1/2 & 1/2 & 0 \end{pmatrix}.$$
(ii) We find the long-run probabilities, solving
$$\pi_1 = \tfrac{2}{3}\pi_1 + \tfrac{1}{2}\pi_3, \quad \pi_2 = \tfrac{4}{5}\pi_2 + \tfrac{1}{2}\pi_3, \quad \pi_3 = \tfrac{1}{3}\pi_1 + \tfrac{1}{5}\pi_2, \quad 1 = \pi_1 + \pi_2 + \pi_3.$$
We get $\pi_1 = \frac{3}{10}$, $\pi_2 = \frac{1}{2}$, $\pi_3 = \frac{1}{5}$.
(iii) We need the second entry of $(0, 1, 0)P^3$. We have
$$(0, 1, 0)P = \big(0, \tfrac{4}{5}, \tfrac{1}{5}\big),$$
$$(0, 1, 0)P^2 = \big(0, \tfrac{4}{5}, \tfrac{1}{5}\big)P = \big(\tfrac{1}{10}, \tfrac{37}{50}, \tfrac{4}{25}\big),$$
$$(0, 1, 0)P^3 = \big(\tfrac{1}{10}, \tfrac{37}{50}, \tfrac{4}{25}\big)P = \big(\tfrac{11}{75}, \tfrac{84}{125}, \tfrac{68}{375}\big).$$

The probability that, three elections from now, Eileen votes Republican is $\frac{84}{125}$.
(iv) We take $A = \{2\}$ and need $\tilde{k}_2$. First we find $k_1$ and $k_3$. We have
$$k_2 = 0, \qquad k_1 = 1 + \tfrac{2}{3}k_1 + \tfrac{1}{3}k_3, \qquad k_3 = 1 + \tfrac{1}{2}k_1.$$
We get $k_1 = 8$ and $k_3 = 5$. Hence the average number of primaries until she votes Republican again is
$$\tilde{k}_2 = \tfrac{4}{5}(1) + \tfrac{1}{5}(1 + 5) = 2.$$
For the time until she abstains, take $A = \{3\}$: then $k_2 = 1 + \tfrac{4}{5}k_2$, so the average number of primaries until Eileen abstains is $k_2 = 5$.

Using Theorem 4.10, it is possible to prove the following theorems.

Theorem 4.13 (Gambler's ruin probability). Consider the Markov chain with states $E = \{0, 1, 2, \dots, N\}$ and one-step transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$ for $1 \le i \le N - 1$, and $P_{0,0} = P_{N,N} = 1$. Let $h_i$ be the probability that the Markov chain hits 0. Then
$$h_i = \frac{(q/p)^i - (q/p)^N}{1 - (q/p)^N} \text{ if } p \ne \tfrac12, \qquad h_i = 1 - \frac{i}{N} \text{ if } p = \tfrac12.$$

Theorem 4.14 (Gambler's ruin against an adversary with an infinite amount of money). Consider the Markov chain with states $E = \{0, 1, 2, \dots\}$ and one-step transition probabilities $P_{i,i+1} = p$ and $P_{i,i-1} = 1 - p$ for $i \ge 1$, and $P_{0,0} = 1$. Let $h_k$ be the probability that the Markov chain never hits 0. Then
$$h_k = 1 - (q/p)^k \text{ if } p > \tfrac12, \qquad h_k = 0 \text{ if } p \le \tfrac12.$$

Exercise 4.39. A gambler has $1 and he needs $10. He plays successive games until he is broke or he gets $10. In every game he stakes $1; the probability that he wins is 1/2 and the probability that he loses is 1/2. Find the probability that the gambler gets $10. Find the average amount of time that the game lasts.

Solution: The states are $E = \{0, 1, 2, \dots, 10\}$. For $A = \{0, 10\}$, we need to find the $k_i$. We set up the equations
$$k_0 = 0, \quad k_{10} = 0, \quad k_i = 1 + \tfrac{1}{2}k_{i-1} + \tfrac{1}{2}k_{i+1} \text{ for } 1 \le i \le 9.$$
The solution is $k_i = i(10 - i)$. By Theorem 4.13, the probability that the gambler gets $10 is $1/10$, and the average duration of the game is $k_1 = 9$.
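The computations of Exercise 4.38 can be replayed numerically. A numpy sketch of ours, using the formula for $\tilde{k}_i$ from this section:

```python
import numpy as np

# Voter chain of Exercise 4.38: 1 = Democrat, 2 = Republican, 3 = Independent.
P = np.array([[2/3, 0, 1/3],
              [0, 4/5, 1/5],
              [1/2, 1/2, 0]])

e2 = np.array([0.0, 1.0, 0.0])
print((e2 @ np.linalg.matrix_power(P, 3))[1])   # 0.672 = 84/125

# Mean hitting times of A = {Republican} from the other states:
# solve (I - Q)k = 1 on B = {Democrat, Independent}.
B = [0, 2]
k = np.linalg.solve(np.eye(2) - P[np.ix_(B, B)], np.ones(2))  # k = (8, 5)

# Mean return time to Republican via the k-tilde formula:
k_tilde = P[1, 1] + P[1, 0] * (1 + k[0]) + P[1, 2] * (1 + k[1])
print(k_tilde)   # 2.0, consistent with 1/pi_2 = 2
```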


More information

Gambling and Data Compression

Gambling and Data Compression Gambling and Data Compression Gambling. Horse Race Definition The wealth relative S(X) = b(x)o(x) is the factor by which the gambler s wealth grows if horse X wins the race, where b(x) is the fraction

More information

Phantom bonuses. November 22, 2004

Phantom bonuses. November 22, 2004 Phantom bonuses November 22, 2004 1 Introduction The game is defined by a list of payouts u 1, u 2,..., u l, and a list of probabilities p 1, p 2,..., p l, p i = 1. We allow u i to be rational numbers,

More information

Hacking-proofness and Stability in a Model of Information Security Networks

Hacking-proofness and Stability in a Model of Information Security Networks Hacking-proofness and Stability in a Model of Information Security Networks Sunghoon Hong Preliminary draft, not for citation. March 1, 2008 Abstract We introduce a model of information security networks.

More information

Feb 7 Homework Solutions Math 151, Winter 2012. Chapter 4 Problems (pages 172-179)

Feb 7 Homework Solutions Math 151, Winter 2012. Chapter 4 Problems (pages 172-179) Feb 7 Homework Solutions Math 151, Winter 2012 Chapter Problems (pages 172-179) Problem 3 Three dice are rolled. By assuming that each of the 6 3 216 possible outcomes is equally likely, find the probabilities

More information

Math 370/408, Spring 2008 Prof. A.J. Hildebrand. Actuarial Exam Practice Problem Set 2 Solutions

Math 370/408, Spring 2008 Prof. A.J. Hildebrand. Actuarial Exam Practice Problem Set 2 Solutions Math 70/408, Spring 2008 Prof. A.J. Hildebrand Actuarial Exam Practice Problem Set 2 Solutions About this problem set: These are problems from Course /P actuarial exams that I have collected over the years,

More information

How To Find Out How Much Money You Get From A Car Insurance Claim

How To Find Out How Much Money You Get From A Car Insurance Claim Chapter 11. Poisson processes. Section 11.4. Superposition and decomposition of a Poisson process. Extract from: Arcones Fall 2009 Edition, available at http://www.actexmadriver.com/ 1/18 Superposition

More information

Midterm Exam #1 Instructions:

Midterm Exam #1 Instructions: Public Affairs 818 Professor: Geoffrey L. Wallace October 9 th, 008 Midterm Exam #1 Instructions: You have 10 minutes to complete the examination and there are 6 questions worth a total of 10 points. The

More information

Midterm Exam #1 Instructions:

Midterm Exam #1 Instructions: Public Affairs 818 Professor: Geoffrey L. Wallace October 9 th, 008 Midterm Exam #1 Instructions: You have 10 minutes to complete the examination and there are 6 questions worth a total of 10 points. The

More information

LOOKING FOR A GOOD TIME TO BET

LOOKING FOR A GOOD TIME TO BET LOOKING FOR A GOOD TIME TO BET LAURENT SERLET Abstract. Suppose that the cards of a well shuffled deck of cards are turned up one after another. At any time-but once only- you may bet that the next card

More information

arxiv:1112.0829v1 [math.pr] 5 Dec 2011

arxiv:1112.0829v1 [math.pr] 5 Dec 2011 How Not to Win a Million Dollars: A Counterexample to a Conjecture of L. Breiman Thomas P. Hayes arxiv:1112.0829v1 [math.pr] 5 Dec 2011 Abstract Consider a gambling game in which we are allowed to repeatedly

More information

Joint Exam 1/P Sample Exam 1

Joint Exam 1/P Sample Exam 1 Joint Exam 1/P Sample Exam 1 Take this practice exam under strict exam conditions: Set a timer for 3 hours; Do not stop the timer for restroom breaks; Do not look at your notes. If you believe a question

More information

Homework set 4 - Solutions

Homework set 4 - Solutions Homework set 4 - Solutions Math 495 Renato Feres Problems R for continuous time Markov chains The sequence of random variables of a Markov chain may represent the states of a random system recorded at

More information

M/M/1 and M/M/m Queueing Systems

M/M/1 and M/M/m Queueing Systems M/M/ and M/M/m Queueing Systems M. Veeraraghavan; March 20, 2004. Preliminaries. Kendall s notation: G/G/n/k queue G: General - can be any distribution. First letter: Arrival process; M: memoryless - exponential

More information

MOVIES, GAMBLING, SECRET CODES, JUST MATRIX MAGIC

MOVIES, GAMBLING, SECRET CODES, JUST MATRIX MAGIC MOVIES, GAMBLING, SECRET CODES, JUST MATRIX MAGIC DR. LESZEK GAWARECKI 1. The Cartesian Coordinate System In the Cartesian system points are defined by giving their coordinates. Plot the following points:

More information

Ch5: Discrete Probability Distributions Section 5-1: Probability Distribution

Ch5: Discrete Probability Distributions Section 5-1: Probability Distribution Recall: Ch5: Discrete Probability Distributions Section 5-1: Probability Distribution A variable is a characteristic or attribute that can assume different values. o Various letters of the alphabet (e.g.

More information

V. RANDOM VARIABLES, PROBABILITY DISTRIBUTIONS, EXPECTED VALUE

V. RANDOM VARIABLES, PROBABILITY DISTRIBUTIONS, EXPECTED VALUE V. RANDOM VARIABLES, PROBABILITY DISTRIBUTIONS, EXPETED VALUE A game of chance featured at an amusement park is played as follows: You pay $ to play. A penny and a nickel are flipped. You win $ if either

More information

Essentials of Stochastic Processes

Essentials of Stochastic Processes i Essentials of Stochastic Processes Rick Durrett 70 60 50 10 Sep 10 Jun 10 May at expiry 40 30 20 10 0 500 520 540 560 580 600 620 640 660 680 700 Almost Final Version of the 2nd Edition, December, 2011

More information

Solved Problems. Chapter 14. 14.1 Probability review

Solved Problems. Chapter 14. 14.1 Probability review Chapter 4 Solved Problems 4. Probability review Problem 4.. Let X and Y be two N 0 -valued random variables such that X = Y + Z, where Z is a Bernoulli random variable with parameter p (0, ), independent

More information

THE ROULETTE BIAS SYSTEM

THE ROULETTE BIAS SYSTEM 1 THE ROULETTE BIAS SYSTEM Please note that all information is provided as is and no guarantees are given whatsoever as to the amount of profit you will make if you use this system. Neither the seller

More information

1 Interest rates, and risk-free investments

1 Interest rates, and risk-free investments Interest rates, and risk-free investments Copyright c 2005 by Karl Sigman. Interest and compounded interest Suppose that you place x 0 ($) in an account that offers a fixed (never to change over time)

More information

Probability Models.S1 Introduction to Probability

Probability Models.S1 Introduction to Probability Probability Models.S1 Introduction to Probability Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard The stochastic chapters of this book involve random variability. Decisions are

More information

1. A survey of a group s viewing habits over the last year revealed the following

1. A survey of a group s viewing habits over the last year revealed the following 1. A survey of a group s viewing habits over the last year revealed the following information: (i) 8% watched gymnastics (ii) 9% watched baseball (iii) 19% watched soccer (iv) 14% watched gymnastics and

More information

X: 0 1 2 3 4 5 6 7 8 9 Probability: 0.061 0.154 0.228 0.229 0.173 0.094 0.041 0.015 0.004 0.001

X: 0 1 2 3 4 5 6 7 8 9 Probability: 0.061 0.154 0.228 0.229 0.173 0.094 0.041 0.015 0.004 0.001 Tuesday, January 17: 6.1 Discrete Random Variables Read 341 344 What is a random variable? Give some examples. What is a probability distribution? What is a discrete random variable? Give some examples.

More information

Betting systems: how not to lose your money gambling

Betting systems: how not to lose your money gambling Betting systems: how not to lose your money gambling G. Berkolaiko Department of Mathematics Texas A&M University 28 April 2007 / Mini Fair, Math Awareness Month 2007 Gambling and Games of Chance Simple

More information

Binomial lattice model for stock prices

Binomial lattice model for stock prices Copyright c 2007 by Karl Sigman Binomial lattice model for stock prices Here we model the price of a stock in discrete time by a Markov chain of the recursive form S n+ S n Y n+, n 0, where the {Y i }

More information

Goal Problems in Gambling and Game Theory. Bill Sudderth. School of Statistics University of Minnesota

Goal Problems in Gambling and Game Theory. Bill Sudderth. School of Statistics University of Minnesota Goal Problems in Gambling and Game Theory Bill Sudderth School of Statistics University of Minnesota 1 Three problems Maximizing the probability of reaching a goal. Maximizing the probability of reaching

More information

Discrete Math in Computer Science Homework 7 Solutions (Max Points: 80)

Discrete Math in Computer Science Homework 7 Solutions (Max Points: 80) Discrete Math in Computer Science Homework 7 Solutions (Max Points: 80) CS 30, Winter 2016 by Prasad Jayanti 1. (10 points) Here is the famous Monty Hall Puzzle. Suppose you are on a game show, and you

More information

36 Odds, Expected Value, and Conditional Probability

36 Odds, Expected Value, and Conditional Probability 36 Odds, Expected Value, and Conditional Probability What s the difference between probabilities and odds? To answer this question, let s consider a game that involves rolling a die. If one gets the face

More information

Worksheet for Teaching Module Probability (Lesson 1)

Worksheet for Teaching Module Probability (Lesson 1) Worksheet for Teaching Module Probability (Lesson 1) Topic: Basic Concepts and Definitions Equipment needed for each student 1 computer with internet connection Introduction In the regular lectures in

More information

Random variables, probability distributions, binomial random variable

Random variables, probability distributions, binomial random variable Week 4 lecture notes. WEEK 4 page 1 Random variables, probability distributions, binomial random variable Eample 1 : Consider the eperiment of flipping a fair coin three times. The number of tails that

More information

WHERE DOES THE 10% CONDITION COME FROM?

WHERE DOES THE 10% CONDITION COME FROM? 1 WHERE DOES THE 10% CONDITION COME FROM? The text has mentioned The 10% Condition (at least) twice so far: p. 407 Bernoulli trials must be independent. If that assumption is violated, it is still okay

More information

( ) is proportional to ( 10 + x)!2. Calculate the

( ) is proportional to ( 10 + x)!2. Calculate the PRACTICE EXAMINATION NUMBER 6. An insurance company eamines its pool of auto insurance customers and gathers the following information: i) All customers insure at least one car. ii) 64 of the customers

More information

Markov Chains. Chapter 11. 11.1 Introduction. Specifying a Markov Chain

Markov Chains. Chapter 11. 11.1 Introduction. Specifying a Markov Chain Chapter 11 Markov Chains 11.1 Introduction Most of our study of probability has dealt with independent trials processes. These processes are the basis of classical probability theory and much of statistics.

More information

EXAM. Exam #3. Math 1430, Spring 2002. April 21, 2001 ANSWERS

EXAM. Exam #3. Math 1430, Spring 2002. April 21, 2001 ANSWERS EXAM Exam #3 Math 1430, Spring 2002 April 21, 2001 ANSWERS i 60 pts. Problem 1. A city has two newspapers, the Gazette and the Journal. In a survey of 1, 200 residents, 500 read the Journal, 700 read the

More information

SOCIETY OF ACTUARIES/CASUALTY ACTUARIAL SOCIETY EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS

SOCIETY OF ACTUARIES/CASUALTY ACTUARIAL SOCIETY EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS SOCIETY OF ACTUARIES/CASUALTY ACTUARIAL SOCIETY EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS Copyright 007 by the Society of Actuaries and the Casualty Actuarial Society Some of the questions in this study

More information

Math 431 An Introduction to Probability. Final Exam Solutions

Math 431 An Introduction to Probability. Final Exam Solutions Math 43 An Introduction to Probability Final Eam Solutions. A continuous random variable X has cdf a for 0, F () = for 0 <

More information

Random variables P(X = 3) = P(X = 3) = 1 8, P(X = 1) = P(X = 1) = 3 8.

Random variables P(X = 3) = P(X = 3) = 1 8, P(X = 1) = P(X = 1) = 3 8. Random variables Remark on Notations 1. When X is a number chosen uniformly from a data set, What I call P(X = k) is called Freq[k, X] in the courseware. 2. When X is a random variable, what I call F ()

More information

SOCIETY OF ACTUARIES EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS

SOCIETY OF ACTUARIES EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS SOCIETY OF ACTUARIES EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS Copyright 015 by the Society of Actuaries Some of the questions in this study note are taken from past examinations. Some of the questions

More information

Exam Introduction Mathematical Finance and Insurance

Exam Introduction Mathematical Finance and Insurance Exam Introduction Mathematical Finance and Insurance Date: January 8, 2013. Duration: 3 hours. This is a closed-book exam. The exam does not use scrap cards. Simple calculators are allowed. The questions

More information

Math/Stats 425 Introduction to Probability. 1. Uncertainty and the axioms of probability

Math/Stats 425 Introduction to Probability. 1. Uncertainty and the axioms of probability Math/Stats 425 Introduction to Probability 1. Uncertainty and the axioms of probability Processes in the real world are random if outcomes cannot be predicted with certainty. Example: coin tossing, stock

More information

Week 5: Expected value and Betting systems

Week 5: Expected value and Betting systems Week 5: Expected value and Betting systems Random variable A random variable represents a measurement in a random experiment. We usually denote random variable with capital letter X, Y,. If S is the sample

More information

Probability Generating Functions

Probability Generating Functions page 39 Chapter 3 Probability Generating Functions 3 Preamble: Generating Functions Generating functions are widely used in mathematics, and play an important role in probability theory Consider a sequence

More information

ST 371 (IV): Discrete Random Variables

ST 371 (IV): Discrete Random Variables ST 371 (IV): Discrete Random Variables 1 Random Variables A random variable (rv) is a function that is defined on the sample space of the experiment and that assigns a numerical variable to each possible

More information

Ch. 13.3: More about Probability

Ch. 13.3: More about Probability Ch. 13.3: More about Probability Complementary Probabilities Given any event, E, of some sample space, U, of a random experiment, we can always talk about the complement, E, of that event: this is the

More information

Solution. Let us write s for the policy year. Then the mortality rate during year s is q 30+s 1. q 30+s 1

Solution. Let us write s for the policy year. Then the mortality rate during year s is q 30+s 1. q 30+s 1 Solutions to the May 213 Course MLC Examination by Krzysztof Ostaszewski, http://wwwkrzysionet, krzysio@krzysionet Copyright 213 by Krzysztof Ostaszewski All rights reserved No reproduction in any form

More information

Chapter 2 Discrete-Time Markov Models

Chapter 2 Discrete-Time Markov Models Chapter Discrete-Time Markov Models.1 Discrete-Time Markov Chains Consider a system that is observed at times 0;1;;:::: Let X n be the state of the system at time n for n D 0;1;;:::: Suppose we are currently

More information

SOCIETY OF ACTUARIES/CASUALTY ACTUARIAL SOCIETY EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS

SOCIETY OF ACTUARIES/CASUALTY ACTUARIAL SOCIETY EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS SOCIETY OF ACTUARIES/CASUALTY ACTUARIAL SOCIETY EXAM P PROBABILITY EXAM P SAMPLE QUESTIONS Copyright 005 by the Society of Actuaries and the Casualty Actuarial Society Some of the questions in this study

More information

Statistics and Random Variables. Math 425 Introduction to Probability Lecture 14. Finite valued Random Variables. Expectation defined

Statistics and Random Variables. Math 425 Introduction to Probability Lecture 14. Finite valued Random Variables. Expectation defined Expectation Statistics and Random Variables Math 425 Introduction to Probability Lecture 4 Kenneth Harris kaharri@umich.edu Department of Mathematics University of Michigan February 9, 2009 When a large

More information

Lotto Master Formula (v1.3) The Formula Used By Lottery Winners

Lotto Master Formula (v1.3) The Formula Used By Lottery Winners Lotto Master Formula (v.) The Formula Used By Lottery Winners I. Introduction This book is designed to provide you with all of the knowledge that you will need to be a consistent winner in your local lottery

More information

National Sun Yat-Sen University CSE Course: Information Theory. Gambling And Entropy

National Sun Yat-Sen University CSE Course: Information Theory. Gambling And Entropy Gambling And Entropy 1 Outline There is a strong relationship between the growth rate of investment in a horse race and the entropy of the horse race. The value of side information is related to the mutual

More information

Further Topics in Actuarial Mathematics: Premium Reserves. Matthew Mikola

Further Topics in Actuarial Mathematics: Premium Reserves. Matthew Mikola Further Topics in Actuarial Mathematics: Premium Reserves Matthew Mikola April 26, 2007 Contents 1 Introduction 1 1.1 Expected Loss...................................... 2 1.2 An Overview of the Project...............................

More information

Elementary Statistics and Inference. Elementary Statistics and Inference. 17 Expected Value and Standard Error. 22S:025 or 7P:025.

Elementary Statistics and Inference. Elementary Statistics and Inference. 17 Expected Value and Standard Error. 22S:025 or 7P:025. Elementary Statistics and Inference S:05 or 7P:05 Lecture Elementary Statistics and Inference S:05 or 7P:05 Chapter 7 A. The Expected Value In a chance process (probability experiment) the outcomes of

More information

THE WINNING ROULETTE SYSTEM by http://www.webgoldminer.com/

THE WINNING ROULETTE SYSTEM by http://www.webgoldminer.com/ THE WINNING ROULETTE SYSTEM by http://www.webgoldminer.com/ Is it possible to earn money from online gambling? Are there any 100% sure winning roulette systems? Are there actually people who make a living

More information

Texas Hold em. From highest to lowest, the possible five card hands in poker are ranked as follows:

Texas Hold em. From highest to lowest, the possible five card hands in poker are ranked as follows: Texas Hold em Poker is one of the most popular card games, especially among betting games. While poker is played in a multitude of variations, Texas Hold em is the version played most often at casinos

More information

Probabilities. Probability of a event. From Random Variables to Events. From Random Variables to Events. Probability Theory I

Probabilities. Probability of a event. From Random Variables to Events. From Random Variables to Events. Probability Theory I Victor Adamchi Danny Sleator Great Theoretical Ideas In Computer Science Probability Theory I CS 5-25 Spring 200 Lecture Feb. 6, 200 Carnegie Mellon University We will consider chance experiments with

More information

Chapter 5. Discrete Probability Distributions

Chapter 5. Discrete Probability Distributions Chapter 5. Discrete Probability Distributions Chapter Problem: Did Mendel s result from plant hybridization experiments contradicts his theory? 1. Mendel s theory says that when there are two inheritable

More information

Matrix Calculations: Applications of Eigenvalues and Eigenvectors; Inner Products

Matrix Calculations: Applications of Eigenvalues and Eigenvectors; Inner Products Matrix Calculations: Applications of Eigenvalues and Eigenvectors; Inner Products H. Geuvers Institute for Computing and Information Sciences Intelligent Systems Version: spring 2015 H. Geuvers Version:

More information

Random Variables. Chapter 2. Random Variables 1

Random Variables. Chapter 2. Random Variables 1 Random Variables Chapter 2 Random Variables 1 Roulette and Random Variables A Roulette wheel has 38 pockets. 18 of them are red and 18 are black; these are numbered from 1 to 36. The two remaining pockets

More information

Current Accounts in Open Economies Obstfeld and Rogoff, Chapter 2

Current Accounts in Open Economies Obstfeld and Rogoff, Chapter 2 Current Accounts in Open Economies Obstfeld and Rogoff, Chapter 2 1 Consumption with many periods 1.1 Finite horizon of T Optimization problem maximize U t = u (c t ) + β (c t+1 ) + β 2 u (c t+2 ) +...

More information

Monte Carlo Methods in Finance

Monte Carlo Methods in Finance Author: Yiyang Yang Advisor: Pr. Xiaolin Li, Pr. Zari Rachev Department of Applied Mathematics and Statistics State University of New York at Stony Brook October 2, 2012 Outline Introduction 1 Introduction

More information

MONT 107N Understanding Randomness Solutions For Final Examination May 11, 2010

MONT 107N Understanding Randomness Solutions For Final Examination May 11, 2010 MONT 07N Understanding Randomness Solutions For Final Examination May, 00 Short Answer (a) (0) How are the EV and SE for the sum of n draws with replacement from a box computed? Solution: The EV is n times

More information

Definition: Suppose that two random variables, either continuous or discrete, X and Y have joint density

Definition: Suppose that two random variables, either continuous or discrete, X and Y have joint density HW MATH 461/561 Lecture Notes 15 1 Definition: Suppose that two random variables, either continuous or discrete, X and Y have joint density and marginal densities f(x, y), (x, y) Λ X,Y f X (x), x Λ X,

More information

Basic Probability Concepts

Basic Probability Concepts page 1 Chapter 1 Basic Probability Concepts 1.1 Sample and Event Spaces 1.1.1 Sample Space A probabilistic (or statistical) experiment has the following characteristics: (a) the set of all possible outcomes

More information

In the situations that we will encounter, we may generally calculate the probability of an event

In the situations that we will encounter, we may generally calculate the probability of an event What does it mean for something to be random? An event is called random if the process which produces the outcome is sufficiently complicated that we are unable to predict the precise result and are instead

More information

The Exponential Distribution

The Exponential Distribution 21 The Exponential Distribution From Discrete-Time to Continuous-Time: In Chapter 6 of the text we will be considering Markov processes in continuous time. In a sense, we already have a very good understanding

More information

ACMS 10140 Section 02 Elements of Statistics October 28, 2010 Midterm Examination II Answers

ACMS 10140 Section 02 Elements of Statistics October 28, 2010 Midterm Examination II Answers ACMS 10140 Section 02 Elements of Statistics October 28, 2010 Midterm Examination II Answers Name DO NOT remove this answer page. DO turn in the entire exam. Make sure that you have all ten (10) pages

More information

Chapter 4. Probability Distributions

Chapter 4. Probability Distributions Chapter 4 Probability Distributions Lesson 4-1/4-2 Random Variable Probability Distributions This chapter will deal the construction of probability distribution. By combining the methods of descriptive

More information

SOME ASPECTS OF GAMBLING WITH THE KELLY CRITERION. School of Mathematical Sciences. Monash University, Clayton, Victoria, Australia 3168

SOME ASPECTS OF GAMBLING WITH THE KELLY CRITERION. School of Mathematical Sciences. Monash University, Clayton, Victoria, Australia 3168 SOME ASPECTS OF GAMBLING WITH THE KELLY CRITERION Ravi PHATARFOD School of Mathematical Sciences Monash University, Clayton, Victoria, Australia 3168 In this paper we consider the problem of gambling with

More information

6.042/18.062J Mathematics for Computer Science. Expected Value I

6.042/18.062J Mathematics for Computer Science. Expected Value I 6.42/8.62J Mathematics for Computer Science Srini Devadas and Eric Lehman May 3, 25 Lecture otes Expected Value I The expectation or expected value of a random variable is a single number that tells you

More information