1 Gambler's Ruin Problem

Copyright © 2009 by Karl Sigman

Let $N \ge 2$ be an integer and let $1 \le i \le N-1$. Consider a gambler who starts with an initial fortune of $\$i$ and then on each successive gamble either wins $\$1$ or loses $\$1$, independent of the past, with probabilities $p$ and $q = 1-p$ respectively. Let $X_n$ denote the total fortune after the $n$-th gamble. The gambler's objective is to reach a total fortune of $\$N$ without first getting ruined (running out of money). If the gambler succeeds, then the gambler is said to win the game. In any case, the gambler stops playing after winning or getting ruined, whichever happens first.

$\{X_n\}$ yields a Markov chain (MC) on the state space $S = \{0, 1, \ldots, N\}$. The transition probabilities are given by $P_{i,i+1} = p$, $P_{i,i-1} = q$, $0 < i < N$, and both $0$ and $N$ are absorbing states, $P_{00} = P_{NN} = 1$. (There are three communication classes: $C_1 = \{0\}$, $C_2 = \{1, \ldots, N-1\}$, $C_3 = \{N\}$; $C_1$ and $C_3$ are recurrent whereas $C_2$ is transient.)

For example, when $N = 4$ the transition matrix is given by

\[
P = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
q & 0 & p & 0 & 0 \\
0 & q & 0 & p & 0 \\
0 & 0 & q & 0 & p \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]

While the game proceeds, this MC forms a simple random walk

\[
X_n = i + \Delta_1 + \cdots + \Delta_n, \quad n \ge 1, \quad X_0 = i,
\]

where $\{\Delta_n\}$ forms an i.i.d. sequence of r.v.s. distributed as $P(\Delta = 1) = p$, $P(\Delta = -1) = q = 1-p$, representing the earnings on the successive gambles.

Since the game stops when either $X_n = 0$ or $X_n = N$, let

\[
\tau_i = \min\{n \ge 0 : X_n \in \{0, N\} \mid X_0 = i\}
\]

denote the time at which the game stops when $X_0 = i$. If $X_{\tau_i} = N$, then the gambler wins; if $X_{\tau_i} = 0$, then the gambler is ruined.

Let $P_i(N) = P(X_{\tau_i} = N)$ denote the probability that the gambler wins when $X_0 = i$. $P_i(N)$ is the probability that the gambler, starting initially with $\$i$, reaches a total fortune of $N$ before ruin; $1 - P_i(N)$ is thus the corresponding probability of ruin. Clearly $P_0(N) = 0$ and $P_N(N) = 1$ by definition, and we next proceed to compute $P_i(N)$, $1 \le i \le N-1$.

Proposition 1.1 (Gambler's Ruin Problem)

\[
P_i(N) =
\begin{cases}
\dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & \text{if } p \ne q; \\[2mm]
\dfrac{i}{N}, & \text{if } p = q = 0.5.
\end{cases}
\tag{1}
\]
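
Proposition 1.1 is easy to sanity-check numerically. Below is a minimal Python sketch (not part of the original notes; win_probability and simulate_win are our own illustrative names) that compares the closed form (1) with a direct Monte Carlo simulation of the game.

```python
import random

def win_probability(i, N, p):
    """Closed-form P_i(N) from equation (1)."""
    q = 1.0 - p
    if p == q:
        return i / N
    r = q / p
    return (1 - r**i) / (1 - r**N)

def simulate_win(i, N, p, trials=100_000):
    """Estimate P_i(N) by playing the game to absorption many times."""
    wins = 0
    for _ in range(trials):
        x = i
        while 0 < x < N:                      # stop at the absorbing states 0 and N
            x += 1 if random.random() < p else -1
        wins += (x == N)
    return wins / trials

random.seed(0)
i, N, p = 2, 4, 0.6
print(win_probability(i, N, p))   # exact value, 9/13 ≈ 0.6923
print(simulate_win(i, N, p))      # Monte Carlo estimate, close to the exact value
```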

Proof: For our derivation we let $P_i = P_i(N)$, that is, we suppress the dependence on $N$ for ease of notation.

The key idea is to condition on the outcome of the first gamble, $\Delta_1 = 1$ or $\Delta_1 = -1$, yielding

\[
P_i = p P_{i+1} + q P_{i-1}. \tag{2}
\]

The derivation of this recursion is as follows: if $\Delta_1 = 1$, then the gambler's total fortune increases to $X_1 = i+1$ and so by the Markov property the gambler will now win with probability $P_{i+1}$. Similarly, if $\Delta_1 = -1$, then the gambler's fortune decreases to $X_1 = i-1$ and so by the Markov property the gambler will now win with probability $P_{i-1}$. The probabilities corresponding to the two outcomes are $p$ and $q$, yielding (2). Since $p + q = 1$, (2) can be re-written as $p P_i + q P_i = p P_{i+1} + q P_{i-1}$, yielding

\[
P_{i+1} - P_i = \frac{q}{p}\,(P_i - P_{i-1}).
\]

In particular, $P_2 - P_1 = (q/p)(P_1 - P_0) = (q/p)P_1$ (since $P_0 = 0$), so that $P_3 - P_2 = (q/p)(P_2 - P_1) = (q/p)^2 P_1$, and more generally

\[
P_{i+1} - P_i = \left(\frac{q}{p}\right)^{i} P_1, \quad 0 < i < N.
\]

Thus

\[
P_{i+1} - P_1 = \sum_{k=1}^{i} (P_{k+1} - P_k) = \sum_{k=1}^{i} \left(\frac{q}{p}\right)^{k} P_1,
\]

yielding

\[
P_{i+1} = P_1 + P_1 \sum_{k=1}^{i} \left(\frac{q}{p}\right)^{k} = P_1 \sum_{k=0}^{i} \left(\frac{q}{p}\right)^{k} =
\begin{cases}
P_1 \dfrac{1 - (q/p)^{i+1}}{1 - (q/p)}, & \text{if } p \ne q; \\[2mm]
P_1 (i+1), & \text{if } p = q = 0.5.
\end{cases}
\tag{3}
\]

(Here we are using the geometric series equation $\sum_{n=0}^{i} a^n = \frac{1 - a^{i+1}}{1 - a}$, valid for any number $a \ne 1$ and any integer $i \ge 1$.)

Choosing $i = N-1$ and using the fact that $P_N = 1$ yields

\[
1 = P_N =
\begin{cases}
P_1 \dfrac{1 - (q/p)^N}{1 - (q/p)}, & \text{if } p \ne q; \\[2mm]
P_1 N, & \text{if } p = q = 0.5,
\end{cases}
\]

from which we conclude that

\[
P_1 =
\begin{cases}
\dfrac{1 - (q/p)}{1 - (q/p)^N}, & \text{if } p \ne q; \\[2mm]
\dfrac{1}{N}, & \text{if } p = q = 0.5,
\end{cases}
\]
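
The telescoping construction in the proof translates directly into a few lines of code. The following sketch (again not from the notes; win_probs_from_recursion is a hypothetical helper) rebuilds $P_0, \ldots, P_N$ from the increments $(q/p)^i P_1$ and then verifies that the result satisfies the first-step recursion (2).

```python
def win_probs_from_recursion(N, p):
    """Rebuild P_0, ..., P_N exactly as in the proof: start from P_0 = 0,
    add the increments P_{i+1} - P_i = (q/p)^i * P_1, where P_1 is chosen
    so that P_N = 1 (the i = N-1 step at the end of the proof)."""
    q = 1.0 - p
    r = q / p
    P1 = 1.0 / N if p == q else (1 - r) / (1 - r**N)
    P = [0.0]
    for i in range(N):
        P.append(P[-1] + r**i * P1)
    return P

p, N = 0.6, 4
P = win_probs_from_recursion(N, p)
print([round(x, 4) for x in P])           # P_4 should come out as 1.0
# Check the first-step recursion (2): P_i = p*P_{i+1} + q*P_{i-1} for 0 < i < N
assert all(abs(P[i] - (p*P[i+1] + (1-p)*P[i-1])) < 1e-12 for i in range(1, N))
```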

thus obtaining from (3) (after algebra) the solution

\[
P_i =
\begin{cases}
\dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & \text{if } p \ne q; \\[2mm]
\dfrac{i}{N}, & \text{if } p = q = 0.5.
\end{cases}
\tag{4}
\]

1.1 Becoming infinitely rich or getting ruined

In the formula (1), it is of interest to see what happens as $N \to \infty$; denote this limit by $P_i(\infty) = \lim_{N\to\infty} P_i(N)$. This limiting quantity is the probability that the gambler, if allowed to play forever unless ruined, will in fact never get ruined and instead will obtain an infinitely large fortune.

Proposition 1.2 Define $P_i(\infty) = \lim_{N\to\infty} P_i(N)$. If $p > 0.5$, then

\[
P_i(\infty) = 1 - \left(\frac{q}{p}\right)^{i} > 0. \tag{5}
\]

If $p \le 0.5$, then

\[
P_i(\infty) = 0. \tag{6}
\]

Thus, unless the gambles are strictly better than fair ($p > 0.5$), ruin is certain.

Proof: If $p > 0.5$, then $q/p < 1$; hence in the denominator of (1), $(q/p)^N \to 0$, yielding the result. If $p < 0.5$, then $q/p > 1$; hence in the denominator of (1), $(q/p)^N \to \infty$, yielding the result. Finally, if $p = 0.5$, then $P_i(N) = i/N \to 0$.

Examples

1. John starts with $\$2$, and $p = 0.6$: what is the probability that John obtains a fortune of $N = 4$ without going broke?

Here $i = 2$, $N = 4$ and $q = 1 - p = 0.4$, so $q/p = 2/3$, and we want

\[
P_2(4) = \frac{1 - (2/3)^2}{1 - (2/3)^4} = \frac{9}{13} \approx 0.69.
\]

2. What is the probability that John will become infinitely rich?

\[
P_2(\infty) = 1 - (2/3)^2 = 5/9 \approx 0.56.
\]

3. If John instead started with $i = \$1$, what is the probability that he would go broke?

The probability that he becomes infinitely rich is $P_1(\infty) = 1 - (q/p) = 1/3$, so the probability of ruin is $1 - P_1(\infty) = 2/3$.
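
For concreteness, the three examples above, and the convergence $P_i(N) \to P_i(\infty)$ asserted by Proposition 1.2, can be checked with a short script; the function names are ours and the snippet assumes nothing beyond equations (1), (4), (5) and (6).

```python
def win_prob(i, N, p):
    """P_i(N) from (1)/(4)."""
    q = 1.0 - p
    return i / N if p == q else (1 - (q/p)**i) / (1 - (q/p)**N)

def win_prob_inf(i, p):
    """P_i(infinity) from Proposition 1.2: (5) when p > 0.5, (6) otherwise."""
    return 1.0 - ((1.0 - p) / p)**i if p > 0.5 else 0.0

p = 0.6
print(win_prob(2, 4, p))          # Example 1: 9/13 ≈ 0.69
print(win_prob_inf(2, p))         # Example 2: 5/9 ≈ 0.56
print(1 - win_prob_inf(1, p))     # Example 3: ruin probability 2/3
# P_i(N) approaches P_i(infinity) as N grows, as Proposition 1.2 states:
print([round(win_prob(2, N, p), 4) for N in (4, 10, 50, 200)])
```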

1.2 Applications

Risk insurance business

Consider an insurance company that earns $\$1$ per day (from interest), but on each day, independent of the past, might suffer a claim against it for the amount $\$2$ with probability $q = 1-p$. Whenever such a claim is suffered, $\$2$ is removed from the reserve of money. Thus on the $n$-th day, the net income for that day is exactly $\Delta_n$ as in the gambler's ruin problem: $1$ with probability $p$, $-1$ with probability $q$.

If the insurance company starts off initially with a reserve of $\$i \ge 1$, then what is the probability that it will eventually get ruined (run out of money)? The answer is given by (5) and (6): if $p > 0.5$ the probability is $(q/p)^i > 0$, whereas if $p \le 0.5$ ruin will always occur. This makes intuitive sense because if $p > 0.5$, then the average net income per day is $E(\Delta) = p - q > 0$, whereas if $p \le 0.5$, then the average net income per day is $E(\Delta) = p - q \le 0$. So the company cannot expect to stay in business unless it earns (on average) more than is taken away by claims.

1.3 Random walk hitting probabilities

Let $a > 0$ and $b > 0$ be integers, and let

\[
R_n = \Delta_1 + \cdots + \Delta_n, \quad n \ge 1, \quad R_0 = 0,
\]

denote a simple random walk initially at the origin. Let

\[
p(a) = P(\{R_n\} \text{ hits level } a \text{ before hitting level } -b).
\]

By letting $i = b$ and $N = a + b$, we can equivalently imagine a gambler who starts with $i = b$ and wishes to reach $N = a + b$ before going broke. So we can compute $p(a)$ by casting the problem into the framework of the gambler's ruin problem: $p(a) = P_i(N)$ where $N = a + b$, $i = b$. Thus

\[
p(a) =
\begin{cases}
\dfrac{1 - (q/p)^b}{1 - (q/p)^{a+b}}, & \text{if } p \ne q; \\[2mm]
\dfrac{b}{a+b}, & \text{if } p = q = 0.5.
\end{cases}
\tag{7}
\]

Examples

1. Ellen bought a share of stock for $\$10$, and it is believed that the stock price moves (day by day) as a simple random walk with $p = 0.55$. What is the probability that Ellen's stock reaches the high value of $\$15$ before the low value of $\$5$?

We want the probability that the stock goes up by $5$ before going down by $5$. This is equivalent to starting the random walk at $0$ with $a = 5$ and $b = 5$, and computing $p(a)$:

\[
p(a) = \frac{1 - (q/p)^b}{1 - (q/p)^{a+b}} = \frac{1 - (0.82)^5}{1 - (0.82)^{10}} \approx 0.73.
\]

2. What is the probability that Ellen will become infinitely rich?

Here we equivalently want to know the probability that a gambler starting with $i = 10$ becomes infinitely rich before going broke. Just like Example 2 in Section 1.1:

\[
1 - (q/p)^{i} = 1 - (0.45/0.55)^{10} \approx 0.87.
\]

1.4 Maximums and minimums of the simple random walk

Formula (7) can immediately be used for computing the probability that the simple random walk $\{R_n\}$, starting initially at $R_0 = 0$, will ever hit level $a$, for any given positive integer $a \ge 1$: keep $a$ fixed while taking the limit as $b \to \infty$ in (7). The result depends on whether $p < 0.50$ or $p \ge 0.50$. A little thought reveals that we can state this problem as computing the tail $P(M \ge a)$, $a \ge 0$, where

\[
M \stackrel{\mathrm{def}}{=} \max\{R_n : n \ge 0\}
\]

is the all-time maximum of the random walk; it is a non-negative random variable, because

\[
\{M \ge a\} = \{R_n = a, \text{ for some } n \ge 1\}.
\]

Proposition 1.3 Let $M \stackrel{\mathrm{def}}{=} \max\{R_n : n \ge 0\}$ for the simple random walk starting initially at the origin ($R_0 = 0$).

1. When $p < 0.50$, $P(M \ge a) = (p/q)^a$, $a \ge 0$; $M$ has a geometric distribution with success probability $1 - (p/q)$:

\[
P(M = k) = (p/q)^k \bigl(1 - (p/q)\bigr), \quad k \ge 0.
\]

In this case, the random walk drifts down to $-\infty$, w.p.1, but before doing so reaches the finite maximum $M$.

2. If $p \ge 0.50$, then $P(M \ge a) = 1$, $a \ge 0$: $P(M = \infty) = 1$; the random walk will, with probability 1, reach any positive integer $a$ no matter how large.

Proof: Taking the limit in (7) as $b \to \infty$ yields the result by considering the two cases $p < 0.5$ and $p \ge 0.5$:

If $p < 0.5$, then $q/p > 1$ and so both $(q/p)^b$ and $(q/p)^{a+b}$ tend to $\infty$ as $b \to \infty$. But before taking the limit, multiply both numerator and denominator by $(q/p)^{-b} = (p/q)^b$, yielding

\[
p(a) = \frac{(p/q)^b - 1}{(p/q)^b - (q/p)^a}.
\]

Since $(p/q)^b \to 0$ as $b \to \infty$, the result follows.

If $p > 0.5$, then $q/p < 1$ and so both $(q/p)^b$ and $(q/p)^{a+b}$ tend to $0$ as $b \to \infty$, yielding the limit in (7) as $1$. If $p = 0.5$, then $p(a) = b/(b + a) \to 1$ as $b \to \infty$.

If $p < 0.5$, then $E(\Delta) < 0$, and if $p > 0.5$, then $E(\Delta) > 0$; so Proposition 1.3 is consistent with the fact that any random walk with $E(\Delta) < 0$ (the negative drift case) satisfies $\lim_{n\to\infty} R_n = -\infty$, w.p.1, and any random walk with $E(\Delta) > 0$ (the positive drift case) satisfies $\lim_{n\to\infty} R_n = +\infty$, w.p.1. (From the strong law of large numbers, $\lim_{n\to\infty} R_n/n = E(\Delta)$, w.p.1, so $R_n \approx nE(\Delta)$ for large $n$; hence $R_n \to -\infty$ if $E(\Delta) < 0$ and $R_n \to +\infty$ if $E(\Delta) > 0$.) But furthermore we learn that when $p < 0.5$, although w.p.1 the chain drifts off to $-\infty$, it first reaches a finite maximum $M$ before doing so, and this r.v. $M$ has a geometric distribution.
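
Equation (7) and the limiting argument behind Proposition 1.3 are also easy to check numerically. The sketch below (illustrative only; hit_prob is our own name) reproduces Ellen's value of about 0.73, the insurance ruin probability $(q/p)^i$, and shows $p(a)$ approaching $(p/q)^a$ as $b$ grows when $p < 0.5$, and approaching $1$ when $p \ge 0.5$.

```python
def hit_prob(a, b, p):
    """Equation (7): probability the walk reaches level +a before level -b."""
    q = 1.0 - p
    if p == q:
        return b / (a + b)
    r = q / p
    return (1 - r**b) / (1 - r**(a + b))

# Ellen's example: up 5 before down 5, with p = 0.55
print(hit_prob(5, 5, 0.55))      # about 0.73

# Insurance company with reserve i = 10 and p = 0.55: ruin probability (q/p)^i
print((0.45 / 0.55) ** 10)       # about 0.13, so it never goes broke with probability ~0.87

# Proposition 1.3 via the limit b -> infinity in (7):
a = 3
for p in (0.4, 0.5, 0.6):
    tail = (p / (1 - p)) ** a if p < 0.5 else 1.0          # claimed P(M >= a)
    print(p, [round(hit_prob(a, b, p), 4) for b in (5, 20, 80)], "->", round(tail, 4))
```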

Finally, Proposition 1.3 also offers us a proof that when $p = 0.5$ (the symmetric case), the random walk will w.p.1 hit any positive value: $P(M \ge a) = 1$ for all $a \ge 0$. By symmetry, we also obtain analogous results for the minimum, $m$:

Corollary 1.1 Let $m \stackrel{\mathrm{def}}{=} \min\{R_n : n \ge 0\}$ for the simple random walk starting initially at the origin ($R_0 = 0$).

1. If $p > 0.5$, then $P(m \le -b) = (q/p)^b$, $b \ge 0$. In this case, the random walk drifts up to $+\infty$, but before doing so drops down to a finite minimum $m \le 0$. Taking the absolute value of $m$ makes it non-negative, so we can express this result as $P(|m| \ge b) = (q/p)^b$, $b \ge 0$; $|m|$ has a geometric distribution with success probability $1 - (q/p)$:

\[
P(|m| = k) = (q/p)^k \bigl(1 - (q/p)\bigr), \quad k \ge 0.
\]

2. If $p \le 0.50$, then $P(m \le -b) = 1$, $b \ge 0$: $P(m = -\infty) = 1$; the random walk will, with probability 1, reach any negative integer, no matter how small.

Note that when $p < 0.5$, $P(M = 0) = 1 - (p/q) > 0$. This is because it is possible that the random walk will never enter the positive axis before drifting off to $-\infty$; with positive probability, $R_n \le 0$ for all $n \ge 0$. Similarly, if $p > 0.5$, then $P(m = 0) = 1 - (q/p) > 0$; with positive probability, $R_n \ge 0$ for all $n \ge 0$.

1.5 Recurrence of the simple symmetric random walk

Combining the results for both $M$ and $m$ in the previous section when $p = 0.5$, we have

Proposition 1.4 The simple symmetric ($p = 0.50$) random walk, starting at the origin, will w.p.1 eventually hit any integer $a$, positive or negative. In fact it will hit any given integer $a$ infinitely often, always returning yet again after leaving; it is a recurrent Markov chain.

Proof: The first statement follows directly from Proposition 1.3 and Corollary 1.1: $P(M = \infty) = 1$ and $P(m = -\infty) = 1$. For the second statement we argue as follows: using the first statement, we know that the simple symmetric random walk starting at $0$ will hit $1$ eventually. But when it does, we can use the same result to conclude that it will go back and hit $0$ eventually after that, because that is stochastically equivalent to starting at $R_0 = 0$ and waiting for the chain to hit $-1$, which also will happen eventually. But then yet again it must hit $1$, and so on (back and forth infinitely often), all by the same logic. We conclude that the chain will, over and over again, return to state $0$ w.p.1; it will do so infinitely often; $0$ is a recurrent state for the simple symmetric random walk. Thus (since the chain is irreducible) all states are recurrent.

Let $\tau = \min\{n \ge 1 : R_n = 0 \mid R_0 = 0\}$, the so-called return time to state $0$. We just argued that $\tau$ is a proper random variable, that is, $P(\tau < \infty) = 1$. This means that if the chain starts in state $0$, then, if we wait long enough, we will (w.p.1) see it return to state $0$. What we will prove later is that $E(\tau) = \infty$, meaning that on average our wait is infinite. This implies that the simple symmetric random walk forms a null recurrent Markov chain.
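
The recurrence/null-recurrence dichotomy can be illustrated by simulation, with the caveat that a quantity with infinite mean can only be sampled under an artificial cutoff. In the sketch below (return_time is our own helper; the cap is an arbitrary truncation, not part of the notes), nearly every symmetric walk returns to $0$, yet the sample mean of the observed return times is dominated by a few very large values and keeps growing as the cap is raised, in line with $E(\tau) = \infty$.

```python
import random

def return_time(cap):
    """Sample tau = min{n >= 1 : R_n = 0} for the symmetric walk (p = 0.5),
    giving up (returning None) if no return has occurred within `cap` steps."""
    x = 0
    for n in range(1, cap + 1):
        x += 1 if random.random() < 0.5 else -1
        if x == 0:
            return n
    return None

random.seed(1)
cap, trials = 100_000, 2_000
samples = [return_time(cap) for _ in range(trials)]
returned = [s for s in samples if s is not None]
print(f"{len(returned)} of {trials} walks returned to 0 within {cap} steps")
print("largest observed return time:", max(returned))
print("sample mean of observed return times:", sum(returned) / len(returned))
# Recurrence: nearly every walk comes back.  Null recurrence: the sample mean
# is driven by rare, very long excursions and grows without bound as `cap` increases.
```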
