Continuous-time Markov Chains


1 Continuous-time Markov Chains
Gonzalo Mateos
Dept. of ECE and Goergen Institute for Data Science
University of Rochester
October 31, 2016

2 Continuous-time Markov chains
Continuous-time Markov chains
Transition probability function
Determination of transition probability function
Limit probabilities and ergodicity

3 Definition
Continuous-time positive variable t ∈ [0, ∞)
Time-dependent random state X(t) takes values on a countable set
In general denote states as i = 0, 1, 2, ..., i.e., here the state space is N
If X(t) = i we say the process is in state i at time t
Def: Process X(t) is a continuous-time Markov chain (CTMC) if
  P(X(t+s) = j | X(s) = i, X(u) = x(u), u < s) = P(X(t+s) = j | X(s) = i)
Markov property ⇒ given the present state X(s), the future X(t+s) is independent of the past X(u) = x(u), u < s
In principle need to specify the functions P(X(t+s) = j | X(s) = i)
  For all times t and s, for all pairs of states (i, j)

4 Notation and homogeneity
Notation
  X[s:t]  state values for all times s ≤ u ≤ t, includes borders
  X(s:t)  values for all times s < u < t, borders excluded
  X(s:t]  values for all times s < u ≤ t, exclude left, include right
  X[s:t)  values for all times s ≤ u < t, include left, exclude right
Homogeneous CTMC if P(X(t+s) = j | X(s) = i) is invariant for all s
  We restrict consideration to homogeneous CTMCs
Still need P_ij(t) := P(X(t+s) = j | X(s) = i) for all t and pairs (i, j)
  P_ij(t) is known as the transition probability function. More later
Markov property and homogeneity make the description somewhat simpler

5 Transition times
T_i = time until transition out of state i into any other state j
Def: T_i is a random variable called transition time, with ccdf
  P(T_i > t) = P(X(0:t] = i | X(0) = i)
Probability of T_i > t+s given that T_i > s? Use the ccdf expression
  P(T_i > t+s | T_i > s) = P(X(0:t+s] = i | X[0:s] = i)
                         = P(X(s:t+s] = i | X[0:s] = i)
                         = P(X(s:t+s] = i | X(s) = i)
                         = P(X(0:t] = i | X(0) = i)
  Used that X[0:s] = i is given, the Markov property, and homogeneity
From the definition of T_i ⇒ P(T_i > t+s | T_i > s) = P(T_i > t)
  ⇒ Transition times are memoryless, hence exponential random variables

6 Alternative definition
Exponential transition times are a fundamental property of CTMCs
  ⇒ Can be used as an algorithmic definition of CTMCs
Continuous-time random process X(t) is a CTMC if
(a) Transition times T_i are exponential random variables with mean 1/ν_i
(b) When they occur, transition from state i to j with probability P_ij
      Σ_{j=1}^∞ P_ij = 1,  P_ii = 0
(c) Transition times T_i and the transitioned state j are independent
Define matrix P grouping the transition probabilities P_ij
CTMC states evolve as in a discrete-time Markov chain
  State transitions occur at exponential intervals T_i ∼ exp(ν_i)
  As opposed to occurring at fixed intervals
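This algorithmic definition translates directly into a simulation recipe. Below is a minimal Python sketch (not from the slides); the function name simulate_ctmc, the finite state space, and the example rates are illustrative assumptions.

```python
import numpy as np

def simulate_ctmc(nu, P, x0, T, rng=None):
    """Simulate a finite-state CTMC from exit rates nu[i] and embedded
    transition matrix P (with P[i, i] = 0), up to time horizon T."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        t += rng.exponential(1.0 / nu[x])   # hold in x for T_x ~ exp(nu[x])
        if t > T:
            break
        x = rng.choice(len(nu), p=P[x])     # then jump to j with probability P[x, j]
        times.append(t)
        states.append(x)
    return times, states

# Toy example: two states with nu = (3, 1); each jump goes to the other state
times, states = simulate_ctmc(np.array([3.0, 1.0]),
                              np.array([[0.0, 1.0], [1.0, 0.0]]), x0=0, T=10.0)
```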

7 Embedded discrete-time Markov chain
Consider a CTMC with transition matrix P and rates ν_i
Def: The CTMC's embedded discrete-time MC has transition matrix P
Transition probabilities P describe a discrete-time MC
  No self-transitions (P_ii = 0, P's diagonal is null)
Can use the underlying discrete-time MC to study CTMCs
Def: State j is accessible from i if it is accessible in the embedded MC
Def: States i and j communicate if they do so in the embedded MC
  Communication is a class property
Recurrence, transience, ergodicity. Class properties... More later

8 Transition rates
Expected value of transition time T_i is E[T_i] = 1/ν_i
  Can interpret ν_i as the rate of transition out of state i
  Of these transitions, a fraction P_ij are into state j
Def: Transition rate from i to j is q_ij := ν_i P_ij
Transition rates offer yet another specification of CTMCs
If the q_ij are given, can recover ν_i as
  ν_i = ν_i Σ_{j=1}^∞ P_ij = Σ_{j=1}^∞ ν_i P_ij = Σ_{j=1}^∞ q_ij
Can also recover P_ij as
  P_ij = q_ij / ν_i = q_ij / Σ_{j=1}^∞ q_ij
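As a quick check of these identities, the snippet below recovers ν_i and P_ij from a small matrix of rates q_ij. The rate values are made up purely for illustration.

```python
import numpy as np

# Hypothetical off-diagonal rates q_ij (diagonal entries are not used)
Q = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])

nu = Q.sum(axis=1)        # nu_i = sum_j q_ij
P = Q / nu[:, None]       # P_ij = q_ij / nu_i, the embedded-chain probabilities
assert np.allclose(P.sum(axis=1), 1.0) and np.allclose(np.diag(P), 0.0)
```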

9 Birth and death process example
State X(t) = 0, 1, ... ⇒ interpret as number of individuals
Births and deaths occur at state-dependent rates. When X(t) = i
  Births ⇒ individuals added at exponential times with mean 1/λ_i
    Birth or arrival rate = λ_i births per unit of time
  Deaths ⇒ individuals removed at exponential times with mean 1/µ_i
    Death or departure rate = µ_i deaths per unit of time
Birth and death times are independent
Birth and death (BD) processes are then CTMCs

10 Transition times and probabilities
Q: Transition times T_i? Leave state i ≠ 0 when a birth or a death occurs
  If T_B and T_D are the times to the next birth and death, T_i = min(T_B, T_D)
  Since T_B and T_D are exponential, so is T_i, with rate ν_i = λ_i + µ_i
When leaving state i can go to i+1 (birth first) or i−1 (death first)
  Birth occurs before death with probability P_{i,i+1} = λ_i / (λ_i + µ_i)
  Death occurs before birth with probability P_{i,i−1} = µ_i / (λ_i + µ_i)
Leave state 0 only if a birth occurs, then ν_0 = λ_0, P_01 = 1
  If the CTMC leaves 0, it goes to 1 with probability 1
  Might not leave 0 if λ_0 = 0 (e.g., to model extinction)

11 Transition rates
Rate of transition from i to i+1 is (recall the definition q_ij = ν_i P_ij)
  q_{i,i+1} = ν_i P_{i,i+1} = (λ_i + µ_i) × λ_i / (λ_i + µ_i) = λ_i
Likewise, rate of transition from i to i−1 is
  q_{i,i−1} = ν_i P_{i,i−1} = (λ_i + µ_i) × µ_i / (λ_i + µ_i) = µ_i
For i = 0 ⇒ q_01 = ν_0 P_01 = λ_0
[State-transition diagram: states 0, 1, ..., i−1, i, i+1, ... with birth rates λ_0, ..., λ_{i−1}, λ_i, λ_{i+1} pointing right and death rates µ_1, ..., µ_i, µ_{i+1} pointing left]
Somewhat more natural representation. Similar to discrete-time MCs
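These rates are easy to assemble in code. The sketch below builds the rate matrix of a birth-death chain truncated to a finite number of states; the truncation and the example rate values are assumptions made only so the matrix is finite.

```python
import numpy as np

def bd_rates(lam, mu):
    """Rate matrix Q and exit rates nu for a birth-death chain on states 0..N,
    given birth rates lam[i] = lambda_i and death rates mu[i] = mu_i (mu[0] unused)."""
    N = len(lam) - 1
    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            Q[i, i + 1] = lam[i]   # q_{i,i+1} = lambda_i
        if i > 0:
            Q[i, i - 1] = mu[i]    # q_{i,i-1} = mu_i
    return Q.sum(axis=1), Q        # nu_i = lambda_i + mu_i (and nu_0 = lambda_0)

nu, Q = bd_rates(lam=[1.0, 1.0, 1.0, 0.0], mu=[0.0, 2.0, 2.0, 2.0])
```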

12 Poisson process example
A Poisson process is a BD process with constant λ_i = λ and µ_i = 0
State N(t) counts the total number of events (arrivals) by time t
Arrivals occur at a rate of λ per unit time
Transition times are the i.i.d. exponential interarrival times
[State-transition diagram: rate λ from each state i to i+1, no downward transitions]
The Poisson process is a CTMC

13 M/M/1 queue example
An M/M/1 queue is a BD process with constant λ_i = λ and µ_i = µ
State Q(t) is the number of customers in the system at time t
Customers arrive for service at a rate of λ per unit time
They are serviced at a rate of µ customers per unit time
[State-transition diagram: arrival rate λ from i to i+1, service rate µ from i to i−1]
The M/M is for Markov arrivals/Markov departures
  Implies a Poisson arrival process and exponential service times
The 1 is because there is only one server

14 Transition probability function
Continuous-time Markov chains
Transition probability function
Determination of transition probability function
Limit probabilities and ergodicity

15 Transition probability function
Two equivalent ways of specifying a CTMC
1) Transition time averages 1/ν_i + transition probabilities P_ij
  Easier description ⇒ typical starting point for CTMC modeling
2) Transition probability function P_ij(t) := P(X(t+s) = j | X(s) = i)
  More complete description, for all t ≥ 0
  Similar in spirit to P_ij^n for discrete-time Markov chains
Goal: compute P_ij(t) from transition times and probabilities
Notice two obvious properties ⇒ P_ij(0) = 0 for i ≠ j, P_ii(0) = 1

16 Roadmap to determine P_ij(t)
Goal is to obtain a differential equation whose solution is P_ij(t)
  Study the change in P_ij(t) when time changes slightly
Separate into two subproblems (divide and conquer)
  Transition probabilities for a small time h, P_ij(h)
  Transition probabilities at t+h as a function of those at t and h
We can combine both results in two different ways
1) Jump from 0 to t, then to t+h
  Process runs a little longer ⇒ changes where the process is going to
  ⇒ Forward equations
2) Jump from 0 to h, then to t+h
  Process starts a little later ⇒ changes where the process comes from
  ⇒ Backward equations

17 Transition probability in infinitesimal time
Theorem
The transition probability functions P_ii(t) and P_ij(t) satisfy the following limits as t approaches 0
  lim_{t→0} P_ij(t)/t = q_ij,   lim_{t→0} [1 − P_ii(t)]/t = ν_i
Since P_ij(0) = 0 and P_ii(0) = 1, the above limits are derivatives at t = 0
  ∂P_ij(t)/∂t |_{t=0} = q_ij,   ∂P_ii(t)/∂t |_{t=0} = −ν_i
Limits also imply that for small h (recall Taylor series)
  P_ij(h) = q_ij h + o(h),   P_ii(h) = 1 − ν_i h + o(h)
Transition rates q_ij are instantaneous transition probabilities
  Transition probability coefficient for small time h

18 Probability of event in infinitesimal time (reminder)
Q: Probability of an event happening in infinitesimal time h?
Want P(T < h) for small h, with T ∼ exp(λ). Equivalent to
  P(T < h) = ∫_0^h λ e^{−λt} dt ≈ λh,  since ∂P(T < t)/∂t |_{t=0} = λ
Sometimes also write P(T < h) = λh + o(h)
  o(h) implies lim_{h→0} o(h)/h = 0
  Read as negligible with respect to h
Q: Two independent events in infinitesimal time h?
  P(T_1 ≤ h, T_2 ≤ h) ≈ (λ_1 h)(λ_2 h) = λ_1 λ_2 h² = o(h)

19 Transition probability in infinitesimal time (proof)
Proof.
Consider a small time h, and recall T_i ∼ exp(ν_i)
Since 1 − P_ii(h) is the probability of transitioning out of state i
  1 − P_ii(h) = P(T_i < h) = ν_i h + o(h)
  Divide by h and take the limit to establish the second identity
For P_ij(h) notice that since two or more transitions have o(h) prob.
  P_ij(h) = P(X(h) = j | X(0) = i) = P_ij P(T_i < h) + o(h)
Again, since T_i is exponential, P(T_i < h) = ν_i h + o(h). Then
  P_ij(h) = ν_i P_ij h + o(h) = q_ij h + o(h)
  Divide by h and take the limit to establish the first identity

20 Chapman-Kolmogorov equations
Theorem
For all times s and t, the transition probability functions P_ij(t+s) are obtained from P_ik(t) and P_kj(s) as
  P_ij(t+s) = Σ_{k=0}^∞ P_ik(t) P_kj(s)
As for discrete-time MCs, to go from i to j in time t+s
  Go from i to some state k in time t ⇒ P_ik(t)
  In the remaining time s go from k to j ⇒ P_kj(s)
  Sum over all possible intermediate states k

21 Chapman-Kolmogorov equations (proof)
Proof.
P_ij(t+s) = P(X(t+s) = j | X(0) = i)                                      ⇐ definition of P_ij(t+s)
  = Σ_{k=0}^∞ P(X(t+s) = j | X(t) = k, X(0) = i) P(X(t) = k | X(0) = i)   ⇐ law of total probability
  = Σ_{k=0}^∞ P(X(t+s) = j | X(t) = k) P_ik(t)                            ⇐ Markov property of CTMC and definition of P_ik(t)
  = Σ_{k=0}^∞ P_kj(s) P_ik(t)                                             ⇐ definition of P_kj(s)

22 Combining both results
Let us combine the last two results to express P_ij(t+h)
Use Chapman-Kolmogorov's equations for 0 → t → t+h
  P_ij(t+h) = Σ_{k=0}^∞ P_ik(t) P_kj(h) = P_ij(t) P_jj(h) + Σ_{k=0, k≠j}^∞ P_ik(t) P_kj(h)
Substitute the infinitesimal time expressions for P_jj(h) and P_kj(h)
  P_ij(t+h) = P_ij(t)(1 − ν_j h) + Σ_{k=0, k≠j}^∞ P_ik(t) q_kj h + o(h)
Subtract P_ij(t) from both sides and divide by h
  [P_ij(t+h) − P_ij(t)]/h = −ν_j P_ij(t) + Σ_{k=0, k≠j}^∞ P_ik(t) q_kj + o(h)/h
The left-hand side is a derivative (difference) ratio. Let h → 0 to prove...

23 Kolmogorov's forward equations
Theorem
The transition probability functions P_ij(t) of a CTMC satisfy the system of differential equations (for all pairs i, j)
  ∂P_ij(t)/∂t = Σ_{k=0, k≠j}^∞ q_kj P_ik(t) − ν_j P_ij(t)
Interpret each summand in Kolmogorov's forward equations
  ∂P_ij(t)/∂t = rate of change of P_ij(t)
  q_kj P_ik(t) = (transition into k in [0, t]) × (rate of moving into j in the next instant)
  ν_j P_ij(t) = (transition into j in [0, t]) × (rate of leaving j in the next instant)
Change in P_ij(t) = Σ_k (moving into j from k) − (leaving j)
Kolmogorov's forward equations are valid in most cases, but not always
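The forward equations can also be integrated numerically. Here is a minimal sketch using a plain Euler step; the function name, step size, and the two-state example rates are illustrative choices, not part of the slides.

```python
import numpy as np

def forward_euler(Q, t_end, h=1e-3):
    """Integrate Kolmogorov's forward equations
    dP_ij/dt = sum_{k != j} q_kj P_ik - nu_j P_ij with Euler steps.
    Q holds the off-diagonal rates q_ij; its diagonal is zeroed out."""
    Q = np.asarray(Q, dtype=float).copy()
    np.fill_diagonal(Q, 0.0)
    nu = Q.sum(axis=1)
    P = np.eye(Q.shape[0])                 # P(0) = I
    for _ in range(int(t_end / h)):
        P = P + h * (P @ Q - P * nu)       # (P @ Q)_ij = sum_k P_ik q_kj; minus nu_j P_ij
    return P

print(forward_euler([[0.0, 3.0], [1.0, 0.0]], t_end=1.0))   # two-state chain, q01 = 3, q10 = 1
```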

24 Kolmogorov's backward equations
For the forward equations we used Chapman-Kolmogorov's equations for 0 → t → t+h
For the backward equations we use 0 → h → t+h to express P_ij(t+h) as
  P_ij(t+h) = Σ_{k=0}^∞ P_ik(h) P_kj(t) = P_ii(h) P_ij(t) + Σ_{k=0, k≠i}^∞ P_ik(h) P_kj(t)
Substitute the infinitesimal time expressions for P_ii(h) and P_ik(h)
  P_ij(t+h) = (1 − ν_i h) P_ij(t) + Σ_{k=0, k≠i}^∞ q_ik h P_kj(t) + o(h)
Subtract P_ij(t) from both sides and divide by h
  [P_ij(t+h) − P_ij(t)]/h = −ν_i P_ij(t) + Σ_{k=0, k≠i}^∞ q_ik P_kj(t) + o(h)/h
The left-hand side is a derivative (difference) ratio. Let h → 0 to prove...

25 Kolmogorov's backward equations
Theorem
The transition probability functions P_ij(t) of a CTMC satisfy the system of differential equations (for all pairs i, j)
  ∂P_ij(t)/∂t = Σ_{k=0, k≠i}^∞ q_ik P_kj(t) − ν_i P_ij(t)
Interpret each summand in Kolmogorov's backward equations
  ∂P_ij(t)/∂t = rate of change of P_ij(t)
  q_ik P_kj(t) = (transition into j in [h, t]) × (rate of transition into k in the initial instant)
  ν_i P_ij(t) = (transition into j in [h, t]) × (rate of leaving i in the initial instant)
Forward equations ⇒ change in P_ij(t) if we finish h later
Backward equations ⇒ change in P_ij(t) if we start h earlier
Where the process goes (forward) vs. where the process comes from (backward)

26 Determination of transition probability function
Continuous-time Markov chains
Transition probability function
Determination of transition probability function
Limit probabilities and ergodicity

27 A CTMC with two states
Ex: The simplest possible CTMC has only two states. Say 0 and 1
  Transition rates are q_01 and q_10
[State-transition diagram: 0 ⇄ 1 with rates q_01 and q_10]
Given q_01 and q_10 can find the rates of transition out of {0, 1}
  ν_0 = Σ_j q_0j = q_01,   ν_1 = Σ_j q_1j = q_10
Use Kolmogorov's equations to find the transition probability functions
  P_00(t), P_01(t), P_10(t), P_11(t)
Transition probabilities out of each state sum up to one
  P_00(t) + P_01(t) = 1,   P_10(t) + P_11(t) = 1

28 Kolmogorov's forward equations
Kolmogorov's forward equations (process runs a little longer)
  P′_ij(t) = Σ_{k=0, k≠j}^∞ q_kj P_ik(t) − ν_j P_ij(t)
For the two-state CTMC
  P′_00(t) = q_10 P_01(t) − ν_0 P_00(t),   P′_01(t) = q_01 P_00(t) − ν_1 P_01(t)
  P′_10(t) = q_10 P_11(t) − ν_0 P_10(t),   P′_11(t) = q_01 P_10(t) − ν_1 P_11(t)
Probabilities out of 0 sum up to 1 ⇒ the eqs. in the first row are equivalent
Probabilities out of 1 sum up to 1 ⇒ the eqs. in the second row are equivalent
  ⇒ Pick the equations for P_00(t) and P_11(t)

29 Solution of forward equations
Use
  Relation between transition rates: ν_0 = q_01 and ν_1 = q_10
  Probs. sum to 1: P_01(t) = 1 − P_00(t) and P_10(t) = 1 − P_11(t)
  P′_00(t) = q_10 [1 − P_00(t)] − q_01 P_00(t) = q_10 − (q_10 + q_01) P_00(t)
  P′_11(t) = q_01 [1 − P_11(t)] − q_10 P_11(t) = q_01 − (q_10 + q_01) P_11(t)
Can obtain the exact same pair of equations from the backward equations
First-order linear differential equations ⇒ solutions are exponential
For P_00(t) propose a candidate solution (just differentiate to check)
  P_00(t) = q_10/(q_10 + q_01) + c e^{−(q_10 + q_01) t}
To determine c use the initial condition P_00(0) = 1

30 Solution of forward equations (continued)
Evaluation of the candidate solution at the initial condition P_00(0) = 1 yields
  1 = q_10/(q_10 + q_01) + c   ⇒   c = q_01/(q_10 + q_01)
Finally, the transition probability function P_00(t) is
  P_00(t) = q_10/(q_10 + q_01) + [q_01/(q_10 + q_01)] e^{−(q_10 + q_01) t}
Repeat for P_11(t). Same exponent, different constants
  P_11(t) = q_01/(q_10 + q_01) + [q_10/(q_10 + q_01)] e^{−(q_10 + q_01) t}
As time goes to infinity, P_00(t) and P_11(t) converge exponentially
  Convergence rate depends on the magnitude of q_10 + q_01
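A small sketch evaluating these closed-form expressions (the helper name and the example rates are mine, not the slides'):

```python
import numpy as np

def two_state_P(t, q01, q10):
    """Closed-form transition probability matrix of the two-state CTMC."""
    s = q01 + q10
    e = np.exp(-s * t)
    P00 = q10 / s + (q01 / s) * e
    P11 = q01 / s + (q10 / s) * e
    return np.array([[P00, 1.0 - P00],
                     [1.0 - P11, P11]])

print(two_state_P(1.0, q01=3.0, q10=1.0))   # rows sum to 1; at t = 0 this is the identity
```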

31 Convergence of transition probabilities
Recall P_01(t) = 1 − P_00(t) and P_10(t) = 1 − P_11(t)
Limiting (steady-state) probabilities are
  lim_{t→∞} P_00(t) = q_10/(q_10 + q_01),   lim_{t→∞} P_01(t) = q_01/(q_10 + q_01)
  lim_{t→∞} P_11(t) = q_01/(q_10 + q_01),   lim_{t→∞} P_10(t) = q_10/(q_10 + q_01)
Limit distribution exists and is independent of the initial condition
  ⇒ Compare across diagonals

32 Kolmogorov's forward equations in matrix form
Restrict attention to finite CTMCs with N states
Define the matrix R ∈ R^{N×N} with elements r_ij = q_ij for i ≠ j, and r_ii = −ν_i
Rewrite Kolmogorov's forward eqs. as (process runs a little longer)
  P′_ij(t) = Σ_{k=1, k≠j}^N q_kj P_ik(t) − ν_j P_ij(t) = Σ_{k=1}^N r_kj P_ik(t)
The right-hand side defines the elements of the matrix product P(t)R, hence
  P′(t) = P(t) R

33 Kolmogorov's backward equations in matrix form
Similarly, Kolmogorov's backward eqs. (process starts a little later)
  P′_ij(t) = Σ_{k=1, k≠i}^N q_ik P_kj(t) − ν_i P_ij(t) = Σ_{k=1}^N r_ik P_kj(t)
The right-hand side also defines a matrix product, now R P(t), so
  P′(t) = R P(t)

34 Kolmogorov's equations in matrix form
Matrix form of Kolmogorov's forward equation ⇒ P′(t) = P(t) R
Matrix form of Kolmogorov's backward equation ⇒ P′(t) = R P(t)
More similar than apparent
  But not equivalent, because the matrix product is not commutative
  Notwithstanding, both equations must admit the same solution

35 Matrix exponential
Kolmogorov's equations are first-order linear differential equations
  They are coupled: P′_ij(t) depends on P_kj(t) for all k
  ⇒ Accept an exponential solution ⇒ define the matrix exponential
Def: The matrix exponential e^{At} of the matrix At is the series
  e^{At} = Σ_{n=0}^∞ (At)^n / n! = I + At + (At)²/2 + (At)³/3! + ...
Derivative of the matrix exponential with respect to t
  ∂e^{At}/∂t = 0 + A + A²t + A³t²/2 + ... = A (I + At + (At)²/2 + ...) = A e^{At}
Putting A on the right side of the product shows that ∂e^{At}/∂t = e^{At} A
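For intuition, the series definition can be evaluated directly by truncation. This is a rough sketch for small matrices only (the truncation length is an arbitrary choice); in practice one would rely on a library routine such as scipy.linalg.expm.

```python
import numpy as np

def expm_series(A, t, n_terms=50):
    """Approximate exp(A t) by truncating its defining power series."""
    term = np.eye(A.shape[0])        # (At)^0 / 0!
    total = term.copy()
    for k in range(1, n_terms):
        term = term @ (A * t) / k    # builds (At)^k / k! recursively
        total = total + term
    return total

A = np.array([[-3.0, 3.0], [1.0, -1.0]])
print(expm_series(A, 1.0))
# from scipy.linalg import expm; print(expm(A * 1.0))   # library check, if SciPy is available
```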

36 Solution of Kolmogorov's equations
Propose a solution of the form P(t) = e^{Rt}
P(t) solves the backward equations, since the derivative is
  ∂P(t)/∂t = ∂e^{Rt}/∂t = R e^{Rt} = R P(t)
It also solves the forward equations
  ∂P(t)/∂t = ∂e^{Rt}/∂t = e^{Rt} R = P(t) R
Notice that P(0) = I, as it should (P_ii(0) = 1, and P_ij(0) = 0 for i ≠ j)

37 Computing the matrix exponential
Suppose A ∈ R^{n×n} is diagonalizable, i.e., A = U D U^{−1}
  Diagonal matrix D = diag(λ_1, ..., λ_n) collects the eigenvalues λ_i
  Matrix U has the corresponding eigenvectors as columns
We have the following neat identity
  e^{At} = Σ_{n=0}^∞ (U D U^{−1} t)^n / n! = U (Σ_{n=0}^∞ (Dt)^n / n!) U^{−1} = U e^{Dt} U^{−1}
But since D is diagonal, then
  e^{Dt} = Σ_{n=0}^∞ (Dt)^n / n! = diag(e^{λ_1 t}, ..., e^{λ_n t})
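The same identity gives a compact numerical recipe whenever A is diagonalizable; the helper below is a sketch under that assumption.

```python
import numpy as np

def expm_eig(A, t):
    """exp(A t) via the eigendecomposition A = U D U^{-1} (A assumed diagonalizable)."""
    evals, U = np.linalg.eig(A)                            # columns of U are eigenvectors
    return (U * np.exp(evals * t)) @ np.linalg.inv(U)      # U diag(e^{lambda_i t}) U^{-1}

A = np.array([[-3.0, 3.0], [1.0, -1.0]])
print(np.real(expm_eig(A, 1.0)))    # real part only, in case of tiny imaginary round-off
```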

38 Two state CTMC example
Ex: Simplest CTMC with two states 0 and 1 ⇒ transition rates are q_01 = 3 and q_10 = 1
[State-transition diagram: 0 ⇄ 1 with rates q_01 = 3 and q_10 = 1]
Recall the transition time rates are ν_0 = q_01 = 3, ν_1 = q_10 = 1, hence
  R = [ −ν_0   q_01 ]  =  [ −3   3 ]
      [  q_10  −ν_1 ]     [  1  −1 ]
Eigenvalues of R are 0 and −4, with eigenvectors [1, 1]^T and [−3, 1]^T. Thus
  U = [ 1  −3 ],   U^{−1} = [  1/4  3/4 ],   e^{Dt} = [ 1     0     ]
      [ 1   1 ]             [ −1/4  1/4 ]             [ 0  e^{−4t}  ]
The solution to the forward equations is
  P(t) = e^{Rt} = U e^{Dt} U^{−1} = [ 1/4 + (3/4)e^{−4t}   3/4 − (3/4)e^{−4t} ]
                                    [ 1/4 − (1/4)e^{−4t}   3/4 + (1/4)e^{−4t} ]
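A quick numerical check of this closed form (assuming SciPy is available; the time point is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

R = np.array([[-3.0, 3.0], [1.0, -1.0]])
t = 0.7
P_closed = np.array([[0.25 + 0.75 * np.exp(-4 * t), 0.75 - 0.75 * np.exp(-4 * t)],
                     [0.25 - 0.25 * np.exp(-4 * t), 0.75 + 0.25 * np.exp(-4 * t)]])
assert np.allclose(expm(R * t), P_closed)    # matches the diagonalization result above
```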

39 Unconditional probabilities
P(t) is the transition prob. from states at time 0 to states at time t
Define the unconditional probs. at time t, p_j(t) := P(X(t) = j)
  Group them in the vector p(t) = [p_1(t), p_2(t), ..., p_j(t), ...]^T
Given the initial distribution p(0), find p_j(t) conditioning on the initial state
  p_j(t) = Σ_{i=0}^∞ P(X(t) = j | X(0) = i) P(X(0) = i) = Σ_{i=0}^∞ P_ij(t) p_i(0)
Using compact matrix-vector notation ⇒ p(t) = P^T(t) p(0)
Compare with discrete-time MCs ⇒ p(n) = (P^n)^T p(0)
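In code this is a single matrix-vector product; the sketch below starts the two-state example in state 0 (the rates and time are illustrative, and SciPy is assumed available).

```python
import numpy as np
from scipy.linalg import expm

R = np.array([[-3.0, 3.0], [1.0, -1.0]])
p0 = np.array([1.0, 0.0])          # start in state 0 with probability 1
p_t = expm(R * 2.0).T @ p0         # p(t) = P(t)^T p(0) with P(t) = e^{Rt}
print(p_t, p_t.sum())              # a probability vector, sums to 1
```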

40 Limit probabilities and ergodicity
Continuous-time Markov chains
Transition probability function
Determination of transition probability function
Limit probabilities and ergodicity

41 Recurrent and transient states
Recall the embedded discrete-time MC associated with any CTMC
  Transition probs. of the MC form the matrix P of the CTMC
  No self-transitions (P_ii = 0, P's diagonal is null)
States i ≠ j communicate in the CTMC if i ↔ j in the embedded MC
  Communication partitions the MC in classes ⇒ induces a CTMC partition as well
Def: A CTMC is irreducible if the embedded MC contains a single class
State i is recurrent if it is recurrent in the embedded MC
  Likewise, define transience and positive recurrence for CTMCs
Transience and recurrence are shared by the elements of a MC class
  ⇒ Transience and recurrence are class properties of CTMCs
Periodicity is not possible in CTMCs

42 Limiting probabilities
Theorem
Consider an irreducible, positive recurrent CTMC with transition rates ν_i and q_ij. Then lim_{t→∞} P_ij(t) exists and is independent of the initial state i, i.e.,
  P_j = lim_{t→∞} P_ij(t) exists for all (i, j)
Furthermore, the steady-state probabilities P_j ≥ 0 are the unique nonnegative solution of the system of linear equations
  ν_j P_j = Σ_{k=0, k≠j}^∞ q_kj P_k,   Σ_{j=0}^∞ P_j = 1
Limit distribution exists and is independent of the initial condition
  Obtained as the solution of a system of linear equations
  Like discrete-time MCs, but the equations are slightly different
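For a finite chain this system is straightforward to solve numerically. A minimal sketch (the function name and example rates are mine):

```python
import numpy as np

def limit_probs(Q):
    """Solve nu_j P_j = sum_{k != j} q_kj P_k together with sum_j P_j = 1,
    for a finite irreducible CTMC with off-diagonal rates in Q."""
    Q = np.asarray(Q, dtype=float).copy()
    np.fill_diagonal(Q, 0.0)
    nu = Q.sum(axis=1)
    A = Q.T - np.diag(nu)          # row j encodes sum_k q_kj P_k - nu_j P_j = 0
    A[-1, :] = 1.0                 # replace one redundant equation by sum_j P_j = 1
    b = np.zeros(Q.shape[0]); b[-1] = 1.0
    return np.linalg.solve(A, b)

print(limit_probs([[0.0, 3.0], [1.0, 0.0]]))   # two-state example: [0.25, 0.75]
```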

43 Algebraic relation to determine limit probabilities
As with MCs, the difficult part is to prove that P_j = lim_{t→∞} P_ij(t) exists
Algebraic relations obtained from Kolmogorov's forward equations
  ∂P_ij(t)/∂t = Σ_{k=0, k≠j}^∞ q_kj P_ik(t) − ν_j P_ij(t)
If the limit distribution exists we have, independent of the initial state i,
  lim_{t→∞} ∂P_ij(t)/∂t = 0,   lim_{t→∞} P_ij(t) = P_j
Considering the limit of Kolmogorov's forward equations yields
  0 = Σ_{k=0, k≠j}^∞ q_kj P_k − ν_j P_j
Reordering terms, the limit distribution equations follow

44 Two state CTMC example
Ex: Simplest CTMC with two states 0 and 1 ⇒ transition rates are q_01 and q_10
[State-transition diagram: 0 ⇄ 1 with rates q_01 and q_10]
From the transition rates find the rates of transition out of each state ⇒ ν_0 = q_01, ν_1 = q_10
Stationary distribution equations
  ν_0 P_0 = q_10 P_1  ⇒  q_01 P_0 = q_10 P_1
  ν_1 P_1 = q_01 P_0  ⇒  q_10 P_1 = q_01 P_0
  P_0 + P_1 = 1
Solution yields
  P_0 = q_10/(q_10 + q_01),   P_1 = q_01/(q_10 + q_01)
Larger rate q_10 of entering 0 ⇒ larger prob. P_0 of being at 0
Larger rate q_01 of entering 1 ⇒ larger prob. P_1 of being at 1

45 Ergodicity
Def: Fraction of time T_i(t) spent in state i by time t
  T_i(t) := (1/t) ∫_0^t I{X(τ) = i} dτ
  T_i(t) is a time/ergodic average, lim_{t→∞} T_i(t) is an ergodic limit
If the CTMC is irreducible and positive recurrent, the ergodic theorem holds
  P_i = lim_{t→∞} T_i(t) = lim_{t→∞} (1/t) ∫_0^t I{X(τ) = i} dτ   a.s.
Ergodic limit coincides with the limit probabilities (almost surely)
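The ergodic limit can be illustrated by simulating a long sample path and measuring time fractions. The sketch below does this for the two-state example; the rates, seed, and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
q01, q10 = 3.0, 1.0
nu = [q01, q10]                            # exit rates of states 0 and 1
t, x, T = 0.0, 0, 10_000.0
time_in = [0.0, 0.0]
while t < T:
    hold = rng.exponential(1.0 / nu[x])    # exponential holding time in state x
    time_in[x] += min(hold, T - t)         # clip the last sojourn at the horizon
    t += hold
    x = 1 - x                              # with two states, every jump flips the state
print(np.array(time_in) / T)               # close to the limit probabilities [0.25, 0.75]
```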

46 Function's ergodic limit
Consider a function f(i) associated with state i. Can write f(X(t)) as
  f(X(t)) = Σ_{i=1}^∞ f(i) I{X(t) = i}
Consider the time average of f(X(t))
  lim_{t→∞} (1/t) ∫_0^t f(X(τ)) dτ = lim_{t→∞} (1/t) ∫_0^t Σ_{i=1}^∞ f(i) I{X(τ) = i} dτ
Interchange summation with integral and limit to say
  lim_{t→∞} (1/t) ∫_0^t f(X(τ)) dτ = Σ_{i=1}^∞ f(i) lim_{t→∞} (1/t) ∫_0^t I{X(τ) = i} dτ = Σ_{i=1}^∞ f(i) P_i
Function's ergodic limit = function's expectation under the limiting distribution

47 Limit distribution equations as balance equations
Recall the limit distribution equations ν_j P_j = Σ_{k=0, k≠j}^∞ q_kj P_k
  P_j = fraction of time spent in state j
  ν_j = rate of transition out of state j, given the CTMC is in state j
  ν_j P_j = rate of transition out of state j (unconditional)
  q_kj = rate of transition from k to j, given the CTMC is in state k
  q_kj P_k = rate of transition from k to j (unconditional)
  Σ_{k=0, k≠j}^∞ q_kj P_k = rate of transition into j, from all states
Rate of transition out of state j = rate of transition into state j
  ⇒ Balance equations ⇒ balance the nr. of transitions in and out of state j

48 Limit distribution for birth and death process
Births/deaths occur at state-dependent rates. When X(t) = i
  Births ⇒ individuals added at exponential times with mean 1/λ_i
    Birth rate = upward transition rate = q_{i,i+1} = λ_i
  Deaths ⇒ individuals removed at exponential times with mean 1/µ_i
    Death rate = downward transition rate = q_{i,i−1} = µ_i
Transition time rates ν_i = λ_i + µ_i for i > 0, and ν_0 = λ_0
[State-transition diagram: birth rates λ_i to the right, death rates µ_i to the left]
Limit distribution/balance equations: rate out of j = rate into j
  (λ_i + µ_i) P_i = λ_{i−1} P_{i−1} + µ_{i+1} P_{i+1},   λ_0 P_0 = µ_1 P_1

49 Finding solution of balance equations
Start expressing all probabilities in terms of P_0
  Equation for P_0:              λ_0 P_0 = µ_1 P_1
  Sum eqs. for P_1 and P_0:      (λ_1 + µ_1) P_1 = λ_0 P_0 + µ_2 P_2   ⇒   λ_1 P_1 = µ_2 P_2
  Sum result and eq. for P_2:    (λ_2 + µ_2) P_2 = λ_1 P_1 + µ_3 P_3   ⇒   λ_2 P_2 = µ_3 P_3
  ...
  Sum result and eq. for P_i:    (λ_i + µ_i) P_i = λ_{i−1} P_{i−1} + µ_{i+1} P_{i+1}   ⇒   λ_i P_i = µ_{i+1} P_{i+1}

50 Finding solution of balance equations (continued)
Recursive substitutions on the red equations on the right
  P_1 = (λ_0/µ_1) P_0
  P_2 = (λ_1/µ_2) P_1 = (λ_1 λ_0)/(µ_2 µ_1) P_0
  ...
  P_{i+1} = (λ_i/µ_{i+1}) P_i = (λ_i λ_{i−1} ... λ_0)/(µ_{i+1} µ_i ... µ_1) P_0
To find P_0 use Σ_{i=0}^∞ P_i = 1
  1 = P_0 + Σ_{i=0}^∞ [(λ_i λ_{i−1} ... λ_0)/(µ_{i+1} µ_i ... µ_1)] P_0
  ⇒ P_0 = [1 + Σ_{i=0}^∞ (λ_i λ_{i−1} ... λ_0)/(µ_{i+1} µ_i ... µ_1)]^{−1}
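A sketch implementing the product formula for a finite (truncated) birth-death chain; the truncation and example rates are assumptions. For λ_i = λ and µ_i = µ this recovers, up to truncation error, the geometric M/M/1 distribution (1 − ρ)ρ^i with ρ = λ/µ.

```python
import numpy as np

def bd_stationary(lam, mu):
    """Stationary distribution of a truncated birth-death chain via
    P_i = (lam_0 ... lam_{i-1}) / (mu_1 ... mu_i) * P_0 and normalization."""
    ratios = [1.0]
    for i in range(1, len(lam)):
        ratios.append(ratios[-1] * lam[i - 1] / mu[i])
    P = np.array(ratios)
    return P / P.sum()                     # the normalization fixes P_0

# Truncated M/M/1 with lam = 1, mu = 2 (rho = 1/2): approximately (1 - rho) rho^i
print(bd_stationary(lam=[1.0] * 20, mu=[0.0] + [2.0] * 19))
```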

51 Glossary
Continuous-time Markov chain
Markov property
Time-homogeneous CTMC
Transition probability function
Exponential transition time
Transition probabilities
Embedded discrete-time MC
Transition rates
Birth and death process
Poisson process
M/M/1 queue
Chapman-Kolmogorov equations
Kolmogorov's forward equations
Kolmogorov's backward equations
Limiting probabilities
Matrix exponential
Unconditional probabilities
Recurrent and transient states
Ergodicity
Balance equations
