Homework set 4: Solutions


Math 495, Renato Feres

R for continuous-time Markov chains

The sequence of random variables of a Markov chain may represent the states of a random system recorded at a succession of time steps. For a full description of the process we often need to specify another sequence of random variables, T_0 < T_1 < ... < T_N, giving the random times at which the state transitions occur. A continuous-time Markov chain with state space D consists of a sequence X_0, X_1, ..., X_N taking values in D, together with a sequence T_0, T_1, ..., T_N in [0, ∞), where T_i is the random time when the chain jumps to the state X_i. A more precise definition will be given shortly.

It will be helpful to keep in mind the following diagram as we discuss some of the main ideas related to continuous-time Markov chains, at least in the case of discrete D. The nodes (circles) stand for the elements of D. In this discrete case, they may be labeled by integers: D = {1, 2, ..., k}. The black dot (a token) indicates which state is presently occupied, and the arrows represent state transitions. The significance of the labels λ_ij, called transition rates (they are not probabilities; they can take on any positive value, possibly greater than 1), will be explained below in the context of Theorem 0.1. They are numbers that encode information specifying both the transition probabilities and the waiting times between state transitions. The resulting process can be imagined as a kind of random walk of the token around the diagram. The walk starts at X_0 ∈ D at time T_0, jumping at time T_i to state X_i for each i = 1, 2, ...

Figure 1: Diagram representing a continuous-time Markov chain system.

A useful identity

The following general observation will be used in the proof of Theorem 0.1. Suppose that X and Y are random variables, where Y is of the continuous type, having probability density function f_Y(y). Then

    P(X ∈ A) = ∫ P(X ∈ A | Y = y) f_Y(y) dy.
This can be derived as follows. First observe that P(X ∈ A) = E(1_A(X)), where 1_A(x) is the indicator function of the set A. (Recall that, by definition, 1_A(x) equals 1 if x ∈ A and 0 if x ∉ A.) This is clear since

    E(1_A(X)) = 0 · P(1_A(X) = 0) + 1 · P(1_A(X) = 1) = P(1_A(X) = 1) = P(X ∈ A).

On the other hand, we know that E[X_2] = E[E[X_2 | X_1]] for any random variables X_1, X_2. Therefore,

    P(X ∈ A) = E(1_A(X)) = E[E[1_A(X) | Y]] = ∫ E[1_A(X) | Y = y] f_Y(y) dy = ∫ P(X ∈ A | Y = y) f_Y(y) dy.

The same argument shows, more generally, that given another random variable Z,

    P(X ∈ A | Z = z) = ∫ P(X ∈ A | Z = z, Y = y) f_Y(y) dy.    (1)

An example of the use of this identity will appear in the proof of Theorem 0.1, below.

Exponential random variables

Exponential random variables will be used as probabilistic models of waiting times between successive events in a random sequence. We recall here their main properties.

1. The exponential distribution

A random variable T of the continuous type is said to be exponential with parameter λ if its probability density function is f(t) = λe^(-λt) for all t ≥ 0, and 0 elsewhere.

Figure 2: Graphs of exponential densities for the parameter values λ = 0.5, 1.0, and 2.0.

If T is an exponentially distributed random waiting time for some random event to happen, we will think of P(T ≤ t) = 1 - e^(-λt) as the probability that the event will have happened by time t.
By simple integration it follows that

    F_T(t) = 1 - e^(-λt),   E(T) = 1/λ,   Var(T) = 1/λ².

The four functions in R associated with the exponential distribution (random numbers, pdf, cdf and quantile) are rexp, dexp, pexp, and qexp. The graph of Figure 2 was obtained with the following script:

x=seq(0,5,by=0.05)
y0=dexp(x,0.5)
y1=dexp(x,1.0)
y2=dexp(x,2.0)
plot(x,y0,type="l",lty=1,ylim=range(c(0,2)),
     xlab="waiting time",ylab="probability density")
lines(x,y1,lty=2)
lines(x,y2,lty=4)
legend(x=3.0,y=2, #place a legend at an appropriate place on the graph
       c("lambda=0.5","lambda=1.0","lambda=2.0"), #legend text
       lty=c(1,2,4), #define the line types (dashed, solid, etc.)
       bty="n")
grid()

2. The memoryless property

In all the examples discussed in this assignment, random events of some kind will happen in succession at random times. I will use the term waiting time to refer to the time difference between a random event and its successor. (I am using the term event here in its ordinary sense, as some sort of discrete occurrence, and not in the technical sense as a set of the σ-algebra of a probability space.) Waiting times will be modeled using exponential random variables.

The property of exponential random variables that makes them natural mathematical models of a random waiting time is the memoryless property. This means that, if T is an exponentially distributed waiting time,

    P(T > s + t | T > s) = P(T > t).

In words, if the random event with exponentially distributed waiting time has not happened by time s (T > s), and if the distribution of T has parameter λ, then your best prediction of how much longer you have to wait is still E[T] = 1/λ. The underlying mechanism that causes the event to happen cannot have any sort of internal clock telling the time s already elapsed. Imagine, for example, that you and a friend regularly meet 3 hours past noon to study, and that your friend is perfectly reliable (deterministic). Then you know at 2:00 PM that you will need to wait for him exactly one more hour. But
if 3 hours is only the expected waiting time and your friend is perfectly unreliable (memoryless), then the time you have already waited for him does not affect your estimate of how much longer you will still have to wait, which remains 3 hours. (A much better example is the decay process of radioactive atoms. The time of decay of an unstable atomic nucleus is, by quantum theory, exactly exponentially distributed. You may have seen elsewhere the concept of half-life, which is the median waiting time till decay of a radioactive atom.)

To see that the memoryless property holds for an exponential random variable T, set A = {T > t + s} and B = {T > s}, so A ∩ B = A. Therefore, P(A | B) = P(A ∩ B)/P(B) = P(A)/P(B), and the proof reduces to showing

    P(T > t + s) = P(T > t) P(T > s).
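The memoryless identity can also be checked numerically in R with pexp, the exponential cdf. The particular values of lambda, s, and t below are arbitrary choices for illustration (this check is not part of the original handout).

```r
#Numerical check of P(T > s+t | T > s) = P(T > t) for an exponential T.
#The parameter values are arbitrary.
lambda=0.7; s=1.3; t=2.1
lhs=(1-pexp(s+t,lambda))/(1-pexp(s,lambda)) #P(T > s+t | T > s)
rhs=1-pexp(t,lambda)                        #P(T > t)
all.equal(lhs,rhs)                          #TRUE
```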
Note that P(T > t) = 1 - P(T ≤ t) = 1 - F_T(t) = e^(-λt). Therefore, the memoryless property reduces to the relation e^(-λ(t+s)) = e^(-λt) e^(-λs), which is, of course, a property of the exponential function. It is useful to keep in mind this characterization of an exponential random variable: the probability that the random event has not happened by time t decreases exponentially as e^(-λt) if the waiting time is exponential with parameter λ.

3. Sums of independent exponential random variables

Suppose you meet on campus at random and independently with each of 3 friends once every two hours on average. The respective waiting times T_1, T_2, T_3 of meeting each friend are assumed exponentially distributed. The waiting time to meet any one of the three friends is naturally M = min{T_1, T_2, T_3}. What is the expected value of M? It may be intuitively clear that the answer should be two thirds of an hour. This is indeed true due to property (a) in the following theorem.

Theorem 0.1 (Independent exponential random variables). Let T_1, ..., T_k be independent, exponentially distributed random variables with parameters λ_1, ..., λ_k, respectively. Let M = min{T_1, ..., T_k}. Then

(a) M is exponentially distributed with parameter λ_1 + ... + λ_k.

(b) M is independent of which T_i is minimum. In other words, P(M > t | T_i = M) = P(M > t).

(c) The probability that T_i is the minimum is P(T_i = M) = λ_i/(λ_1 + ... + λ_k).

Proof. Due to independence, and noting that P(T_i > t) = 1 - F_{T_i}(t) = e^(-λ_i t), we have

    P(M > t) = P(T_1 > t, ..., T_k > t) = P(T_1 > t) ··· P(T_k > t) = e^(-λ_1 t) ··· e^(-λ_k t) = e^(-(λ_1 + ... + λ_k)t).

But this means that M is exponentially distributed with parameter λ_1 + ... + λ_k, proving (a). For property (b), note:

    P(M > t | T_i = M) = P(T_i > t | T_i = M) = ∫ P(T_i > t | T_i = s, M = s) f_M(s) ds.

We have used here identity (1) given at the beginning of this tutorial. Now,

    P(T_i > t | T_i = s, M = s) = 1 if s > t, and 0 if s ≤ t.

Therefore,

    P(M > t | T_i = M) = ∫_t^∞ f_M(s) ds = P(M > t).

But this is the claim of part (b). Now for part (c), using again identity (1),

    P(T_i = M) = ∫_0^∞ P(T_i = M | T_i = t) f_{T_i}(t) dt = ∫_0^∞ P(T_j ≥ t for all j ≠ i | T_i = t) λ_i e^(-λ_i t) dt.    (2)

Using the assumption that T_1, ..., T_k are independent,

    P(T_j ≥ t for all j ≠ i | T_i = t) = P(T_j ≥ t for all j ≠ i) = ∏_{j≠i} P(T_j ≥ t) = ∏_{j≠i} e^(-λ_j t) = e^(-(λ_1 + ... + λ_k)t + λ_i t).    (3)

Putting together identities (2) and (3),

    P(T_i = M) = ∫_0^∞ λ_i e^(-λ_i t) e^(-(λ_1 + ... + λ_k)t + λ_i t) dt = ∫_0^∞ λ_i e^(-(λ_1 + ... + λ_k)t) dt = λ_i/(λ_1 + ... + λ_k).

This is what we wanted to prove.
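Theorem 0.1 can also be checked by simulation. The sketch below, with three arbitrarily chosen rates, estimates E(M) and the probabilities P(T_i = M); it is a quick sanity check added here, not part of the original solutions.

```r
#Monte Carlo check of Theorem 0.1 with rates 1, 2, 3.
set.seed(1)
n=200000
T1=rexp(n,1); T2=rexp(n,2); T3=rexp(n,3)
M=pmin(T1,T2,T3)                         #minimum of the three waiting times
mean(M)                                  #close to 1/(1+2+3) = 1/6, by part (a)
c(mean(M==T1), mean(M==T2), mean(M==T3)) #close to 1/6, 2/6, 3/6, by part (c)
```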
4. Interpretation of the theorem and the diagram

We are now ready to interpret diagram 1 shown above. In diagram 3, below, I have simplified the first one by eliminating the nodes that are not connected to the one containing the token, labeled by i, as well as the arrows that are not directed from i to one of the other nodes. This is the part of the diagram needed for determining the next state transition.

Figure 3: Part of diagram 1 indicating the current state i (which contains the moving token) and the states to which the token can jump in the next step. Only the arrows issuing from i are shown.

The mechanism of state transition is as follows. Suppose that there are k arrows issuing from i, which I indicate by the pairs (i,1), ..., (i,k). Among these is the pair (i,j) with the label λ_ij shown on the diagram. Think of each arrow (i,j) as a possible action, or event, happening at an exponentially distributed waiting time T_ij with parameter λ_ij. The times T_i1, ..., T_ik are assumed to be independent. Then, according to Theorem 0.1, the transition goes as follows:

(a) Generate a random number t from an exponential distribution with parameter λ_i = λ_i1 + ... + λ_ik. This is the time when the next transition will happen. (Part (a) of the theorem.)

(b) Now, independently of the value obtained for the minimal time M = t, generate an integer j between 1 and k with pmf p(j) = λ_ij/(λ_i1 + ... + λ_ik). This integer is then the index of the node to which the token is moved, and t is the new time. (Part (c) of the theorem.)

5. Example: random switching

The simplest example is defined by diagram 4.

Figure 4: Diagram representing an exponential random variable with parameter λ.

The set of states is {0,1} and the system is initially in state 0. After an exponentially distributed waiting time with parameter λ, the state switches to 1 and remains there forever. Here the chain has only two steps: X_0 = 0 and X_1 = 1. The only quantity of interest is the random time when the switch occurs. So, in a sense, the diagram
represents an exponential random variable with parameter λ.
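The two-step transition mechanism of Section 4 can be sketched directly in R. The rates below are arbitrary illustrative values for three arrows issuing from the current state; this snippet is an added sketch, not from the original handout.

```r
#One transition step of a continuous-time Markov chain (illustrative rates).
rates=c(0.5,1.5,2.0)                            #lambda_i1, lambda_i2, lambda_i3
dt=rexp(1,sum(rates))                           #waiting time to the jump (Theorem 0.1(a))
j=sample(length(rates),1,prob=rates/sum(rates)) #index of the next state (Theorem 0.1(c))
```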
6. Example: branching out

In this case the chain starts at X_0 = 0, jumps to X_1 ∈ {1,2,3}, then stops there. The information of interest here is the transition time T and the value of the state X_1.

Figure 5: Switching to one of several possible states in one step.

According to Theorem 0.1, the transition time is exponentially distributed with parameter λ_1 + λ_2 + λ_3, and the new state is taken from {1,2,3} with pmf

    p(i) = λ_i/(λ_1 + λ_2 + λ_3).

7. Example: reversible switching

This process is described in Figure 6. The chain is X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, ... The states alternate between 0 and 1 in a perfectly deterministic way. The quantities of interest are the random times of the back-and-forth state jumps: if the current state is 0, the system will next switch to 1 after an exponentially distributed waiting time with parameter λ; and if the current state is 1, the next switch to 0 will happen after an exponentially distributed waiting time with parameter µ.

Figure 6: A reversible switching process.

For a concrete example, suppose λ = 2 and µ = 1. A typical sample history of the process up to time tmax = 20 can be obtained as follows. (The use of stepfun is somewhat tricky; you should look at ?stepfun for the details. Note that the vector times has length one unit less than the vector states.)

#We generate a sample history of the process up to time tmax
tmax=20
#The exponential parameters are:
lambda=2
mu=1
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=matrix(0,1,1)
#The vector states will record the sequence of states,
#which may be either 0 or 1
states=matrix(0,1,1)
#The step index is n
n=1
while (t<tmax) {
  if (states[n]==0) {
    dt=rexp(1,lambda)
    states[n+1]=1
  } else {
    dt=rexp(1,mu)
    states[n+1]=0
  }
  t=t+dt
  times[n]=t
  n=n+1
}
#We can visualize the history by plotting states
#against times. Note the use of the step function operation.
#Check ?stepfun for the details of how to use it.
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",main="A sample history")
grid()

Figure 7: A sample history of the Markov chain defined by the diagram of Figure 6. Each circle indicates the value of the state (0 or 1) over the time interval to the right of the circle.

8. Example: Poisson processes

The Poisson process is introduced in Section 3.2 of the textbook. It can be defined by the diagram of Figure 8. The token is initially in the circle node representing state X_0 = 0; if at
any given time t the token is in state n, it will jump to state n+1 at time t+T, where T is an exponentially distributed waiting time with parameter λ. Let N(t) denote the state of the process at time t. This is a piecewise constant, discontinuous function of t that counts the total number of transitions up to time t.

Figure 8: Diagram defining a Poisson process with rate parameter λ.

The following graph shows one sample history of a Poisson process with rate λ = 1 (per minute, say) over 10 minutes.

Figure 9: A sample history of the Poisson process with λ = 1 over the time interval [0,10].

The graph was generated by the following R script.

tmax=10 #Final time (10 minutes)
lambda=1
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=matrix(0,1,1)
#The vector states will record the sequence of states
#from the set {0, 1, 2, ...}
states=matrix(0,1,1)
#The step index is n
n=1
while (t<tmax) {
  dt=rexp(1,lambda)
  t=t+dt
  times[n]=t
  states[n+1]=n #The states vector is one unit longer than times
  n=n+1
}
#We now plot the sample history
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",
     main="Sample history of a Poisson process")
grid()

The following definition gives an alternative way to introduce the Poisson process.

Definition 0.2. A Poisson process with rate λ > 0 is a random process N(t) ∈ {0,1,2,...}, which we interpret as the number of random events occurring between times 0 and t, such that

(a) The process starts at 0. That is, N(0) = 0.

(b) The numbers of events occurring over disjoint time intervals are independent. That is, if (a,b] and (c,d] are disjoint time intervals, then N(b) - N(a) and N(d) - N(c) are independent random variables.

(c) The process is time homogeneous. This means that N(b) - N(a) and N(b+s) - N(a+s), which are the numbers of events over an interval (a,b] and over this interval's time translate (a+s,b+s], have the same probability distribution for all s ≥ 0.

(d) The probability of transition from N(0) = 0 to N(h) = 1 for a small time h > 0 is, up to first order in h, given by λh. More precisely,

    lim_{h→0} P(N(h) = 1)/h = λ.

(e) The probability of more than one transition over a small time interval is 0 up to first order in the length of the interval. That is,

    lim_{h→0} P(N(h) ≥ 2)/h = 0.

From this definition, we can recover the characterization of the Poisson process in terms of the diagram. This is shown in the next theorem. (The details of the proof of Theorem 0.3 are not needed for solving the homework problems. It is OK to simply skim through it, at least for now. I hope to return to some of this in class when covering chapter 3.)

Theorem 0.3. The Poisson process N(t) with rate λ > 0 satisfies the following properties:

(a) For each nonnegative integer n,

    P(N(t) = n) = (λt)^n e^(-λt) / n!

(b) Let T_1 < T_2 < ... be the state transition times of N(t) and D_j = T_j - T_{j-1}, for j = 1, 2, ..., the waiting times between transitions. Then D_1, D_2, ... are
independent, exponentially distributed random variables with parameter λ.

Proof. We begin by observing that, by properties (a) and (b),

    P(N(s+t) = n | N(s) = m) = P(N(s+t) - N(s) = n - m | N(s) - N(0) = m) = P(N(s+t) - N(s) = n - m).
By property (c),

    P(N(s+t) = n | N(s) = m) = P(N(t) - N(0) = n - m) = P(N(t) = n - m).

Defining p_i(t) = P(N(t) = i), we obtain

    p_n(t+s) = P(N(s+t) = n) = Σ_{m=0}^{n} P(N(s+t) = n | N(s) = m) P(N(s) = m) = Σ_{m=0}^{n} p_{n-m}(t) p_m(s).

Note: if we define a matrix P(t) = (p_ij(t)) whose elements are p_ij(t) = p_{j-i}(t), the expression just proved can be written in matrix form as P(t+s) = P(t)P(s). Note that P(t)P(s) = P(t+s) = P(s+t) = P(s)P(t).

Properties (a), (d), and (e) imply that P(t) converges toward the identity matrix as t approaches 0. That is,

    p_mn(t) = P(N(t) = n - m) → 1 if m = n, and → 0 if m ≠ n, as t → 0.

Let I denote the identity matrix. (Since the number of states is infinite, P(t) is an infinite matrix. This, however, does not create any essential difficulties.) We have shown that P(s) → I as s converges toward 0. The matrix-valued function P(t) can then be shown to be continuous:

    P(t+s) - P(t) = P(t)P(s) - P(t) = P(t)[P(s) - I] → 0,

therefore P(t+s) → P(t) as s converges toward 0. We can also show that P(t) is a differentiable function of t. First observe the following consequence of properties (d) and (e):

    lim_{h→0} [p_mn(h) - p_mn(0)]/h = 0                                  if n ≥ m + 2,
    lim_{h→0} [p_mn(h) - p_mn(0)]/h = lim_{h→0} p_1(h)/h = λ             if n = m + 1,
    lim_{h→0} [p_mn(h) - p_mn(0)]/h = lim_{h→0} [p_0(h) - 1]/h
                                    = -lim_{h→0} P(N(h) ≥ 1)/h = -λ      if n = m.

Therefore, p'_mn(0) = λ_mn, where

    λ_mn = λ if n = m + 1,  λ_mn = -λ if n = m,  λ_mn = 0 if n ≥ m + 2 or n < m.

We conclude that

    P'(t) = lim_{h→0} [P(t+h) - P(t)]/h = lim_{h→0} [P(h)P(t) - P(t)]/h = lim_{h→0} [(P(h) - I)/h] P(t) = ΛP(t),

where Λ = (λ_mn). The matrix differential equation P'(t) = ΛP(t) can be written out explicitly as a system of ordinary differential equations in the entries of P(t). Each equation has the form

    p'_i(t) = Σ_j λ_ij p_j(t).
As most entries of Λ are zero, the above sum has only finitely many terms for each equation. Explicitly,

    p'_0(t) = -λ p_0(t)
    p'_1(t) = -λ p_1(t) + λ p_0(t)
    p'_2(t) = -λ p_2(t) + λ p_1(t)
    ...
    p'_n(t) = -λ p_n(t) + λ p_{n-1}(t)

It is now a simple induction argument to check that

    p_n(t) = (λt)^n e^(-λt) / n!

is a solution of the system of ordinary differential equations with the correct initial condition. We can finally appeal to the uniqueness property of solutions of systems of differential equations to conclude that part (a) of the theorem holds.

It remains to show part (b): the waiting times D_1, D_2, ... are independent and exponentially distributed with parameter λ. Note the equivalences of events (keep in mind that N(T_n) = n):

    N(T_{n-1} + t) - N(T_{n-1}) = 0  ⟺  N(T_{n-1} + t) = n - 1  ⟺  T_{n-1} + t < T_n  ⟺  D_n > t.

Therefore,

    P(D_n > t) = P(N(T_{n-1} + t) - N(T_{n-1}) = 0) = P(N(t) = 0) = p_0(t) = e^(-λt).

This means that the D_n are all exponentially distributed with parameter λ. To prove independence, let m < n and suppose that D_m = s. Then T_{m-1} + s = T_m ≤ T_{n-1}. So the intervals (T_{m-1}, T_{m-1}+s] and (T_{n-1}, ∞) are disjoint. Thus for all t ≥ 0, we have by property (b) that the random variables N(T_{n-1}+t) - N(T_{n-1}) and N(T_{m-1}+s) - N(T_{m-1}) are independent. Independence of D_m and D_n now results from the observation that (1) the event D_n > t is the same as N(T_{n-1}+t) - N(T_{n-1}) = 0, and (2) D_m = s is the same as N(T_{m-1}+s) - N(T_{m-1}) = 1 and N(T_{m-1}+s') - N(T_{m-1}) = 0 for all s' < s.

9. General continuous-time Markov chains

Arguments used in the proof of Theorem 0.3 also prove a much more general result. Going back to the diagram of Figure 1, let λ_ij be the rate constant for the arrow connecting state i to state j. Define λ_ii = -Σ_{j≠i} λ_ij and the n-by-n matrix Λ = (λ_ij). Note that the elements of each row of Λ add up to 0. For the Poisson process this matrix is

    Λ = [ -λ    λ    0    0  ...
           0   -λ    λ    0  ...
           0    0   -λ    λ  ...
          ...                    ]

Also define the matrix-valued function P(t) = (p_ij(t)), where each element p_ij(t) gives the probability that the process
started at time 0 in state i will be in state j at time t. Then it can be shown that

(a) P(0) = I, where I is the identity matrix;

(b) P(t+s) = P(t)P(s) for all nonnegative t and s;

(c) P'(t) = ΛP(t) = P(t)Λ, where P'(t) is the derivative of P(t) in t.

If you have taken a course in ordinary differential equations or matrix algebra, you may have learned that
these conditions characterize the matrix exponential: P(t) = e^(Λt). In the proof of Theorem 0.3 we have effectively computed a matrix exponential by solving a system of differential equations. The resulting matrix was in that case

    e^(Λt) = [ e^(-λt)   λt e^(-λt)   (λt)² e^(-λt)/2!   ...
               0         e^(-λt)      λt e^(-λt)         ...
               0         0            e^(-λt)            ...
               ...                                          ]

Methods for finding matrix exponentials are studied in matrix algebra courses. I simply note here that the Taylor series expansion

    P(t) = I + Λt + ... + Λ^n t^n/n! + ...

makes sense for matrices in general, and indeed holds true for finite matrices. It is possible to prove this fact for infinitely many states in many cases.

10. The Poisson distribution

A random variable Y is said to have the Poisson distribution with parameter λ if Y = N(1), where N(t) is the Poisson process with rate parameter λ discussed above. From Theorem 0.3 it follows that Y has probability mass function supported on the nonnegative integers 0, 1, 2, ... and given by the function

    p(x) = λ^x e^(-λ) / x!

Therefore, a Poisson random variable gives the random number of transition events occurring in the time interval [0,1] for the process described by the diagram of Figure 8. The R functions associated to Poisson random variables are

dpois(x, lambda)
ppois(q, lambda)
qpois(p, lambda)
rpois(n, lambda)

More generally, Theorem 0.3 says that the random variable N(t), t > 0, is also supported on the set of nonnegative integers, and has pmf

    P(N(t) = x) = (λt)^x e^(-λt) / x!

11. The gamma distribution

Recall that T_n is the time of the nth transition event for the process represented by the diagram of Figure 8. From Theorem 0.3 we obtain the cumulative distribution function of T_n:

    F_{T_n}(t) = P(T_n ≤ t) = P(N(t) ≥ n) = Σ_{j=n}^{∞} p_j(t) = Σ_{j=n}^{∞} (λt)^j e^(-λt) / j!

The probability density function of T_n is obtained by taking the derivative of F_{T_n}(t) with respect to t. In
problem 1, below, you will show that the pdf is

    f_{T_n}(t) = λ e^(-λt) (λt)^(n-1) / (n-1)!    (4)

Definition 0.4 (The gamma distribution). A continuous random variable X has the gamma distribution with parameters α, β (or X has the Γ(α,β) distribution), for α > 0, β > 0, if the probability density function of X is

    f_X(x) = x^(α-1) e^(-x/β) / (Γ(α) β^α),

supported on the interval (0,∞). The parameter α is called the shape and β the scale of the gamma distribution; 1/β is called the rate. Recall that the gamma function Γ(α) equals (α-1)! if α is a positive integer. For values of α > 0 that are not necessarily integer, the gamma function is defined by

    Γ(α) = ∫_0^∞ y^(α-1) e^(-y) dy.

It follows from the above discussion that T_n is a gamma random variable with a Γ(n, 1/λ) distribution. This shows the close relationship among the exponential, Poisson, and gamma distributions. The corresponding R functions are

dgamma(x, shape=a, scale=b)
pgamma(q, shape=a, scale=b)
qgamma(p, shape=a, scale=b)
rgamma(n, shape=a, scale=b)

12. Example: birth-and-death processes

A widely studied Markov chain goes by the name birth-and-death chain. It may be defined by the following diagram.

Figure 10: The continuous-time birth-and-death Markov chain, with birth rate λ and death rate µ.

This chain can serve as a crude probabilistic model of a queueing system, in which new customers join the queue at a rate λ and are served at a rate µ. The random process N(t), giving the number of customers in line at time t, has time-dependent pmf p_n(t) = P(N(t) = n) that can be computed using matrix exponentiation. Writing P(t) = (p_ij(t)) for the matrix of transition probabilities, we have, as indicated in the remark about general continuous-time Markov chains, that P(t) = e^(Λt). Here Λ is the matrix

    Λ = [ -λ        λ        0        0   ...
           µ   -(µ+λ)        λ        0   ...
           0        µ   -(µ+λ)        λ   ...
          ...                                ]

You will simulate this chain in one of the problems below.
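The gamma-Poisson relation of Section 11, F_{T_n}(t) = P(N(t) ≥ n), can be verified numerically with pgamma and ppois. The parameter values below are arbitrary illustrative choices; this check is an addition to the handout.

```r
#Check that P(T_n <= t) = P(N(t) >= n) for the Poisson process.
#lambda, n and t are arbitrary illustrative values.
lambda=2; n=5; t=1.7
a=pgamma(t,shape=n,scale=1/lambda) #cdf of T_n ~ Gamma(n, scale 1/lambda)
b=1-ppois(n-1,lambda*t)            #P(N(t) >= n)
all.equal(a,b)                     #TRUE
```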
A random service line. In an idealized waiting line (or queue), the time for a new person to join the line is exponentially distributed with parameter λ. The service time for the person at the front of the line is exponential with parameter µ. (I assume that µ and λ are measured in reciprocal minutes.) We wish to simulate the process for 30 minutes with λ = 1 and µ = 1 and graph the length of the line as a function of time.

Figure 11: Sample history of the birth-and-death process of problem 3. It was produced by the following script.

#Birth and death chain
#We generate a sample history of the process up to time tmax
tmax=30
#The parameters are:
lambda=1
mu=1
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=matrix(0,1,1)
#The vector states will record the sequence of states
#from the set {0, 1, 2, ...}
states=matrix(0,1,1) #I will let the first state be 0
#The step index is n
n=1
while (t<tmax) {
  if (states[n]==0) {
    dt=rexp(1,lambda)
    states[n+1]=1
  } else {
    dt=rexp(1,lambda+mu)
    s=states[n]
    s=sample(c(s-1,s+1),1,prob=c(mu/(mu+lambda),lambda/(mu+lambda)))
    states[n+1]=s #The states vector is one unit longer than times
  }
  t=t+dt
  times[n]=t
  n=n+1
}
#We now plot the sample history
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",
     main="Sample history of a birth and death process")
grid()

Growth in a cell culture. This is an example of a branching process. We wish to simulate the growth in the number of cells by the following algorithm. When a cell is born, draw sample exponential random times T_b and T_d with rates λ and µ, respectively. If T_b < T_d, then the simulated cell divides at T_b into two new cells (and T_d is discarded). If T_d < T_b, then the cell dies at T_d (and T_b is discarded). We simulate this process for 20 minutes starting from a single cell with µ = 1 (per minute) and

1. λ = 1.00 per minute
2. λ = 1.05 per minute
3. λ = 1.10 per minute

Observe that if there are n cells presently in the culture, the rate at which a new division or death happens is nλ and nµ, respectively. This is due to Theorem 0.1.

Figure 12: Diagram for the problem.

The R function NumberCells(tmax,lambda,mu,k) (see below) gives the number of cells at the end of tmax (here k is the initial number of cells). Prior to showing the program, think about the following questions:

1. If the current number of cells is n, what is the probability distribution of the waiting time till the next event (division or death)?

2. Let p_b and p_d be the probabilities that the next event is a birth (cell division) or a death. What are the values of p_b and p_d in terms of λ and µ?
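As a hint toward the two questions above, one step of the cell process can be sketched as follows (the state n and the rates are arbitrary illustrative values, and this sketch is an addition to the handout):

```r
#One step of the cell culture process with n cells (illustrative values).
n=10; lambda=1.05; mu=1
dt=rexp(1,n*(lambda+mu))  #waiting time to the next event is Exp(n(lambda+mu)), by Theorem 0.1(a)
pb=lambda/(lambda+mu)     #probability that the next event is a division, by Theorem 0.1(c)
pd=mu/(lambda+mu)         #probability that the next event is a death
```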
Note: If the number of cells at the end of the 20-minute period is recorded in a vector Cells of length N = 10^4, then the mean and standard deviation of the data are given in R by mean(Cells) and sd(Cells). If you have some problem with the sd function, try this: sd(as.numeric(Cells)). The function NumberCells can be written as follows.

NumberCells=function(tmax,lambda,mu,k,plotgraph) {
  #tmax is the final time
  #lambda is the division rate
  #mu is the death rate
  #k is the initial number of cells
  #plotgraph is 1 if a sample history of the process is to
  #be plotted, and 0 if not
  #Initialize the current time to 0
  t=0
  #The vector times will record the state transition times
  times=matrix(0,1,1)
  #The vector states will record the sequence of states
  #from the set {0, 1, 2, ...}
  states=matrix(0,1,1)
  #Initial state:
  states[1]=k
  #The step index is n
  n=1
  while (t<tmax) {
    if (states[n]==0) {
      dt=tmax
      states[n+1]=0
    } else {
      s=states[n]
      #The next event happens at an exponential waiting time with rate
      rate=s*(lambda+mu)
      dt=rexp(1,rate)
      #Probability that the next event is a division (birth)
      pb=lambda/(lambda+mu)
      #Probability that the next event is a death
      pd=mu/(lambda+mu)
      s=sample(c(s-1,s+1),1,prob=c(pd,pb))
      states[n+1]=s #The states vector is one unit longer than times
    }
    t=t+dt
    times[n]=t
    n=n+1
  }
  if (plotgraph==1) {
    #We now plot the sample history
    F=stepfun(times,states,f=0)
    plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="Number of cells",
         main="Sample history of population growth process")
    grid()
  }
  #The final number of cells is
  states[n]
}

We can now perform the experiments asked for in the problem.

N=10^4
tmax=20
lambda=1.00
mu=1
k=1
plotgraph=0
Cells=matrix(0,1,N)
for (i in 1:N){
  Cells[i]=NumberCells(tmax,lambda,mu,k,plotgraph)
}
mean(as.numeric(Cells))
sd(as.numeric(Cells))

#########################
lambda=1
> mean(as.numeric(Cells))
[1]
> sd(as.numeric(Cells))
[1]
#########################
lambda=1.05
> mean(as.numeric(Cells))
[1]
> sd(as.numeric(Cells))
[1]
#########################
lambda=1.1
> mean(as.numeric(Cells))
[1]
> sd(as.numeric(Cells))
[1]

Each sample history can end in extinction (number of cells equal to 0) or not. Here is a typical-looking graph for the number of cells when extinction does not occur, for λ = 1.3. Note that the final number of cells can vary drastically, since the standard deviation can be very large.
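A standard fact about linear birth-and-death (branching) processes, not proved in this tutorial, is that E[N(t)] = k e^((λ-µ)t); it gives a sanity check on the simulation. The function sim_cells below is our own stripped-down version of NumberCells that only returns the final count; the parameter values are arbitrary.

```r
#Sanity check: for a linear birth-death process, E[N(t)] = k*exp((lambda-mu)*t).
#sim_cells is a stripped-down NumberCells (no plotting, no history vectors).
set.seed(2)
sim_cells=function(tmax,lambda,mu,k){
  t=0; s=k
  while (t<tmax && s>0) {
    t=t+rexp(1,s*(lambda+mu))   #waiting time to the next event
    if (t>=tmax) break
    s=s+sample(c(1,-1),1,prob=c(lambda,mu)) #division or death
  }
  s
}
vals=replicate(20000,sim_cells(3,1.1,1,1))
mean(vals) #should be near exp((1.1-1)*3), about 1.35
```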
Figure 13: Sample history of the cell culture process with λ = 1.3 and µ = 1.

Problems

1. (Text, Exercise 2.3, page 58) Consider the Markov chain with state space S = {0,1,2,...} and transition probabilities p(x, x+1) = 2/3, p(x,0) = 1/3. Show that the chain is positive recurrent and give the limiting probability π.

Solution. The chain is clearly irreducible. To show that the chain is positive recurrent it is sufficient to show that it admits an invariant probability measure. The equation for an invariant measure is

    π(x) = Σ_{y∈S} π(y) p(y,x).

We obtain the equations:

    π(x) = (2/3) π(x-1) = ... = (2/3)^x π(0) for x ≥ 1, and π(0) = (1/3) Σ_{x=0}^{∞} π(x) = 1/3.

Then

    π(x) = (1/3)(2/3)^x for x ≥ 0.

2. (Text, Exercise 2.13, page 60) Consider a population of animals with the following rule for (asexual) reproduction: an individual that is born has probability q of surviving long enough to produce offspring. If the individual
does produce offspring, she produces one or two offspring, each with equal probability. After this the individual no longer reproduces and eventually dies. Suppose the population starts with four individuals.

(a) For which values of q is it guaranteed that the population will eventually die out?

(b) If q = 0.9, what is the probability that the population survives forever?

Solution. (a) Let Y be the random variable giving the number of offspring of one individual. It is given that

    P(Y = 0) = 1 - q,  P(Y = 1) = q/2,  P(Y = 2) = q/2.

The expected value of Y is µ := E(Y) = 0·(1-q) + 1·(q/2) + 2·(q/2) = 3q/2. We know that the population eventually dies out with probability 1 if and only if µ ≤ 1. This corresponds to q ≤ 2/3.

(b) Let φ(s) be the generating function of Y. This is the function

    φ(s) = Σ_{k≥0} p_k s^k = (1-q) + (q/2)s + (q/2)s².

The extinction probability for a population started from one individual, defined by

    a = P_1(population eventually dies out),

is the smallest positive root of φ(s) = s. Now

    s = (1-q) + (q/2)(s + s²).

Equivalently, a is the smallest positive root of

    s² - ((2-q)/q)s + 2(1-q)/q = 0.

When q = 0.9, the equation becomes

    s² - (11/9)s + 2/9 = 0.

This equation has roots s = 1 and s = 2/9. Therefore, the extinction probability is a = 2/9. The extinction probability for initial population size X_0 = 4 is a⁴ = (2/9)⁴, and the probability of the population surviving forever when q = 0.9 is 1 - (2/9)⁴ ≈ 0.9976.

3. (Text, Exercise 3.1, page 82) Suppose that the number of calls per hour arriving at an answering service follows a Poisson process with λ = 4.

(a) What is the probability that fewer than two calls come in the first hour?
(b) Suppose that six calls arrive in the first hour. What is the probability that at least two calls will arrive in the second hour?

(c) The person answering the phones waits until fifteen phone calls have arrived before going to lunch. What is the expected amount of time that the person will wait?

(d) Suppose it is known that exactly eight calls arrived in the first two hours. What is the probability that exactly five of them arrived in the first hour?

(e) Suppose it is known that exactly k calls arrived in the first four hours. What is the probability that exactly j of them arrived in the first hour?

Solution. (a) We know that the probability of k calls by time t is

  P(X_t = k) = e^{−λt} (λt)^k / k!.

The probability of fewer than 2 calls in the first hour is

  P(X_1 = 0) + P(X_1 = 1) = e^{−4} + 4e^{−4} = 5e^{−4} ≈ 0.0916.

(b) From the result of (a), the probability of at least 2 calls in the first hour is 1 − 5e^{−4}. Because of the independence of X_2 − X_1 and X_1 − X_0, the probability that at least two calls arrive in the second hour is the same as in the first hour. So this probability is approximately 0.908.

(c) Let T_1, …, T_15 be the times between consecutive calls. We know that these are independent exponentially distributed random variables with parameter λ = 4. Thus the expected time is

  E(T_1 + ··· + T_15) = 15 E(T_1) = 15/λ = 15/4.

Therefore, the expected time is 15/4 hours.

(d) We want to find the probability P(X_1 = 5 | X_2 = 8). The definition of conditional probability implies that

  P(X_1 = 5 | X_2 = 8) = P(X_1 = 5, X_2 = 8) / P(X_2 = 8) = P(X_2 = 8 | X_1 = 5) P(X_1 = 5) / P(X_2 = 8).

Since X_2 − X_1 is independent of X_1 and has the same distribution as X_1, this gives

  P(X_1 = 5 | X_2 = 8) = P(X_1 = 3) P(X_1 = 5) / P(X_2 = 8)
                       = [e^{−4} (4·1)^3/3!] [e^{−4} (4·1)^5/5!] / [e^{−8} (4·2)^8/8!]
                       = (8 choose 5) (1/2)^8 = 7/32.

Therefore, P(X_1 = 5 | X_2 = 8) = 7/32.

(e) We now want the conditional probability P(X_1 = j | X_4 = k) for k ≥ j. The argument is the same as in (d). The result is

  P(X_1 = j | X_4 = k) = (k choose j) (1/4)^j (3/4)^{k−j}.

4. Do a stochastic simulation of the Markov chain of exercise 3.5, page 84 of the textbook: X_t is the Markov chain with state space {1, 2} and rates α(1, 2) = 1, α(2, 1) = 4. Then:
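Before turning to the simulation problem, the binomial answers in parts (d) and (e) above can be double-checked directly from the Poisson probabilities. This is a numerical sketch in Python, not part of the original solution; the choice k = 10, j = 4 in the part (e) check is an arbitrary illustration, not from the text:

```python
import math

lam = 4.0

def pois(k, t):
    # P(X_t = k) for a Poisson process with rate lam
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

# Part (d): P(X_1 = 3) P(X_1 = 5) / P(X_2 = 8) versus C(8,5)(1/2)^8
direct_d = pois(3, 1) * pois(5, 1) / pois(8, 2)
binom_d = math.comb(8, 5) * 0.5 ** 8

# Part (e) with k = 10, j = 4: P(X_1 = j) P(X_3 = k - j) / P(X_4 = k)
# versus C(k, j)(1/4)^j (3/4)^(k - j)
k, j = 10, 4
direct_e = pois(j, 1) * pois(k - j, 3) / pois(k, 4)
binom_e = math.comb(k, j) * 0.25 ** j * 0.75 ** (k - j)

print(direct_d, binom_d)   # both equal 7/32
print(direct_e, binom_e)   # equal to each other
```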
(a) Plot the graph of a sample history of the process over the time interval [0, 20].

(b) Give the mean (sample) waiting time.

(c) What should be the exact value of the mean waiting time for a very long sample history? (Check whether your experimentally obtained result is close to the exact value. For better precision, you may take a longer time interval, say [0, 500].)

Solution. (a) The graph of the sample history over the time interval from 0 to 20 is shown below:

[Figure 14: Sample history of the process of problem 4 — a step-function plot of State against Time, titled "A sample history".]

This graph can be obtained using the program

#We generate a sample history of the process up to time tmax
tmax=20
#The exponential parameters are:
lambda=1
mu=4
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=numeric(0)
#The vector states will record the sequence of states,
#which may be either 0 or 1 (here 0 stands for state 1
#of the chain and 1 for state 2, matching the rates below)
states=0
#The step index is n
n=1
while (t<tmax) {
  if (states[n]==0) {
    dt=rexp(1,lambda)
    states[n+1]=1
  } else {
    dt=rexp(1,mu)
    states[n+1]=0
  }
  t=t+dt
  times[n]=t
  n=n+1
}
#We can visualize the history by plotting states
#against times. Note the use of the step function operation;
#check ?stepfun for the details of how to use it.
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",main="A sample history")
grid()

(b) To obtain the mean waiting time, we may do as follows: run the previous program with a larger value of tmax, say tmax=500. Then

n=length(times)
times_shift=c(0,times[1:(n-1)])
difference=times-times_shift
mean(difference)

gives the mean time. I obtained the value 0.621.

(c) The exact value can be obtained as follows. The mean time between transitions from 1 to 2 is 1 and from 2 to 1 is 1/4. Because these states alternate, the mean one-step time is the average (1 + 1/4)/2 = 0.625. So the value obtained in (b) is reasonable.
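As a cross-check on parts (b) and (c), the same two-state chain can be simulated in any language. The Python sketch below mirrors the R program above (the fixed seed is chosen arbitrarily, not from the text) and estimates the mean waiting time, which should land near the exact value (1 + 1/4)/2 = 0.625:

```python
import random

random.seed(1)
tmax = 500.0
lam, mu = 1.0, 4.0   # holding rates in states 1 and 2, as in the problem

t, state = 0.0, 1
waits = []
while t < tmax:
    # Holding time is Exp(1) in state 1 and Exp(4) in state 2
    dt = random.expovariate(lam if state == 1 else mu)
    waits.append(dt)
    t += dt
    state = 2 if state == 1 else 1   # the two states alternate

mean_wait = sum(waits) / len(waits)
print(mean_wait)   # should be close to (1 + 1/4)/2 = 0.625
```

With tmax = 500 the estimate typically lands within a few hundredths of 0.625, consistent with the sample value obtained in part (b).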