110SOR201 (2002) — Chapter 4: Markov Chains
4.1 Introduction and Definitions

Consider a sequence of consecutive times (or trials, or stages): n = 0, 1, 2, .... Suppose that at each time a probabilistic experiment is performed, the outcome of which determines the state of the system at that time. For convenience, denote (or label) the states of the system by {0, 1, 2, ...}, a discrete (finite or countably infinite) state space: at each time these states are mutually exclusive and exhaustive. Denote the event "the system is in state k at time n" by X_n = k. For a given value of n, X_n is a random variable and has a probability distribution

    {P(X_n = k) : k = 0, 1, 2, ...},

termed the absolute probability distribution at time n. We refer to the sequence {X_n} as a discrete-time stochastic process.

Example
Consider a sequence of independent Bernoulli trials, each with the same probability of success. Let X_n be the number of successes obtained after n trials (n = 1, 2, ...). We might find

    Trial n:   1  2  3  4  5  6  7  8  ...
    Outcome:   S  S  S  F  S  S  F  F  ...
    X_n:       1  2  3  3  4  5  5  5  ...

(Here we ignore n = 0, or define X_0 = 0.) This is a particular realization of the stochastic process {X_n}. We know that P(X_n = k) is binomial.

The distinguishing features of a stochastic process {X_n} are the relationships between X_0, X_1, X_2, .... Suppose we have observed the system at times 0, 1, ..., n-1 and we wish to make a probability statement about the state of the system at time n:

    State:  i_0  i_1  i_2  ...  i_{n-1}  ?
    Time:    0    1    2   ...    n-1    n

(the states up to time n-1 being known). The most general model has the state of the system at time n dependent on the entire past history of the system. We would then be interested in conditional probabilities

    P(X_n = i_n | X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}).
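The binomial claim in the Bernoulli-trials example is easy to check numerically. The following Python sketch (illustrative, not part of the original notes) computes P(X_n = k) for the realization shown above, which has 5 successes in 8 trials:

```python
from math import comb

def binomial_pmf(n, k, p):
    """P(X_n = k): probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The realization S S S F S S F F has 5 successes in 8 trials; with p = 1/2:
prob_5_of_8 = binomial_pmf(8, 5, 0.5)   # C(8,5) / 2^8 = 56/256
```

For fixed n the values {binomial_pmf(n, k, p) : k = 0, ..., n} form the absolute probability distribution at time n.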
The simplest model assumes that the states at different times are independent events, so that

    P(X_n = i_n | X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}) = P(X_n = i_n),
        n = 1, 2, ...; i_0 = 0, 1, 2, ..., etc.

The next simplest (Markov) model introduces the simplest form of dependence.

Definition
A sequence of r.v.s X_0, X_1, X_2, ... is said to be a (discrete) Markov chain if

    P(X_n = j | X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i) = P(X_n = j | X_{n-1} = i)        (4.1)

for all possible n, i, j, i_0, ..., i_{n-2}.

Hence in a Markov chain we do not require knowledge of what happened at times 0, 1, ..., (n-2), but only what happened at time (n-1), in order to make a conditional probability statement about the state of the system at time n. We describe the system as making a transition from state i to state j at time n if X_{n-1} = i and X_n = j, with (one-step) transition probability P(X_n = j | X_{n-1} = i). We shall only consider systems for which the transition probabilities are independent of time, so that

    P(X_n = j | X_{n-1} = i) = p_{ij}  for n ≥ 1 and all i, j.        (4.2)

This is the stationarity assumption, and {X_n : n = 0, 1, ...} is then termed a (time-)homogeneous Markov chain.

The transition probabilities {p_{ij}} form the transition probability matrix P:

        ( p_00  p_01  p_02  ... )
    P = ( p_10  p_11  p_12  ... )
        ( p_20  p_21  p_22  ... )
        (  ...   ...   ...      )

The {p_{ij}} have the properties

    p_{ij} ≥ 0 for all i, j;    Σ_j p_{ij} = 1 for all i        (4.3)

(note that i → i transitions are possible). Such a matrix is termed a stochastic matrix.

It is often convenient to show P on a state (or transition) diagram: each vertex (or node) in the diagram corresponds to a state and each arrow to a non-zero p_{ij}. [A numerical example matrix, with entries including 0.3 and 0.5, and its state diagram appeared here; they have not survived transcription.]
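The two defining properties of a stochastic matrix can be checked mechanically. A minimal Python sketch (the example entries are illustrative, not from the notes):

```python
def is_stochastic(P, tol=1e-12):
    """Verify the defining properties of a stochastic matrix:
    every entry is non-negative, and every row sums to 1."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

# A small 3-state example (entries chosen for illustration):
P = [[0.0,  0.5,  0.5],
     [0.25, 0.5,  0.25],
     [1.0,  0.0,  0.0]]
ok = is_stochastic(P)
```

Each non-zero off-diagonal entry of such a matrix corresponds to one arrow of the state diagram.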
Knowledge of P and the initial distribution {P(X_0 = k) : k = 0, 1, 2, ...} enables us, at least in theory, to calculate all probabilities of interest, e.g. the absolute probabilities at time n,

    P(X_n = k),  k = 0, 1, 2, ...,

and the conditional probabilities (or m-step transition probabilities)

    P(X_{m+n} = j | X_n = i),  m ≥ 1.

Notes:
(i) Some systems may be more appropriately defined for n = 1, 2, ... and/or a state space which is a subset of {0, 1, 2, ...}: the above discussion is readily modified to take account of such variations.
(ii) If the system is known to be in state l at time 0, the initial distribution is P(X_0 = l) = 1, P(X_0 = k) = 0 for k ≠ l.

4.2 Some simple examples

Example 4.1  An occupancy problem
Balls are distributed, one after the other, at random among 4 cells. Let X_n be the number of empty cells remaining after the nth ball has been distributed. The stages are: n = 1, 2, 3, .... The state space is: {0, 1, 2, 3}. The possible transitions are:

    Transition            Conditional
    X_{n-1} → X_n         probability
    3 → 3                 1/4
    3 → 2                 3/4
    2 → 2                 2/4
    2 → 1                 2/4
    1 → 1                 3/4
    1 → 0                 1/4
    0 → 0                 1

Other transitions are impossible (i.e. have zero probabilities).
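The transition table for the occupancy example follows one rule: from state i the next ball lands in one of the i empty cells with probability i/4 (moving to i-1), and otherwise in an occupied cell (staying at i). A short sketch (not from the notes) builds the table exactly with rational arithmetic:

```python
from fractions import Fraction

N = 4  # number of cells
# States: number of empty cells, 0..3.
P = [[Fraction(0)] * N for _ in range(N)]
for i in range(N):
    if i > 0:
        P[i][i - 1] = Fraction(i, N)       # ball lands in an empty cell
    P[i][i] = Fraction(N - i, N)           # ball lands in an occupied cell
```

Using Fraction keeps the entries exact, so the rows sum to 1 identically rather than up to rounding.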
The probability distribution over states at stage n can be found from a knowledge of the state at stage (n-1), the information from earlier stages not being required. Hence {X_n} is a Markov chain. Furthermore, the transition probabilities are not functions of n, so the chain is homogeneous. The transition probability matrix (rows and columns indexed by states 0, 1, 2, 3) is

        (  1    0    0    0  )
    P = ( 1/4  3/4   0    0  )
        (  0   2/4  2/4   0  )
        (  0    0   3/4  1/4 )

and the transition diagram [not reproduced here] has an arrow for each non-zero entry.

Example 4.2  Random walk with absorbing barriers
A random walk is the path traced out by the motion of a particle which repeatedly takes a step of one unit in some direction, the direction being randomly chosen. Consider a one-dimensional random walk on the x-axis, where there are absorbing barriers at x = 0 and x = M (a positive integer), with

    P(particle moves one unit to the right) = p
    P(particle moves one unit to the left)  = 1 - p = q,

except that if the particle is at x = 0 or x = M it stays there. The steps or times are: n = 1, 2, 3, .... Let X_n = position of the particle after step (or at time) n. The state space is: {0, 1, 2, ..., M}. The transition probabilities are homogeneous and given by

    p_{i,i+1} = p,  p_{i,i-1} = q,  i = 1, 2, ..., M-1;
    p_{00} = 1,  p_{MM} = 1,

all other p_{ij} being zero. {X_n} is a homogeneous Markov chain with

        ( 1  0  0  0  ...          )
        ( q  0  p  0  ...          )
    P = ( 0  q  0  p  ...          )
        (        ...               )
        ( ...        q  0  p       )
        ( ...           0  0  1    )
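The banded structure of this matrix is easiest to see by constructing it. A small Python sketch (function name illustrative, not from the notes):

```python
def random_walk_P(M, p):
    """Transition matrix for the absorbing random walk on {0, 1, ..., M}:
    p_{i,i+1} = p and p_{i,i-1} = q = 1-p for interior states,
    with absorbing barriers p_00 = p_MM = 1."""
    q = 1 - p
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    P[0][0] = 1.0
    P[M][M] = 1.0
    for i in range(1, M):
        P[i][i - 1] = q
        P[i][i + 1] = p
    return P

W = random_walk_P(5, 0.6)
```

Every row has at most two non-zero entries (or a single 1 at a barrier), which is what makes this chain tractable in Section 4.6.1.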
and the transition diagram is [states 0, 1, ..., M in a line; an arrow labelled p from i to i+1 and an arrow labelled q from i to i-1 for each i = 1, ..., M-1, with absorbing loops at 0 and M].

Example 4.3  Discrete-time queue
Customers arrive for service and take their place in a waiting line. During each time period a single customer is served, provided there is at least one customer waiting. During a service period new customers may arrive: the number of customers arriving in a time period is denoted by Y, with probability distribution

    P(Y = k) = a_k,  k = 0, 1, 2, ....

We assume that the numbers of arrivals in different periods are independent. Here the times (each marking the end of its respective period) are: 0, 1, 2, .... Let X_n = number of customers waiting at time n. The state space is: {0, 1, 2, ...}.

The situation at time n can be pictured as follows: if X_{n-1} = i ≥ 1, one customer is served during (n-1, n] and k new customers arrive with probability a_k (k = 0, 1, 2, ...), so X_n = i - 1 + k = j; if X_{n-1} = 0, nobody is served and X_n = k. {X_n} is a homogeneous Markov chain (why?) with

        ( a_0  a_1  a_2  a_3  ... )
        ( a_0  a_1  a_2  a_3  ... )
    P = (  0   a_0  a_1  a_2  ... )
        (  0    0   a_0  a_1  ... )
        ( ...                     )
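Although the queue's state space is infinite, the matrix pattern above can be generated for any finite truncation. A sketch (illustrative; rows near the truncation boundary are necessarily incomplete, so they need not sum to 1):

```python
def queue_P(a, n_states):
    """Truncated transition matrix for the discrete-time queue.
    a[k] = P(Y = k), the per-period arrival distribution."""
    P = [[0.0] * n_states for _ in range(n_states)]
    for j in range(min(len(a), n_states)):
        P[0][j] = a[j]                   # empty queue: X_n = number of arrivals
    for i in range(1, n_states):
        for j in range(i - 1, n_states):
            k = j - (i - 1)              # arrivals needed to move from i to j
            if k < len(a):
                P[i][j] = a[k]
    return P

a = [0.5, 0.3, 0.2]                      # example arrival distribution (assumed)
Q = queue_P(a, 5)
```

Note that the queue can fall by at most one customer per period, which is why everything below the first sub-diagonal is zero.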
4.3 Calculation of probabilities

The absolute probabilities at time n can be calculated from those at time (n-1) by invoking the law of total probability. Conditioning on the state at time n-1, we have:

    P(X_n = j) = Σ_i P(X_n = j | X_{n-1} = i) P(X_{n-1} = i)
               = Σ_i p_{ij} P(X_{n-1} = i)

(since {X_{n-1} = i : i = 0, 1, 2, ...} are mutually exclusive and exhaustive events). In matrix notation:

    p(n) = p(n-1) P,  n ≥ 1,        (4.4)

where p(n) = (P(X_n = 0), P(X_n = 1), ...) (a row vector). Repeated application of (4.4) gives

    p(n) = p(r) P^{n-r},  0 ≤ r ≤ n,        (4.5)

and in particular

    p(n) = p(0) P^n,        (4.6)

i.e.

    P(X_n = j) = Σ_i p^{(n)}_{ij} P(X_0 = i),        (4.7)

where p^{(n)}_{ij} denotes the (i, j) element of P^n.

To obtain the conditional probabilities (or n-step transition probabilities), we condition on the initial state:

    P(X_n = j) = Σ_i P(X_n = j | X_0 = i) P(X_0 = i).        (4.8)

Then, comparing with (4.7), we deduce that

    P(X_n = j | X_0 = i) = p^{(n)}_{ij}.        (4.9)

Also, because the Markov chain is homogeneous,

    P(X_{r+n} = j | X_r = i) = p^{(n)}_{ij}.

So

    P(X_t = j | X_s = i) = p^{(t-s)}_{ij},  t > s.        (4.10)

The main labour in actual calculations is the evaluation of P^n. If the elements of P are numerical, P^n may be computed by a suitable numerical method when the number of states is finite; if they are algebraic, there are special methods for some forms of P.
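The recursion p(n) = p(n-1) P avoids forming P^n explicitly: we simply multiply the row vector through n times. A minimal sketch (the 2-state matrix is illustrative):

```python
def step(p, P):
    """One application of p(n) = p(n-1) P (law of total probability)."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

def distribution_at(p0, P, n):
    """Absolute probability distribution at time n from initial distribution p0."""
    p = list(p0)
    for _ in range(n):
        p = step(p, P)
    return p

# Illustrative 2-state chain, started in state 0:
P = [[0.9, 0.1],
     [0.5, 0.5]]
p2 = distribution_at([1.0, 0.0], P, 2)
```

Starting p0 from a point mass in state i makes distribution_at return row i of P^n, i.e. the n-step transition probabilities p^{(n)}_{ij}.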
4.4 Classification of States

[Note: this is an extensive subject, and only a limited account is given here.]

Suppose the system is in state k at time n = 0. Let

    f_kk = P(X_n = k for some n ≥ 1 | X_0 = k),        (4.11)

i.e. f_kk is the probability that the system returns at some time to the state k. The state k is called persistent or recurrent if f_kk = 1, i.e. a return to k is certain. Otherwise (f_kk < 1) the state k is called transient (there is a positive probability that k is never re-entered). If p_kk = 1, state k is termed an absorbing state.

The state k is periodic with period t_k > 1 if

    p^{(n)}_{kk} > 0 when n is a multiple of t_k, and p^{(n)}_{kk} = 0 otherwise.

Thus t_k = gcd{n : p^{(n)}_{kk} > 0}: e.g. if p^{(n)}_{kk} > 0 only for n = 4, 8, 12, ..., then t_k = 4. State k is termed aperiodic if no such t_k > 1 exists.

State j is accessible (or reachable) from state i (i → j) if p^{(n)}_{ij} > 0 for some n > 0. If i → j and j → i, then states i and j are said to communicate (i ↔ j). It can be shown that if i ↔ j then states i and j
(i) are both transient or both recurrent;
(ii) have the same period.

A set C of states is called irreducible if i ↔ j for all i, j ∈ C, so all the states in an irreducible set have the same period and are either all transient or all recurrent. A set C of states is said to be closed if no state outside C is accessible from any state in C, i.e. p_{ij} = 0 for all i ∈ C, j ∉ C. (Thus, an absorbing state is a closed set with just one state.) It can be shown that the entire state space can be uniquely partitioned as follows:

    T ∪ C_1 ∪ C_2 ∪ ...,        (4.12)

where T is the set of transient states and C_1, C_2, ... are irreducible closed sets of recurrent states (some of the C_i may be absorbing states).

Quite often the entire state space is irreducible, so the terms irreducible, aperiodic etc. can be applied to the Markov chain as a whole. An irreducible chain contains at most one closed set of states. In a finite chain, it is impossible for all states to be transient: if the chain is irreducible, the states are recurrent.
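The characterisation t_k = gcd{n : p^{(n)}_{kk} > 0} can be computed directly for a small finite chain by examining successive powers of P. A sketch (illustrative; for a finite chain a modest horizon is enough to reveal the period):

```python
from math import gcd

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, k, horizon=24):
    """gcd of {n <= horizon : p^(n)_kk > 0} -- the period of state k
    (returns 0 if no return to k is seen within the horizon)."""
    g = 0
    Pn = P
    for n in range(1, horizon + 1):
        if Pn[k][k] > 0:
            g = gcd(g, n)
        Pn = matmul(Pn, P)
    return g
```

For example, the deterministic 3-cycle 0 → 1 → 2 → 0 gives period 3, while any chain whose state k can return in both 2 and 3 steps is aperiodic (gcd 1).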
Let's consider some examples.

Example 4.4
(i) State space S = {0, 1, 2}, with

        (  0   1/2  1/2 )
    P = ( 1/2   0   1/2 )
        ( 1/2  1/2   0  )

The possible direct transitions are: 0 → 1, 1 → 0; 1 → 2, 2 → 1; 2 → 0, 0 → 2. So i ↔ j for all i, j ∈ S. S is the only closed set, and the Markov chain is irreducible, which in turn implies that all 3 states are recurrent. Also p_00 = 0, p^{(2)}_00 > 0, p^{(3)}_00 > 0, ..., p^{(n)}_00 > 0. So state 0 is aperiodic, which implies that all states are aperiodic.

(ii) State space S = {0, 1, 2, 3}, with

        ( 0  1/2  0  1/2 )
    P = ( 0   0   1   0  )
        ( 1   0   0   0  )
        ( 0   0   1   0  )

The possible direct transitions are: 0 → 1, 3 : 1 → 2 : 2 → 0 : 3 → 2, so again i ↔ j for all i, j ∈ S. S is the only closed set, the Markov chain is irreducible, and all states are recurrent. Also p_00 = 0, p^{(2)}_00 = 0, p^{(3)}_00 > 0, .... Thus state 0 is periodic with period t_0 = 3, and so all states have period 3.

(iii) State space S = {0, 1, 2, 3, 4}, with

        ( 1/2  1/2   0    0    0  )
        (  1    0    0    0    0  )
    P = (  0    0    0    1    0  )
        (  0    0   1/2  1/2   0  )
        ( 1/4   0   1/4   0   1/2 )

The possible direct transitions are: 0 → 0, 0 → 1; 1 → 0; 2 → 3; 3 → 2, 3 → 3; 4 → 0, 4 → 2, 4 → 4. We conclude that {0, 1} is a closed set, irreducible and aperiodic, and its states are recurrent; similarly for {2, 3}; state 4 is transient and aperiodic. Thus S = T ∪ C_1 ∪ C_2, where T = {4}, C_1 = {0, 1}, C_2 = {2, 3}.
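Partitioning a small chain into communicating classes, as done by inspection in case (iii), can be automated with a transitive-closure computation. A sketch (the 5-state matrix mirrors case (iii): two closed sets {0,1} and {2,3} plus a transient state 4; its entries are illustrative):

```python
def communicating_classes(P):
    """Group states: i ~ j iff i -> j and j -> i in some positive number
    of steps (reachability via Warshall's transitive closure)."""
    n = len(P)
    R = [[P[i][j] > 0 for j in range(n)] for i in range(n)]
    for m in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][m] and R[m][j])
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n)
                         if j == i or (R[i][j] and R[j][i]))
            classes.append(cls)
            seen.update(cls)
    return classes

# Illustrative chain in the spirit of case (iii):
P = [[0.5,  0.5,  0.0,  0.0,  0.0],
     [1.0,  0.0,  0.0,  0.0,  0.0],
     [0.0,  0.0,  0.0,  1.0,  0.0],
     [0.0,  0.0,  0.5,  0.5,  0.0],
     [0.25, 0.0,  0.25, 0.0,  0.5]]
classes = communicating_classes(P)
```

State 4 reaches both closed sets but nothing returns to it, so it forms a communicating class on its own.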
4.5 The Limiting Distribution

Markov's Theorem
Consider a finite, aperiodic, irreducible Markov chain with states 0, 1, ..., M. Then

    p^{(n)}_{ij} → π_j as n → ∞, for all i, j;        (4.13a)

or

          ( π_0  π_1  ...  π_M )
    P^n → ( π_0  π_1  ...  π_M )   as n → ∞.        (4.13b)
          ( ...                )
          ( π_0  π_1  ...  π_M )

The limiting probabilities {π_j : j = 0, ..., M} are the unique solution of the equations

    π_j = Σ_{i=0}^{M} π_i p_{ij},  j = 0, ..., M        (4.14a)

satisfying the normalisation condition

    Σ_{i=0}^{M} π_i = 1.        (4.14b)

The equations (4.14a) may be written compactly as

    π = πP,        (4.15)

where π = (π_0, π_1, ..., π_M). Also

    p(n) = p(0) P^n → π as n → ∞.        (4.16)

Example 4.5
Consider the Markov chain (i) in Example 4.4 above. Since it is finite, aperiodic and irreducible, there is a unique limiting distribution π which satisfies π = πP, i.e.

    π_0 = π_0 p_00 + π_1 p_10 + π_2 p_20,  i.e.  π_0 = (1/2)π_1 + (1/2)π_2
    π_1 = π_0 p_01 + π_1 p_11 + π_2 p_21,  i.e.  π_1 = (1/2)π_0 + (1/2)π_2
    π_2 = π_0 p_02 + π_1 p_12 + π_2 p_22,  i.e.  π_2 = (1/2)π_0 + (1/2)π_1

together with the normalisation condition π_0 + π_1 + π_2 = 1.

The best general approach to solving such equations is to set one π_i = 1, deduce the other values, and then normalise at the end: note that one of the equations in (4.14a) is redundant and can be used as a check at the end. So here, set π_0 = 1: then the first two equations yield π_1 = π_2 = 1. Since Σ_i π_i = 3, normalisation yields π_0 = π_1 = π_2 = 1/3.
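Markov's Theorem also suggests a purely numerical route to π: iterate p ← pP until it stops changing. A sketch (the symmetric 3-state matrix matches the worked example, assuming the 1/2 entries):

```python
def limiting_distribution(P, n_iter=200):
    """Iterate p <- pP from a point mass; for a finite, aperiodic,
    irreducible chain this converges to the limiting distribution pi."""
    n = len(P)
    p = [1.0] + [0.0] * (n - 1)
    for _ in range(n_iter):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return p

# Each state moves to one of the other two with probability 1/2:
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
pi = limiting_distribution(P)
```

The iterate converges to the uniform distribution (1/3, 1/3, 1/3), in agreement with the hand calculation, and the limit indeed satisfies π = πP.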
4.6 Absorption in a finite Markov chain

Consider a finite Markov chain consisting of a set T of transient states and a set A of absorbing states (termed an absorbing Markov chain). Let A_k denote the event of absorption in state k ∈ A, and f_ik the probability of A_k when starting from the transient state i. [Diagram: from state i ∈ T, one step leads either directly to k ∈ A with probability p_ik, or to some j ∈ T with probability p_ij, whence absorption in k occurs with probability f_jk.]

Conditioning on the first transition (first step analysis) as indicated on the diagram, and using the law of total probability, we have

    P(A_k | X_0 = i) = Σ_{j ∈ A∪T} P(A_k | X_0 = i, X_1 = j) P(X_1 = j | X_0 = i)
                     = 1·P(X_1 = k | X_0 = i) + Σ_{j ∈ T} P(A_k | X_0 = j) P(X_1 = j | X_0 = i),

i.e.

    f_ik = p_ik + Σ_{j ∈ T} p_ij f_jk.        (4.17)

Also, let T_A denote the time to absorption (in any k ∈ A), and let μ_i be the mean time to absorption starting from state i. Then, again conditioning on the first transition, we obtain

    μ_i = E(T_A | X_0 = i) = Σ_{j ∈ A∪T} E(T_A | X_0 = i, X_1 = j) P(X_1 = j | X_0 = i)
        = Σ_{j ∈ A} 1·p_ij + Σ_{j ∈ T} {1 + E(T_A | X_0 = j)} p_ij,

i.e.

    μ_i = 1 + Σ_{j ∈ T} p_ij μ_j.        (4.18)

(Note that in both (4.17) and (4.18) the summation over j ∈ T includes j = i.)
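The two first-step systems can be solved numerically by simple fixed-point iteration, which converges for an absorbing chain. A sketch (illustrative; the test chain below is the symmetric walk on {0, 1, 2, 3} with barriers at 0 and 3):

```python
def absorption_quantities(P, transient, k, n_iter=2000):
    """Iterate  f_ik = p_ik + sum_{j in T} p_ij f_jk
    and        mu_i = 1    + sum_{j in T} p_ij mu_j
    to convergence, for transient states `transient` and absorbing state k."""
    f = {i: 0.0 for i in transient}
    mu = {i: 0.0 for i in transient}
    for _ in range(n_iter):
        f = {i: P[i][k] + sum(P[i][j] * f[j] for j in transient)
             for i in transient}
        mu = {i: 1.0 + sum(P[i][j] * mu[j] for j in transient)
              for i in transient}
    return f, mu

# Symmetric random walk on {0,1,2,3}, absorbing at 0 and 3:
P = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
f, mu = absorption_quantities(P, transient=[1, 2], k=3)
```

For this walk the exact answers are f_13 = 1/3, f_23 = 2/3 and mu_1 = mu_2 = 2, which the iteration reproduces.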
4.6.1 Absorbing random walk and Gambler's Ruin

Consider the random walk with absorbing barriers (at 0 and M) introduced in Example 4.2 above. The states {1, ..., M-1} are transient. Then

    f_iM = p_iM + Σ_{j ∈ T} p_ij f_jM,  i = 1, ..., M-1,

i.e.

    f_{1,M} = p f_{2,M}
    f_{i,M} = p_{i,i-1} f_{i-1,M} + p_{i,i+1} f_{i+1,M} = q f_{i-1,M} + p f_{i+1,M},  i = 2, ..., M-2
    f_{M-1,M} = p + q f_{M-2,M}.

For convenience, define f_{0,M} = 0, f_{M,M} = 1. Then

    f_{i,M} = q f_{i-1,M} + p f_{i+1,M},  i = 1, ..., M-1.        (4.19)

If we define

    d_{i,M} = f_{i,M} - f_{i-1,M},  i = 1, ..., M,        (4.20)

these difference equations can be written as simple one-step recursions:

    p d_{i+1,M} = q d_{i,M},  i = 1, ..., M-1.        (4.21)

It follows that

    d_{i,M} = (q/p)^{i-1} d_{1,M},  i = 1, ..., M.        (4.22)

Now

    Σ_{j=1}^{i} d_{j,M} = (f_{1,M} - f_{0,M}) + (f_{2,M} - f_{1,M}) + ... + (f_{i,M} - f_{i-1,M})
                        = f_{i,M} - f_{0,M} = f_{i,M},

so

    f_{i,M} = Σ_{j=1}^{i} (q/p)^{j-1} d_{1,M}
            = {1 + (q/p) + (q/p)^2 + ... + (q/p)^{i-1}} f_{1,M}
            = [1 - (q/p)^i] / [1 - (q/p)] · f_{1,M},  if p ≠ 1/2;
            = i f_{1,M},  if p = 1/2.

Using the fact that f_{M,M} = 1, we deduce that

    f_{1,M} = [1 - (q/p)] / [1 - (q/p)^M],  if p ≠ 1/2;   f_{1,M} = 1/M,  if p = 1/2,

and so

    f_{i,M} = [1 - (q/p)^i] / [1 - (q/p)^M],  if p ≠ 1/2;   f_{i,M} = i/M,  if p = 1/2.        (4.23)

By a similar argument (or appealing to symmetry):

    f_{i,0} = [1 - (p/q)^{M-i}] / [1 - (p/q)^M],  if q ≠ 1/2;   f_{i,0} = (M-i)/M,  if q = 1/2.        (4.24)
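The closed form for f_{i,M} can be checked against the difference equation it was derived from. A sketch (function name illustrative):

```python
def f_win(i, M, p):
    """Closed-form probability of absorption at M starting from i."""
    if p == 0.5:
        return i / M
    r = (1 - p) / p                      # q/p
    return (1 - r**i) / (1 - r**M)

# The closed form should satisfy f_i = q f_{i-1} + p f_{i+1} at every
# interior state, with boundary values f_0 = 0 and f_M = 1:
M, p = 10, 0.6
q = 1 - p
resid = max(abs(f_win(i, M, p) - (q * f_win(i - 1, M, p) + p * f_win(i + 1, M, p)))
            for i in range(1, M))
```

The residual is zero up to rounding, confirming the algebra of (4.19)–(4.23).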
We deduce that

    f_{i,0} + f_{i,M} = 1,

i.e. absorption at either 0 or M is certain to occur sooner or later. Note that, as M → ∞,

    f_{i,M} → 1 - (q/p)^i, if p > 1/2;   f_{i,M} → 0, if p ≤ 1/2;
    f_{i,0} → (q/p)^i,     if p > 1/2;   f_{i,0} → 1, if p ≤ 1/2.        (4.25)

Similarly we have

    μ_i = 1 + p μ_{i+1} + q μ_{i-1},  1 ≤ i ≤ M-1,        (4.26)

with μ_0 = μ_M = 0. The solution is

    μ_i = [1/(p-q)] · { M [1 - (q/p)^i] / [1 - (q/p)^M] - i },  if p ≠ 1/2;
    μ_i = i(M - i),  if p = 1/2.        (4.27)

We have in fact solved the famous Gambler's Ruin problem. Two players, A and B, start with a units and (M - a) units respectively (so that their total capital is M units). A coin is flipped repeatedly, giving heads with probability p and tails with probability q = 1 - p. Each time heads occurs, B gives one unit to A; otherwise A gives one unit to B. The game continues until one or other player runs out of money. After each flip the state of the system is A's current capital, and it is clear that this executes a random walk precisely as discussed above. Then:

(i) f_{a,M} (respectively f_{a,0}) is the probability that A (respectively B) wins;
(ii) μ_a is the expected number of flips of the coin before one of the players becomes bankrupt and the game ends.

From the result (4.25) above, we see that if a gambler is playing against an infinitely rich adversary, then if p > 1/2 there is a positive probability that the gambler's fortune will increase indefinitely, while if p ≤ 1/2 the gambler is certain to go broke sooner or later.
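The mean-duration formula can be verified the same way as the absorption probabilities: it must satisfy the first-step equation with the boundary conditions μ_0 = μ_M = 0. A sketch (function name and test parameters illustrative):

```python
def mean_duration(i, M, p):
    """Closed-form mean time to absorption for the gambler's ruin walk."""
    if p == 0.5:
        return i * (M - i)
    q = 1 - p
    r = q / p
    return (M * (1 - r**i) / (1 - r**M) - i) / (p - q)

# Check mu_i = 1 + p mu_{i+1} + q mu_{i-1} at every interior state:
M, p = 8, 0.55
resid = max(abs(mean_duration(i, M, p)
                - (1 + p * mean_duration(i + 1, M, p)
                     + (1 - p) * mean_duration(i - 1, M, p)))
            for i in range(1, M))
```

For a fair coin the formula reduces to μ_i = i(M - i), e.g. two players with 3 units each (M = 6) expect 9 flips before one is ruined.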
More informationSection 6.1 Discrete Random variables Probability Distribution
Section 6.1 Discrete Random variables Probability Distribution Definitions a) Random variable is a variable whose values are determined by chance. b) Discrete Probability distribution consists of the values
More informationA simple analysis of the TV game WHO WANTS TO BE A MILLIONAIRE? R
A simple analysis of the TV game WHO WANTS TO BE A MILLIONAIRE? R Federico Perea Justo Puerto MaMaEuSch Management Mathematics for European Schools 94342 - CP - 1-2001 - DE - COMENIUS - C21 University
More informationPerformance Analysis of a Telephone System with both Patient and Impatient Customers
Performance Analysis of a Telephone System with both Patient and Impatient Customers Yiqiang Quennel Zhao Department of Mathematics and Statistics University of Winnipeg Winnipeg, Manitoba Canada R3B 2E9
More informationThe Basics of Graphical Models
The Basics of Graphical Models David M. Blei Columbia University October 3, 2015 Introduction These notes follow Chapter 2 of An Introduction to Probabilistic Graphical Models by Michael Jordan. Many figures
More informationFrom the probabilities that the company uses to move drivers from state to state the next year, we get the following transition matrix:
MAT 121 Solutions to Take-Home Exam 2 Problem 1 Car Insurance a) The 3 states in this Markov Chain correspond to the 3 groups used by the insurance company to classify their drivers: G 0, G 1, and G 2
More informationChapter 5. Discrete Probability Distributions
Chapter 5. Discrete Probability Distributions Chapter Problem: Did Mendel s result from plant hybridization experiments contradicts his theory? 1. Mendel s theory says that when there are two inheritable
More informationChapter 3: DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. Part 3: Discrete Uniform Distribution Binomial Distribution
Chapter 3: DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS Part 3: Discrete Uniform Distribution Binomial Distribution Sections 3-5, 3-6 Special discrete random variable distributions we will cover
More informationMATH 289 PROBLEM SET 4: NUMBER THEORY
MATH 289 PROBLEM SET 4: NUMBER THEORY 1. The greatest common divisor If d and n are integers, then we say that d divides n if and only if there exists an integer q such that n = qd. Notice that if d divides
More informationA linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form
Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...
More informationNotes on Probability Theory
Notes on Probability Theory Christopher King Department of Mathematics Northeastern University July 31, 2009 Abstract These notes are intended to give a solid introduction to Probability Theory with a
More informationThe Characteristic Polynomial
Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem
More informationIntroduction to Matrices
Introduction to Matrices Tom Davis tomrdavis@earthlinknet 1 Definitions A matrix (plural: matrices) is simply a rectangular array of things For now, we ll assume the things are numbers, but as you go on
More informationSystems of Linear Equations
Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and
More informationE3: PROBABILITY AND STATISTICS lecture notes
E3: PROBABILITY AND STATISTICS lecture notes 2 Contents 1 PROBABILITY THEORY 7 1.1 Experiments and random events............................ 7 1.2 Certain event. Impossible event............................
More information9.2 Summation Notation
9. Summation Notation 66 9. Summation Notation In the previous section, we introduced sequences and now we shall present notation and theorems concerning the sum of terms of a sequence. We begin with a
More informationDETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH
DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,
More informationMarkov Chains. Table of Contents. Schedules
Markov Chains These notes contain material prepared by colleagues who have also presented this course at Cambridge, especially James Norris. The material mainly comes from books of Norris, Grimmett & Stirzaker,
More informationSolved Problems. Chapter 14. 14.1 Probability review
Chapter 4 Solved Problems 4. Probability review Problem 4.. Let X and Y be two N 0 -valued random variables such that X = Y + Z, where Z is a Bernoulli random variable with parameter p (0, ), independent
More informationDirect Methods for Solving Linear Systems. Matrix Factorization
Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011
More informationNotes on Factoring. MA 206 Kurt Bryan
The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor
More informationStationary random graphs on Z with prescribed iid degrees and finite mean connections
Stationary random graphs on Z with prescribed iid degrees and finite mean connections Maria Deijfen Johan Jonasson February 2006 Abstract Let F be a probability distribution with support on the non-negative
More informationThe sample space for a pair of die rolls is the set. The sample space for a random number between 0 and 1 is the interval [0, 1].
Probability Theory Probability Spaces and Events Consider a random experiment with several possible outcomes. For example, we might roll a pair of dice, flip a coin three times, or choose a random real
More informationIEOR 6711: Stochastic Models I Fall 2012, Professor Whitt, Tuesday, September 11 Normal Approximations and the Central Limit Theorem
IEOR 6711: Stochastic Models I Fall 2012, Professor Whitt, Tuesday, September 11 Normal Approximations and the Central Limit Theorem Time on my hands: Coin tosses. Problem Formulation: Suppose that I have
More information3. Mathematical Induction
3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)
More informationCh. 13.3: More about Probability
Ch. 13.3: More about Probability Complementary Probabilities Given any event, E, of some sample space, U, of a random experiment, we can always talk about the complement, E, of that event: this is the
More informationSummary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4)
Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume
More informationCONTINUED FRACTIONS AND PELL S EQUATION. Contents 1. Continued Fractions 1 2. Solution to Pell s Equation 9 References 12
CONTINUED FRACTIONS AND PELL S EQUATION SEUNG HYUN YANG Abstract. In this REU paper, I will use some important characteristics of continued fractions to give the complete set of solutions to Pell s equation.
More informationThe Graphical Method: An Example
The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,
More informationGREEN CHICKEN EXAM - NOVEMBER 2012
GREEN CHICKEN EXAM - NOVEMBER 2012 GREEN CHICKEN AND STEVEN J. MILLER Question 1: The Green Chicken is planning a surprise party for his grandfather and grandmother. The sum of the ages of the grandmother
More informationRESULTANT AND DISCRIMINANT OF POLYNOMIALS
RESULTANT AND DISCRIMINANT OF POLYNOMIALS SVANTE JANSON Abstract. This is a collection of classical results about resultants and discriminants for polynomials, compiled mainly for my own use. All results
More informationModeling and Performance Evaluation of Computer Systems Security Operation 1
Modeling and Performance Evaluation of Computer Systems Security Operation 1 D. Guster 2 St.Cloud State University 3 N.K. Krivulin 4 St.Petersburg State University 5 Abstract A model of computer system
More informationPÓLYA URN MODELS UNDER GENERAL REPLACEMENT SCHEMES
J. Japan Statist. Soc. Vol. 31 No. 2 2001 193 205 PÓLYA URN MODELS UNDER GENERAL REPLACEMENT SCHEMES Kiyoshi Inoue* and Sigeo Aki* In this paper, we consider a Pólya urn model containing balls of m different
More informationElements of probability theory
2 Elements of probability theory Probability theory provides mathematical models for random phenomena, that is, phenomena which under repeated observations yield di erent outcomes that cannot be predicted
More informationTHE DYING FIBONACCI TREE. 1. Introduction. Consider a tree with two types of nodes, say A and B, and the following properties:
THE DYING FIBONACCI TREE BERNHARD GITTENBERGER 1. Introduction Consider a tree with two types of nodes, say A and B, and the following properties: 1. Let the root be of type A.. Each node of type A produces
More informationChapter 5 Discrete Probability Distribution. Learning objectives
Chapter 5 Discrete Probability Distribution Slide 1 Learning objectives 1. Understand random variables and probability distributions. 1.1. Distinguish discrete and continuous random variables. 2. Able
More information