M.1 Markov Processes

Introductory Example: At the beginning of 1950, 55% of the population of a certain state were Republicans and 45% were Democrats. Based on past trends, it is expected that every decade, 10% of Republicans become Democrats, while 9% of Democrats become Republicans. What percentage of the population were Republicans and what percentage were Democrats at the beginning of 1960? In other words, what was the distribution of Republicans and Democrats? If I asked you to find the distribution in 1970, what would you have to do?

We did this problem above using conditional probabilities, but we can also solve it using matrices.

A Markov process (or Markov chain) is a process in which the probabilities of the outcomes at any stage of an experiment depend only on the outcomes of the preceding stage. The outcome at any stage of the experiment in a Markov process is called the state of the experiment.

We can write the conditional probabilities from the tree in a matrix, called the transition matrix, denoted by T. With two states, T has the following form, where each column corresponds to the current state and each row to the next state:

                                  Current State
                             State 1                State 2
    Next    State 1 [ P(state 1 | state 1)   P(state 1 | state 2) ]
    State   State 2 [ P(state 2 | state 1)   P(state 2 | state 2) ]

Find the transition matrix for the example above involving Republicans/Democrats.

If there were 3 states, the transition matrix would be 3 x 3, and so on.
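As a quick way to check computations like these, a few lines of Python (using numpy) carry out the same matrix arithmetic. This is just a sketch of the Republican/Democrat example, with the states ordered (Republican, Democrat), an ordering chosen here for concreteness:

    import numpy as np

    # Columns give the current state, rows the next state:
    # 90% of Republicans stay Republican, 9% of Democrats switch in, etc.
    T = np.array([[0.90, 0.09],
                  [0.10, 0.91]])

    X0 = np.array([0.55, 0.45])  # initial distribution [Republican, Democrat]

    X1 = T @ X0                  # beginning of 1960
    X2 = T @ X1                  # beginning of 1970
    print(X1)                    # ≈ [0.5355 0.4645]
    print(X2)                    # ≈ [0.523755 0.476245]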

We can write the distribution at any stage of the experiment using a 1-column matrix. X_0 represents the initial distribution:

    X_0 = [ 0.55 ]
          [ 0.45 ]

X_1 represents the distribution after 1 stage, that is, after 1 decade in this scenario, at the beginning of 1960:

    X_1 = T X_0 = [ 0.5355 ]
                  [ 0.4645 ]

X_2 represents the distribution after 2 stages, so at the beginning of 1970:

    X_2 = T X_1 = [ 0.523755 ]
                  [ 0.476245 ]

What was the distribution at the beginning of 1980?

Important summary fact: We can find the distribution at any stage of the Markov process by using

    X_m = T^m X_0

Example: At a university, it is estimated that currently 25% of the commuter students drive to school and the other 75% take the bus. Based on trends, it is determined that every semester, 20% of the commuters who drive to school will start taking the bus, and 70% of those who take the bus will continue to use the bus. What percent of commuter students will drive to school, and what percent will take the bus, 1 year later?
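The summary formula is easy to apply with numpy's matrix_power; here is a sketch for the commuter example, with states ordered (drive, bus), so 80% of drivers keep driving and 30% of bus riders switch to driving:

    import numpy as np

    T = np.array([[0.80, 0.30],
                  [0.20, 0.70]])
    X0 = np.array([0.25, 0.75])

    # 1 year = 2 semesters, so compute X_2 = T^2 X_0
    print(np.linalg.matrix_power(T, 2) @ X0)   # ≈ [0.5125 0.4875]

    # P(still on the bus 3 semesters later | on the bus now)
    # is the (bus, bus) entry of T^3
    print(np.linalg.matrix_power(T, 3)[1, 1])  # ≈ 0.475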

What percent of commuter students will drive to school, and what percent will take the bus, 2 years later?

If a student is currently taking the bus, what is the probability that they will still be taking the bus 3 semesters later?

Example: Suppose that in a study of Ford, Honda, and Chevrolet owners, the following data were found. Every year, of the customers who own a Ford, 50% stay with Ford, 20% switch to Honda, and 30% switch to Chevy. Of the customers who own a Honda, 50% stick with Honda, 40% buy a Ford, and 10% buy a Chevy. Of the customers who own a Chevy, 70% stick with Chevy, 20% switch to Ford, and 10% buy a Honda. The current market shares of these companies are 60%, 30%, and 10%, respectively. (This data is fictional.)

Determine the market share distribution of each of these car companies three years later.

If a person currently owns a Ford, what is the probability they will own a Honda 5 years later?
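A sketch of the same computations in Python, with states ordered (Ford, Honda, Chevy) and columns giving the current brand:

    import numpy as np

    T = np.array([[0.5, 0.4, 0.2],
                  [0.2, 0.5, 0.1],
                  [0.3, 0.1, 0.7]])
    X0 = np.array([0.6, 0.3, 0.1])

    # Market shares three years later: X_3 = T^3 X_0
    print(np.linalg.matrix_power(T, 3) @ X0)   # ≈ [0.3676 0.2412 0.3912]

    # P(owns a Honda 5 years later | owns a Ford now) = (T^5)[Honda, Ford]
    print(np.linalg.matrix_power(T, 5)[1, 0])  # ≈ 0.22699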

The transition matrices we have been dealing with are examples of stochastic matrices. A stochastic matrix is a square matrix where:

- all of its entries are greater than or equal to 0 (i.e., no entry is negative), and
- the sum of the entries in each column is 1.

Are the following matrices stochastic?

    [ 0.8  0.3 ]    [ 0.4  0.9 ]    [ 0  0.35  0.2 ]    [ 2  0.5 ]    [ 1  0.5  0.3 ]
    [ 0.2  0.7 ]    [ 0.6  0.2 ]    [ 1  0.65  0.5 ]    [ 1  0.5 ]    [ 0  0.5  0.7 ]
                                    [ 0  0     0.3 ]

M.2 Regular Markov Processes

With certain Markov processes, as we run stage after stage, the distributions begin to approach some fixed matrix. In other words, in the long run, the distributions tend towards a steady-state distribution.

Consider the example of the cars in the previous section. Here are some of the distributions over many years:

    X_1 = [ 0.44 ]    X_2 = [ 0.388 ]    X_3 = [ 0.3676 ]    X_4 = [ 0.35852 ]
          [ 0.28 ]          [ 0.256 ]          [ 0.2412 ]          [ 0.23324 ]
          [ 0.28 ]          [ 0.356 ]          [ 0.3912 ]          [ 0.40824 ]

    X_10 = [ 0.3501302203 ]    X_20 = [ 0.3500001272 ]    X_30 = [ 0.3500000001 ]
           [ 0.2251302024 ]           [ 0.2250001272 ]           [ 0.2250000001 ]
           [ 0.4247395774 ]           [ 0.4249997457 ]           [ 0.4249999998 ]

As we go more and more stages out, you can see that the distributions are approaching

    X = [ 0.35  ]
        [ 0.225 ]
        [ 0.425 ]

We call this the steady-state distribution, or sometimes the limiting distribution. What this means is that in the long run, we would expect 35% to own a Ford, 22.5% to own a Honda, and 42.5% to own a Chevy.
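These long-run numbers are easy to reproduce by machine; a sketch continuing the car example from above (same T and X_0 as before):

    import numpy as np

    T = np.array([[0.5, 0.4, 0.2],
                  [0.2, 0.5, 0.1],
                  [0.3, 0.1, 0.7]])
    X0 = np.array([0.6, 0.3, 0.1])

    # Watch X_m = T^m X_0 settle down as m grows
    for m in (10, 20, 30):
        print(m, np.linalg.matrix_power(T, m) @ X0)
    # m = 10: ≈ [0.35013022 0.2251302  0.42473958]
    # m = 30: ≈ [0.35 0.225 0.425]  (agreeing to about 10 decimal places)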

If a Markov process has a steady-state distribution, it will ALWAYS approach this distribution, no matter what the initial distribution is. If a Markov process has a steady-state distribution, we call it a regular Markov process. Not every Markov process is regular. To test whether a Markov process is regular, use the following rule: a stochastic transition matrix T is regular if some power of T has all positive entries (all entries are strictly greater than 0).

Examples: Determine whether the following matrices are regular.

    [ 0.4  0.5 ]    [ 0.2  0.7 ]    [ 0  0.3 ]    [ 1  0.8 ]    [ 0  0  0.25 ]
    [ 0.6  0.3 ]    [ 0.8  0.3 ]    [ 1  0.7 ]    [ 0  0.2 ]    [ 1  0  0    ]
                                                                [ 0  1  0.75 ]

Once we know that a matrix T is regular, we know that the Markov chain has a steady-state distribution. How can we find this steady-state distribution, which we will denote X (without a subscript)?

Easy way: Raise T to a large power; the columns of T^m all approach X.

Mathematical way: We find X by solving the equation T X = X, with the added condition that the sum of the entries of X must equal 1.

Example: Find the steady-state distribution for the regular Markov chain whose transition matrix is

    T = [ 0.5  0.2 ]
        [ 0.5  0.8 ]
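Both approaches take only a few lines of Python; here is a sketch, where the helper steady_state (a name chosen here) solves (T - I)X = 0 together with the normalization condition:

    import numpy as np

    def steady_state(T):
        """Solve T X = X with sum(X) = 1, written as (T - I) X = 0 plus a row of ones."""
        T = np.asarray(T, dtype=float)
        n = T.shape[0]
        A = np.vstack([T - np.eye(n), np.ones(n)])  # stack the normalization row
        b = np.append(np.zeros(n), 1.0)
        X, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares absorbs the redundant row
        return X

    T = np.array([[0.5, 0.2],
                  [0.5, 0.8]])

    print(np.linalg.matrix_power(T, 50))  # easy way: both columns ≈ [0.2857, 0.7143]
    print(steady_state(T))                # mathematical way: [2/7, 5/7] ≈ [0.2857 0.7143]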

Example: A poll was conducted among workers in a certain city. It was found that of those who work for a large corporation in a given year, 30% will move to a retail chain and 10% will move to a small business in the next year. Of those who work for a retail chain in a given year, 15% will change to a large corporation and 15% will work for a small business the next year. Finally, of those who work for a small business in a given year, 20% will move to retail and 25% will change to a large corporation.

Write the transition matrix for this Markov process. Does this Markov process have a steady-state distribution? If so, what can we expect the employment distribution to be in the long run?
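As a check on your answer, the steady_state helper sketched above can be reused here. The states are ordered (corporation, retail, small business); each column lists where that group goes, with the leftover percentage staying put:

    T = np.array([[0.60, 0.15, 0.25],
                  [0.30, 0.70, 0.20],
                  [0.10, 0.15, 0.55]])

    print(steady_state(T))  # ≈ [0.3134, 0.4627, 0.2239], i.e. [21/67, 31/67, 15/67]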

M.3 Absorbing Markov Processes

In a Markov process, a state is an absorbing state if, once you enter that state, it is impossible to leave. A Markov process is absorbing if:

- there is at least one absorbing state, and
- it is possible to move from any non-absorbing state to an absorbing state in a finite number of stages.

Example 1: A mouse house has 3 rooms: Rooms A, B, and C. Room A has cheese in it. Every minute, the mouse makes a choice. If the mouse is in Room B, the probability is 0.6 that it will move to Room A and 0.4 that it will move to Room C. If the mouse is in Room C, the probability that it will stay there is 0.1, the probability that it will move to Room A is 0.7, and the probability that it will move to Room B is 0.2. Once the mouse enters Room A, it never leaves.

Write the transition matrix for this Markov process. Is it absorbing?

If there are a bunch of mice in the house, with 5% in Room A, 60% in Room B, and 35% in Room C, what is the room distribution after 2 minutes?

If the mouse is initially in Room C, what are the probabilities of being in each room after 4 minutes? What do you expect to happen in the long term?
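A sketch of the computations for Example 1, with rooms ordered (A, B, C); Room A's column is a unit column since the mouse never leaves it:

    import numpy as np

    T = np.array([[1.0, 0.6, 0.7],
                  [0.0, 0.0, 0.2],
                  [0.0, 0.4, 0.1]])

    X0 = np.array([0.05, 0.60, 0.35])
    print(np.linalg.matrix_power(T, 2) @ X0)   # after 2 minutes: ≈ [0.8895 0.055 0.0555]

    # Starting in Room C: read off column C of T^4
    print(np.linalg.matrix_power(T, 4)[:, 2])  # ≈ [0.9877 0.0034 0.0089]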

Example 2: A mouse house has 3 rooms: Rooms A, B, and C. Room A has cheese in it and Room B has a trap in it. Every minute, the mouse makes a choice. If the mouse is in Room C, the probability is 0.45 that it will stay there, 0.25 that it will move to Room A, and 0.3 that it will move to Room B. Once the mouse enters either Room A or Room B, it never leaves.

Write the transition matrix for this Markov process. Is it absorbing? What do you expect to happen in the long run? Find the limiting matrix L.

Example 3: A mouse house has 4 rooms: Rooms A, B, C, and D. Room A has cheese in it and Room B has a trap in it. Every minute, the mouse makes a choice. If the mouse is in Room C, the probability is 0.1 that it will stay there, 0.25 that it will move to Room A, 0.3 that it will move to Room B, and 0.35 that it will move to Room D. If the mouse is in Room D, the probability is 0.2 that it will stay there, 0.45 that it will move to Room A, 0.2 that it will move to Room B, and 0.15 that it will move to Room C. Once the mouse enters either Room A or Room B, it never leaves.

Write the transition matrix for this Markov process. Is it absorbing? What do you expect to happen in the long term? Find the limiting matrix L.
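One way to approximate a limiting matrix is to raise T to a large power; a sketch for Example 2, with rooms ordered (A, B, C):

    import numpy as np

    T = np.array([[1.0, 0.0, 0.25],
                  [0.0, 1.0, 0.30],
                  [0.0, 0.0, 0.45]])

    L = np.linalg.matrix_power(T, 100)
    print(L)
    # Column C converges to ≈ [5/11, 6/11, 0]: a mouse starting in Room C
    # finds the cheese with probability ≈ 0.4545 and the trap with ≈ 0.5455.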

Summary: If a Markov process is absorbing, then eventually everything will be absorbed by one of the absorbing states. If there is one absorbing state, the probability of ending in that state is 1, since everything will eventually be absorbed. If there are 2 or more absorbing states, we can find the probabilities of ending up in each of the absorbing states by raising T to a large power. The resulting matrix is called a limiting matrix, L. (In general, a limiting matrix is one that gives the long-term probabilities associated with a Markov process.)

Determine if the following stochastic transition matrices are absorbing. If absorbing, find the limiting matrix.

    [ 1  0.2  0.7 ]      [ 0  1  0.3 ]      [ 0.3  0  0.4 ]
    [ 0  0.5  0.2 ]      [ 1  0  0.3 ]      [ 0.5  1  0   ]
    [ 0  0.3  0.1 ]      [ 0  0  0.4 ]      [ 0.2  0  0.6 ]

    [ 1  0    0   ]      [ 1  0.5  0  0.4 ]
    [ 0  0.5  0.2 ]      [ 0  0.2  0  0.5 ]
    [ 0  0.5  0.8 ]      [ 0  0    1  0   ]
                         [ 0  0.3  0  0.1 ]
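The two absorbing conditions can also be checked by machine. Here is a sketch (is_absorbing_chain is a name chosen here; the reachability test uses powers of T + I so that shorter paths are not lost):

    import numpy as np

    def is_absorbing_chain(T):
        """At least one absorbing state, and every state can reach one."""
        T = np.asarray(T, dtype=float)
        n = T.shape[0]
        # A state is absorbing exactly when its diagonal entry is 1
        absorbing = [j for j in range(n) if np.isclose(T[j, j], 1.0)]
        if not absorbing:
            return False
        # ((T + I)^n)[i, j] > 0 exactly when state j can reach state i in <= n steps
        P = np.linalg.matrix_power(T + np.eye(n), n)
        return all(any(P[i, j] > 0 for i in absorbing) for j in range(n))

    print(is_absorbing_chain([[1, 0.2, 0.7],
                              [0, 0.5, 0.2],
                              [0, 0.3, 0.1]]))  # True
    print(is_absorbing_chain([[1, 0, 0],
                              [0, 0.5, 0.2],
                              [0, 0.5, 0.8]]))  # False: states 2 and 3 never reach state 1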

Example: You play a game in which the probability of winning $1 is 0.3, the probability of breaking even is 0.2, and the probability of losing $1 is 0.5. If you go broke or reach $3, you quit.

1. Write the transition matrix for this process.
2. Find the limiting matrix L.
3. Describe the long-term behavior if you start with $2.
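As a check on the whole exercise, here is a sketch with states ordered by bankroll ($0, $1, $2, $3), where $0 and $3 are the absorbing states:

    import numpy as np

    T = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.0, 0.2, 0.5, 0.0],
                  [0.0, 0.3, 0.2, 0.0],
                  [0.0, 0.0, 0.3, 1.0]])

    L = np.linalg.matrix_power(T, 200)   # approximates the limiting matrix
    print(L[:, 2])                       # start with $2: ≈ [0.5102, 0, 0, 0.4898]
    # So starting with $2, in the long run you go broke with probability
    # 25/49 ≈ 0.51 and walk away with $3 with probability 24/49 ≈ 0.49.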