# Introduction to Probability Theory for Graduate Economics. Fall 2008


Introduction to Probability Theory for Graduate Economics, Fall 2008

Yiğit Sağlam (PhD Candidate in Economics, Department of Economics, University of Iowa, W210 Pappajohn Business Building, Iowa City, IA 52242)

December 1, 2008

CHAPTER 5 - STOCHASTIC PROCESSES

## 1 Stochastic Processes

A stochastic process, or sometimes a random process, is the counterpart in probability theory to a deterministic process (or a deterministic system). Instead of dealing with only one possible way the process might evolve over time (as is the case, for example, for solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy in the future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many paths the process might take, some more probable than others.

In the simplest possible case (discrete time), a stochastic process amounts to a sequence of random variables, known as a time series. Another basic type of stochastic process is a random field, whose domain is a region of space; in other words, a random function whose arguments are drawn from a range of continuously changing values.

One approach to stochastic processes treats them as functions of one or several deterministic arguments (inputs, in most cases regarded as time) whose values (outputs) are random variables: non-deterministic quantities which have certain probability distributions. The random variables corresponding to various times (or points, in the case of random fields) may be completely different; the main requirement is that these different random quantities all have the same type. Although the random values of a stochastic

process at different times may be independent random variables, in most commonly considered situations they exhibit complicated statistical correlations. Familiar examples of processes modeled as stochastic time series include stock market and exchange rate fluctuations, and random movement such as Brownian motion or random walks.

## 2 The Poisson Process

### 2.1 Definition and Properties of a Poisson Process

Definition. A stochastic process {N(t), t ≥ 0} is called a counting process if N(t) represents the total number of events that have occurred up to time t. A counting process N(t) must satisfy the following properties:

- N(t) ≥ 0 and N(t) is integer-valued, for all t ≥ 0;
- if s < t, then N(s) ≤ N(t);
- for s < t, N(t) − N(s) equals the number of events that have occurred in the interval (s, t].

Definition. A counting process possesses independent increments if the numbers of events which occur in disjoint time intervals are independent. For example, the number of events that have occurred by time 10, i.e., N(10), must be independent of the number of events that occur between times 10 and 15, i.e., N(15) − N(10).

Definition. A counting process possesses stationary increments if the distribution of the number of events that occur in any time interval depends only on the length of the interval. In other words, the number of events in the interval (t_1 + s, t_2 + s], i.e., N(t_2 + s) − N(t_1 + s), has the same distribution as the number of events in the interval (t_1, t_2], i.e., N(t_2) − N(t_1), for all t_1 < t_2 and s ≥ 0.

Definition. The counting process {N(t), t ≥ 0} is called a Poisson process with rate λ > 0 if:

1. N(0) = 0;
2. the process has independent increments;
3. the number of events in any interval of length t is Poisson distributed with mean λt; that is, for all s, t ≥ 0,

   Pr(N(t + s) − N(s) = n) = exp(−λt) (λt)^n / n!,   n = 0, 1, 2, 3, ...

It is noteworthy that condition 3 implies that a Poisson process has stationary increments, and also that E[N(t)] = λt, which explains why λ is called the rate of the process.

Definition. The function f(·) is said to be o(h) if

   lim_{h→0} f(h)/h = 0.

Definition. The counting process {N(t), t ≥ 0} is called a Poisson process with rate λ > 0 if:

1. N(0) = 0;
2. the process has stationary and independent increments;
3. Pr(N(h) = 1) = λh + o(h);
4. Pr(N(h) ≥ 2) = o(h).

Theorem. The two definitions of the Poisson process given above are equivalent.

Proof. See Ross (1993).
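The first definition can be checked by simulation: building N(t) from exponential interarrival times (anticipating Section 2.2) yields Poisson-distributed counts with mean λt. Below is a minimal sketch in Python (rather than MATLAB); the helper name `simulate_poisson_counts` and the parameter values are illustrative.

```python
import math
import random

def simulate_poisson_counts(lam, t, trials, rng):
    """Simulate N(t) by summing exponential interarrival times with rate lam."""
    counts = []
    for _ in range(trials):
        elapsed, n = 0.0, 0
        while True:
            elapsed += rng.expovariate(lam)  # next interarrival time
            if elapsed > t:
                break
            n += 1
        counts.append(n)
    return counts

rng = random.Random(0)
lam, t = 2.0, 3.0
counts = simulate_poisson_counts(lam, t, 20000, rng)

mean_n = sum(counts) / len(counts)   # should be close to E[N(t)] = lam*t = 6
p4 = counts.count(4) / len(counts)   # empirical Pr(N(t) = 4)
exact_p4 = math.exp(-lam * t) * (lam * t) ** 4 / math.factorial(4)
print(round(mean_n, 2), round(p4, 3), round(exact_p4, 3))
```

The empirical frequency of each count agrees with the Poisson probability mass function exp(−λt)(λt)^n/n! up to sampling noise.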

### 2.2 Interarrival and Waiting Time Distributions

Definition. Consider a Poisson process, and let T_1 denote the time of the first event. Further, for n > 1, let T_n denote the elapsed time between the (n−1)st and the nth events. The sequence {T_n, n = 1, 2, 3, ...} is called the sequence of interarrival times. For example, if T_1 = 5 and T_2 = 10, then the first event of the Poisson process occurred at time 5, and the second at time 15.

Proposition. T_n, n = 1, 2, 3, ..., are independent and identically distributed exponential random variables with mean 1/λ.

Remarks. The assumption of stationary and independent increments is basically equivalent to asserting that, at any point in time, the process probabilistically restarts itself. In other words, the process from any point on is independent of all that has previously occurred (by independent increments), and has the same distribution as the original process (by stationary increments). So the process has no memory, and exponential interarrival times are to be expected.

Definition. Another quantity of interest is S_n, the arrival time of the nth event, also called the waiting time until the nth event. Mathematically,

   S_n = Σ_{i=1}^{n} T_i,   n ≥ 1.

Finally, it is noteworthy that S_n has a gamma distribution with parameters n and λ.

### 2.3 Properties of the Exponential Distribution

Definition. A random variable X is memoryless if

   Pr(X > t + s | X > s) = Pr(X > t),   for all t, s ≥ 0.

In words, the probability that the first occurrence happens after time t_0 + t, given that it has not occurred by time t_0, equals the unconditional probability that it happens after time t. Whenever it is appropriate, the memoryless property of the exponential distribution is useful in economics, as one does not have to keep track of the whole history to compute the probability distribution of a variable in the current state.
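The memoryless property can be checked numerically: for exponential samples, the conditional tail probability Pr(X > t + s | X > s) matches the unconditional tail Pr(X > t). A short Python sketch with illustrative parameter values:

```python
import math
import random

rng = random.Random(2)
lam, s, t, trials = 0.8, 1.0, 2.0, 50000
samples = [rng.expovariate(lam) for _ in range(trials)]

# Unconditional tail: Pr(X > t)
p_tail = sum(x > t for x in samples) / trials

# Conditional tail: Pr(X > t + s | X > s), restricting to samples that survived past s
survived = [x for x in samples if x > s]
p_cond = sum(x > t + s for x in survived) / len(survived)

print(round(p_tail, 3), round(p_cond, 3), round(math.exp(-lam * t), 3))
```

Both estimates agree with the exact tail exp(−λt) up to sampling noise, illustrating that the process "restarts" after any elapsed time s.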

Remark. The exponential distribution is not only memoryless; it is the unique distribution possessing this property. To see this, suppose that X is memoryless, and let F̄(x) = Pr(X > x). Then it follows that

   F̄(t + s) = F̄(t) F̄(s).

That is, F̄ satisfies the functional equation g(t + s) = g(t) g(s). The only right-continuous solution of this functional equation is g(x) = exp(−λx), and since a distribution function is always right continuous, it follows that

   F̄(x) = exp(−λx),  so  F(x) = Pr(X ≤ x) = 1 − exp(−λx),

which shows that X is exponentially distributed.

Definition. The memoryless property is further illustrated by the failure (hazard) rate function:

   r(t) = f(t) / (1 − F(t)).

One can interpret r(t) dt as the probability that X will not survive for an additional time dt, given that X has survived until t. For the exponential distribution the hazard rate becomes

   r(t) = f(t) / (1 − F(t)) = λ exp(−λt) / exp(−λt) = λ.

Thus, the hazard rate of the exponential distribution is constant and equals the reciprocal of the mean.
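The constant hazard rate can also be seen empirically: estimating Pr(t < X ≤ t + dt | X > t)/dt from exponential samples gives roughly the same value λ at every t. A Python sketch (the helper `empirical_hazard` and all parameter values are illustrative; the small discretization bias is of order λ·dt/2):

```python
import random

rng = random.Random(3)
lam, trials = 0.8, 300000
samples = [rng.expovariate(lam) for _ in range(trials)]

def empirical_hazard(t, dt):
    """Estimate Pr(t < X <= t + dt | X > t) / dt from the samples."""
    alive = [x for x in samples if x > t]
    died = sum(x <= t + dt for x in alive)
    return died / len(alive) / dt

# The exponential hazard is flat: each estimate is close to lam = 0.8
# (up to sampling noise and a discretization bias of order lam*dt/2).
rates = [empirical_hazard(t, 0.05) for t in (0.0, 0.5, 1.5)]
print([round(r, 2) for r in rates])
```

A distribution with, say, an increasing hazard (a Weibull with shape > 1) would show estimates that grow with t, so this flatness is specific to the exponential.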

## 3 Discrete Time, Discrete Space Markov Chains

### 3.1 Definitions

First we consider a discrete time Markov chain. In this setup, a stochastic process {X_n, n = 0, 1, 2, ...} takes on a finite or countable number of possible values. If X_n = i, then the process is said to be in state i at time n. We suppose that whenever the process is in state i, there is a fixed probability P_ij that it will next be in state j; i.e.,

   Pr(X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, ..., X_1 = i_1, X_0 = i_0) = P_ij

for all states i_0, i_1, ..., i_{n−1}, i, j and all n ≥ 0. Such a stochastic process is called a Markov chain. One can interpret this equation in the following way: the conditional distribution of the future state X_{n+1}, given the past states X_0, X_1, ..., X_{n−1} and the present state X_n, is independent of the past states and depends only on the present state. Since the probabilities are nonnegative, and since the process must make a transition into some state, we have:

- every transition probability is nonnegative: P_ij ≥ 0 for all i, j ≥ 0;
- the probabilities out of each state sum to 1: Σ_{j=0}^{∞} P_ij = 1, for i = 0, 1, 2, ....

Definition (Chapman-Kolmogorov Equations). We now define the n-step transition probabilities P_ij^n to be the probability that a process in state i will be in state j after n additional transitions; i.e.,

   P_ij^n = Pr(X_{n+m} = j | X_m = i),   n ≥ 0, i, j ≥ 0.

It is noteworthy that the matrix P of one-step probabilities is called the transition matrix:

   P = [ P_00  P_01  P_02  ...
         P_10  P_11  P_12  ...
         ...
         P_i0  P_i1  P_i2  ...
         ...                  ]

The Chapman-Kolmogorov equations provide a method for computing these n-step transition probabilities:

   P_ij^{n+m} = Σ_{k=0}^{∞} P_ik^n P_kj^m,   for all n, m ≥ 0 and all i, j,

and are most easily understood by noting that P_ik^n P_kj^m represents the probability that, starting in i, the process will go to state j in n + m transitions through a path which takes it into state k at the nth transition. Hence:

   P_ij^{n+m} = Pr(X_{n+m} = j | X_0 = i)
              = Σ_{k=0}^{∞} Pr(X_{n+m} = j, X_n = k | X_0 = i)
              = Σ_{k=0}^{∞} Pr(X_{n+m} = j | X_n = k, X_0 = i) Pr(X_n = k | X_0 = i)
              = Σ_{k=0}^{∞} P_ik^n P_kj^m.

In matrix notation, the above equation implies P^{(n)} = P^n, where P^{(n)} denotes the matrix of n-step transition probabilities P_ij^n and P^n is the nth matrix power of P.
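For a finite chain, the Chapman-Kolmogorov identity P^{(n+m)} = P^n P^m can be verified directly. A Python sketch with an illustrative two-state transition matrix (the helpers `matmul` and `matpow` are ad hoc):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, n):
    """n-th power of P (n >= 1) by repeated multiplication."""
    R = P
    for _ in range(n - 1):
        R = matmul(R, P)
    return R

P = [[0.7, 0.3],
     [0.4, 0.6]]   # a simple 2-state transition matrix

n, m = 3, 2
lhs = matpow(P, n + m)                     # P^(n+m)
rhs = matmul(matpow(P, n), matpow(P, m))   # P^n P^m

ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)
```

Each row of any power of P still sums to 1, since every power is itself a transition matrix.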

### 3.2 Limiting Probabilities

Definition. A Markov chain is called irreducible if, for every pair of states i and j, there exists some n ≥ 0 such that P_ij^n > 0.

Definition. If state i is recurrent, then, starting in state i, the process will reenter state i again and again, in fact infinitely often. State i is recurrent if

   Σ_{n=1}^{∞} P_ii^n = ∞.

Definition. A recurrent state i is said to be positive recurrent if, starting in state i, the expected time until the process returns to state i is finite. In a finite-state Markov chain, all recurrent states are positive recurrent.

Definition. If state i is transient, then every time the process enters state i, there is a positive probability that it will never again enter that state. Thus, if state i is transient, then, starting in state i, the number of time periods that the process spends in state i has a geometric distribution with finite mean. State i is transient if

   Σ_{n=1}^{∞} P_ii^n < ∞.

Definition. State i has period d if P_ii^n = 0 whenever n is not divisible by d, and d is the largest integer with this property. For example, starting in state i, it may be possible for the process to enter state i only at times 2, 4, 6, ..., in which case state i has period 2. A state with period 1 is called aperiodic.

Definition. Positive recurrent, aperiodic states are called ergodic.

Theorem. For an irreducible ergodic Markov chain, lim_{n→∞} P_ij^n exists and is independent of i. Furthermore, letting

   π_j = lim_{n→∞} P_ij^n,   j ≥ 0,

the π_j are the unique nonnegative solution of

   π_j = Σ_{i=0}^{∞} π_i P_ij,   j ≥ 0,
   Σ_{j=0}^{∞} π_j = 1.
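The theorem can be illustrated on a two-state chain, for which the limiting probabilities are known in closed form: if P = [[1−a, a], [b, 1−b]] with 0 < a, b < 1, then π = (b/(a+b), a/(a+b)). Raising P to a high power shows every row converging to π. A Python sketch (the matrix and parameters are illustrative):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

a, b = 0.5, 0.2
P = [[1 - a, a],
     [b, 1 - b]]   # irreducible, aperiodic 2-state chain

# Raise P to a high power; both rows should converge to the same limit pi.
Pn = P
for _ in range(49):
    Pn = matmul(Pn, P)

pi = (b / (a + b), a / (a + b))   # closed-form limit: (2/7, 5/7)
print([round(x, 6) for x in Pn[0]], [round(x, 6) for x in Pn[1]])
```

The convergence is geometric at rate |1 − a − b| (the second eigenvalue of P), so fifty iterations are far more than enough here.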

Remarks. It can be shown that π_j, the limiting probability that the process is in state j at time n, also equals the long-run proportion of time that the process spends in state j. These long-run proportions are often called stationary probabilities, and the vector Π = (π_0, π_1, π_2, ...) is also referred to as the invariant distribution. This is because if the initial state is chosen according to the probabilities π_j, j ≥ 0, then the probability of being in state j at any time n is also π_j:

   Pr(X_0 = j) = π_j  implies  Pr(X_n = j) = π_j,   for all n, j ≥ 0.

Moreover, the proportion of time spent in state j equals the inverse of the mean time between visits to j. This result is a special case of the strong law of renewal processes.

Definition (Time Reversibility). The idea of a reversible Markov chain comes from the ability to invert a conditional probability using Bayes' rule:

   Pr(X_n = i | X_{n+1} = j) = Pr(X_n = i, X_{n+1} = j) / Pr(X_{n+1} = j)
                             = Pr(X_n = i) Pr(X_{n+1} = j | X_n = i) / Pr(X_{n+1} = j).

It now appears that time has been reversed. Thus, a Markov chain is said to be reversible if there is a Π such that

   π_i P_ij = π_j P_ji,   i, j ≥ 0.

This condition is also known as the detailed balance condition. Summing over i gives

   Σ_{i=0}^{∞} π_i P_ij = π_j,   j ≥ 0.

So, for reversible Markov chains, Π is always a stationary distribution.
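Birth-death chains, which only move to neighboring states, always satisfy detailed balance. The sketch below checks the condition on an illustrative three-state chain and confirms that summing detailed balance over i recovers stationarity, πP = π (the matrix and π are assumptions chosen for the example):

```python
# A birth-death chain on {0, 1, 2}; such chains satisfy detailed balance.
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]

pi = [0.25, 0.50, 0.25]   # candidate stationary distribution

# Detailed balance: pi_i P_ij == pi_j P_ji for every pair (i, j).
balanced = all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
               for i in range(3) for j in range(3))

# Summing detailed balance over i gives stationarity: pi P = pi.
piP = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
print(balanced, [round(x, 6) for x in piP])
```

The converse fails in general: a chain can have a stationary distribution without being reversible, so detailed balance is a strictly stronger property.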

### 3.3 Computing an Invariant Distribution and Simulating Markov Chains

Below is MATLAB code to compute the invariant distribution given a transition matrix for a discrete time Markov chain:

Input 1: MATLAB Code to Find the Invariant Distribution:

```matlab
function [INVD] = invdist(TM)
% invdist.m - computes the invariant distribution of transition matrix TM.
%% CODE
N = size(TM,2);
B = TM - eye(N);          % the invariant distribution solves INVD*(TM - I) = 0.
B(1:N,N) = ones(N,1);     % replace the last column by ones to impose sum(INVD) = 1.
o = zeros(1,N); o(N) = 1; % right-hand side: (0, ..., 0, 1).
INVD = o*inv(B);          % Invariant Distribution.
```

Below is MATLAB code to simulate a stochastic process which follows a discrete time Markov chain:

Input 2: MATLAB Code to Simulate Markov Chains:

```matlab
function [theta,ug] = simulationmc(G,TM,T)
% simulationmc.m - simulates a discrete time Markov chain for T periods.
%% Arguments:
% G  = grid points for the Markov chain;
% TM = transition matrix for the Markov chain;
% T  = number of periods to simulate;
%% CODE:
CDF = cumsum(TM,2);   % cumulative distribution function of each row.
i = 1;                % index of the initial state.
ind = zeros(1,T); ind(1) = i;
ug = zeros(1,T);
for t = 1:T;          % simulation for T periods.
    ug(t) = rand(1);  % pseudo-random draw from the uniform distribution.
    j = find(CDF(i,:)>=ug(t),1,'first'); % first state whose CDF exceeds the draw.
    ind(t) = j;
    i = j;
end
theta = G(ind);       % simulated path of the Markov chain.
```

### 3.4 Simple Example

Grid Points. Suppose uncertainty is caused by a stochastic shock which follows a known discrete space Markov chain. First, let G denote the grid points for the shock:

```matlab
>> G = 1:5;
```

Thus, the grid points are the integers from 1 to 5.

Transition Matrix. One can create a transition matrix for the shock in MATLAB in the following way:

```matlab
>> rand('state',12356);  % Seed the uniform pseudo-random number generator.
TM = rand(length(G));
for i=1:length(G);
    TM(i,:) = TM(i,:)./sum(TM(i,:));  % normalize each row to sum to 1.
end
```

One can interpret the transition matrix in the following way: the (1,1) entry is the probability of staying in state 1, given that the current state is 1. Similarly, the (4,3) entry is the probability of going to state 3, given that the current state is 4.

Cumulative Distribution Function. Now, one can easily compute the cumulative distribution function (CDF) of each row in the following way:

```matlab
>> CDF = cumsum(TM,2);
```

One can interpret the cumulative distribution function in the following way: the (1,2) entry is the probability that tomorrow's state is less than or equal to 2, given that the current state is 1. Similarly, the (5,3) entry is the probability that tomorrow's state is less than or equal to 3, given that the current state is 5.

Invariant Distribution. Now, one can easily compute the invariant distribution (INVD) from the transition matrix in the following way:

```matlab
>> INVD = invdist(TM);
```

One can interpret the invariant distribution in the following way: its first entry is the long-run probability of being in state 1, regardless of the initial state. Similarly, its third entry is the long-run probability of being in state 3.

Simulating from the Transition Matrix. One can simulate the shock from the Markov chain using the transition matrix for a fixed number of periods T in the following way:

```matlab
>> T = 10;
>> [theta,ug] = simulationmc(G,TM,T);
```

In this setup, theta and ug denote the shock value and the respective uniform draw in each period. One can interpret the simulation results in the following way: starting from state 1 (fixed inside the code), the state in period 1 is theta = 5. This is because, looking at the first row of the CDF, the uniform draw ug falls in the range (0.8751, 1]; thus, the shock value equals 5 in the first period. In the second period, the shock value is theta = 4, as the uniform draw falls in the range beginning at 0.4369 in the fifth row of the CDF.

It is noteworthy that the simulation results theta may change from one trial to another. This is because the seed of the pseudo-random number generator is not fixed inside simulationmc.m.

Example. Example.m in the zip file for Chapter 5 contains the steps carried out above.

## 4 Continuous Time, Continuous Space Stochastic Processes

Since most of the definitions and theorems for discrete time Markov chains can be extended to continuous time Markov chains, we will rather focus on a particular application of continuous processes in economics.

### 4.1 Approximating a First-Order Autoregressive AR(1) Process with Finite Space Markov Chains

Procedure. Suppose that a stochastic shock follows the AR(1) process

   s' = µ + λ s + ε,   with var(ε) = σ_ε², |λ| < 1,   (1)

where s' denotes next period's value of the shock. Here ε is white noise; i.e., ε is distributed with mean zero and variance σ_ε². To discretize the AR(1) process, one must assume the process stays within a bounded interval. We use the approach laid out in Tauchen (1986), who considers an AR(1) process like the one in (1) with µ = 0. Denote by F the distribution function of ε standardized to unit variance (the standard normal cdf when ε is normal). We approximate the continuous process in (1) by a discrete process in which the shock s can take on N values s_1 < ... < s_N. To determine the values s_i, Tauchen recommends the following:

   s_N = m σ_s = m (σ_ε² / (1 − λ²))^{1/2},
   s_1 = −s_N,
   s_i = s_1 + (i − 1)(s_N − s_1)/(N − 1)   (i.e., use equidistant spacing).

From this, one can calculate a transition matrix. Define p_ij ≡ Prob(s(t) = s_j | s(t−1) = s_i); this is the element in row i and column j of the transition matrix, so that Σ_j p_ij = 1. For j ∈ [2, N−1], p_ij can be computed as

   p_ij = F((s_j − λ s_i + w/2)/σ_ε) − F((s_j − λ s_i − w/2)/σ_ε),

where w = s_k − s_{k−1} (note that, given equidistant points, this value is the same for all k). These expressions can be thought of as the probability that λ s_i + ε ∈ [s_j − w/2, s_j + w/2].

At the boundaries, the probability of transitioning from state i into state 1 is

   p_i1 = F((s_1 − λ s_i + w/2)/σ_ε),

and the probability of going from state i to state N is

   p_iN = 1 − F((s_N − λ s_i − w/2)/σ_ε).

This discrete approximation to the conditional distribution of s(t) given s(t−1) converges in the sense of probability to the true conditional distribution of the stochastic process in (1).

One practical concern for the above approach is how to deal with negative values of the shock: if the shock enters a firm's technology multiplicatively, a negative shock means the firm produces negative output, which does not make much economic sense. To prevent this situation, we transform the shocks by letting s̃ = exp(s), which ensures all values of the shock are positive. Another benefit of this transformation is that the grid becomes finer at the lower end and coarser for high shock values.

Example. Example2.m in the zip file for Chapter 5 provides a simple implementation of the algorithm. Please refer to the markovprob.m file below for a way to implement the algorithm.

## 5 References

Ross, Sheldon M. Introduction to Probability Models. Fifth Edition. San Diego: Academic Press, 1993.

Silos, Pedro. "Assessing Markov-Chain Approximations: A Minimal Econometric Approach." Journal of Economic Dynamics and Control, 30 (2006).

Tauchen, George. "Finite State Markov-Chain Approximations to Univariate and Vector Autoregressions." Economics Letters, 20 (1986).

Implementation: Tauchen Algorithm.

Input 3: MATLAB Code for the Tauchen Algorithm:

```matlab
function [Gl, TM, CDF, INVD, sz] = markovprob(mue, p, s, N, m)
% markovprob.m - discretizes an AR(1) process via the Tauchen algorithm.
%% Arguments:
% mue = intercept of AR(1) process;
% p   = slope coeff. of AR(1) process;
% s   = std. dev. of residuals in AR(1) process;
% N   = # of grid points for the z variable;
% m   = width of the grid for z, in unconditional std. dev.;
%% CODE:
sz = s / ((1-p^2)^(1/2));    % unconditional std. dev. of z.
zmin = -m * sz + mue/(1-p);
zmax =  m * sz + mue/(1-p);
z = linspace(zmin,zmax,N);   % grid points.
%% Transition Matrix:
TM = zeros(N,N);             % transition matrix.
w = z(N) - z(N-1);           % grid spacing (equidistant).
for j = 1:N;
    TM(j,1) = cdf('norm',(z(1)+w/2-mue-p*z(j))/s,0,1);
    TM(j,N) = 1 - cdf('norm',(z(N)-w/2-mue-p*z(j))/s,0,1);
    for k = 2:N-1;
        TM(j,k) = cdf('norm',(z(k)+w/2-mue-p*z(j))/s,0,1)-...
                  cdf('norm',(z(k)-w/2-mue-p*z(j))/s,0,1);
    end
end
%% Cumulative Distribution Function:
CDF = cumsum(TM,2);
%% Invariant Distribution:
INVD = invdist(TM)';         % invariant distribution.
%% Grids:
Gl = exp(z');                % if log(z) follows the AR(1) process, this makes
                             % the grid finer at the lower end and coarser at
                             % the upper end.
```
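As a cross-check of the procedure implemented in markovprob.m, here is a sketch of the same construction in Python (the function name `tauchen` and the parameter values are illustrative; the standard normal cdf is built from the error function). Each row of the resulting transition matrix sums to one by construction, since the interior cell probabilities telescope between the two boundary cells.

```python
import math

def norm_cdf(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tauchen(mue, p, s, N, m):
    """Tauchen (1986) discretization of z' = mue + p*z + eps, sd(eps) = s."""
    sz = s / math.sqrt(1.0 - p ** 2)          # unconditional std. dev.
    mean = mue / (1.0 - p)                     # unconditional mean
    z = [mean - m * sz + i * (2 * m * sz) / (N - 1) for i in range(N)]
    w = z[1] - z[0]                            # equidistant grid spacing
    TM = []
    for zj in z:
        cond_mean = mue + p * zj
        row = [norm_cdf((z[0] + w / 2 - cond_mean) / s)]
        for k in range(1, N - 1):
            row.append(norm_cdf((z[k] + w / 2 - cond_mean) / s)
                       - norm_cdf((z[k] - w / 2 - cond_mean) / s))
        row.append(1.0 - norm_cdf((z[N - 1] - w / 2 - cond_mean) / s))
        TM.append(row)
    return z, TM

z, TM = tauchen(0.0, 0.9, 0.1, 5, 3)
row_sums = [sum(row) for row in TM]
print([round(r, 12) for r in row_sums])
```

With a persistent process (p close to 1), most probability mass in row i concentrates near state i, which is the qualitative behavior one should see when inspecting TM.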


LECTURE 16 Readings: Section 5.1 Lecture outline Random processes Definition of the Bernoulli process Basic properties of the Bernoulli process Number of successes Distribution of interarrival times The

### Chapter 3 RANDOM VARIATE GENERATION

Chapter 3 RANDOM VARIATE GENERATION In order to do a Monte Carlo simulation either by hand or by computer, techniques must be developed for generating values of random variables having known distributions.

### Generating Random Variables and Stochastic Processes

Monte Carlo Simulation: IEOR E4703 c 2010 by Martin Haugh Generating Random Variables and Stochastic Processes In these lecture notes we describe the principal methods that are used to generate random

### Numerical Methods for Option Pricing

Chapter 9 Numerical Methods for Option Pricing Equation (8.26) provides a way to evaluate option prices. For some simple options, such as the European call and put options, one can integrate (8.26) directly

### Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur

Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur Module No. #01 Lecture No. #15 Special Distributions-VI Today, I am going to introduce

### Supplement to Call Centers with Delay Information: Models and Insights

Supplement to Call Centers with Delay Information: Models and Insights Oualid Jouini 1 Zeynep Akşin 2 Yves Dallery 1 1 Laboratoire Genie Industriel, Ecole Centrale Paris, Grande Voie des Vignes, 92290

### Homework set 4 - Solutions

Homework set 4 - Solutions Math 495 Renato Feres Problems R for continuous time Markov chains The sequence of random variables of a Markov chain may represent the states of a random system recorded at

### Introduction to Simulation Using MATLAB

Chapter 12 Introduction to Simulation Using MATLAB A. Rakhshan and H. Pishro-Nik 12.1 Analysis versus Computer Simulation A computer simulation is a computer program which attempts to represent the real

### Generating Random Variables and Stochastic Processes

Monte Carlo Simulation: IEOR E4703 Fall 2004 c 2004 by Martin Haugh Generating Random Variables and Stochastic Processes 1 Generating U(0,1) Random Variables The ability to generate U(0, 1) random variables

### , for x = 0, 1, 2, 3,... (4.1) (1 + 1/n) n = 2.71828... b x /x! = e b, x=0

Chapter 4 The Poisson Distribution 4.1 The Fish Distribution? The Poisson distribution is named after Simeon-Denis Poisson (1781 1840). In addition, poisson is French for fish. In this chapter we will

### Diagonal, Symmetric and Triangular Matrices

Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by

### 4.1 Introduction and underlying setup

Chapter 4 Semi-Markov processes in labor market theory 4.1 Introduction and underlying setup Semi-Markov processes are, like all stochastic processes, models of systems or behavior. As extensions of Markov

### Hedging Options In The Incomplete Market With Stochastic Volatility. Rituparna Sen Sunday, Nov 15

Hedging Options In The Incomplete Market With Stochastic Volatility Rituparna Sen Sunday, Nov 15 1. Motivation This is a pure jump model and hence avoids the theoretical drawbacks of continuous path models.

### Chapter 4 Expected Values

Chapter 4 Expected Values 4. The Expected Value of a Random Variables Definition. Let X be a random variable having a pdf f(x). Also, suppose the the following conditions are satisfied: x f(x) converges

### 3. Renewal Theory. Definition 3 of the Poisson process can be generalized: Let X 1, X 2,..., iidf(x) be non-negative interarrival times.

3. Renewal Theory Definition 3 of the Poisson process can be generalized: Let X 1, X 2,..., iidf(x) be non-negative interarrival times. Set S n = n i=1 X i and N(t) = max {n : S n t}. Then {N(t)} is a

### 6 Scalar, Stochastic, Discrete Dynamic Systems

47 6 Scalar, Stochastic, Discrete Dynamic Systems Consider modeling a population of sand-hill cranes in year n by the first-order, deterministic recurrence equation y(n + 1) = Ry(n) where R = 1 + r = 1

### Introduction to Flocking {Stochastic Matrices}

Supelec EECI Graduate School in Control Introduction to Flocking {Stochastic Matrices} A. S. Morse Yale University Gif sur - Yvette May 21, 2012 CRAIG REYNOLDS - 1987 BOIDS The Lion King CRAIG REYNOLDS

### SPARE PARTS INVENTORY SYSTEMS UNDER AN INCREASING FAILURE RATE DEMAND INTERVAL DISTRIBUTION

SPARE PARS INVENORY SYSEMS UNDER AN INCREASING FAILURE RAE DEMAND INERVAL DISRIBUION Safa Saidane 1, M. Zied Babai 2, M. Salah Aguir 3, Ouajdi Korbaa 4 1 National School of Computer Sciences (unisia),

### 4 The M/M/1 queue. 4.1 Time-dependent behaviour

4 The M/M/1 queue In this chapter we will analyze the model with exponential interarrival times with mean 1/λ, exponential service times with mean 1/µ and a single server. Customers are served in order

### Performance Analysis of Computer Systems

Performance Analysis of Computer Systems Introduction to Queuing Theory Holger Brunst (holger.brunst@tu-dresden.de) Matthias S. Mueller (matthias.mueller@tu-dresden.de) Summary of Previous Lecture Simulation

### 6. Jointly Distributed Random Variables

6. Jointly Distributed Random Variables We are often interested in the relationship between two or more random variables. Example: A randomly chosen person may be a smoker and/or may get cancer. Definition.

### 4. Joint Distributions

Virtual Laboratories > 2. Distributions > 1 2 3 4 5 6 7 8 4. Joint Distributions Basic Theory As usual, we start with a random experiment with probability measure P on an underlying sample space. Suppose

### Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering

Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering Department of Industrial Engineering and Management Sciences Northwestern University September 15th, 2014

### Binomial lattice model for stock prices

Copyright c 2007 by Karl Sigman Binomial lattice model for stock prices Here we model the price of a stock in discrete time by a Markov chain of the recursive form S n+ S n Y n+, n 0, where the {Y i }

### Math Homework 7 Solutions

Math 450 - Homework 7 Solutions 1 Find the invariant distribution of the transition matrix: 0 1 0 P = 2 1 0 3 3 p 1 p 0 The equation πp = π, and the normalization condition π 1 + π 2 + π 3 = 1 give the

### P (A) = lim P (A) = N(A)/N,

1.1 Probability, Relative Frequency and Classical Definition. Probability is the study of random or non-deterministic experiments. Suppose an experiment can be repeated any number of times, so that we

### Hydrodynamic Limits of Randomized Load Balancing Networks

Hydrodynamic Limits of Randomized Load Balancing Networks Kavita Ramanan and Mohammadreza Aghajani Brown University Stochastic Networks and Stochastic Geometry a conference in honour of François Baccelli

### Continuous Random Variables

Continuous Random Variables COMP 245 STATISTICS Dr N A Heard Contents 1 Continuous Random Variables 2 11 Introduction 2 12 Probability Density Functions 3 13 Transformations 5 2 Mean, Variance and Quantiles

### Basics of Statistical Machine Learning

CS761 Spring 2013 Advanced Machine Learning Basics of Statistical Machine Learning Lecturer: Xiaojin Zhu jerryzhu@cs.wisc.edu Modern machine learning is rooted in statistics. You will find many familiar

### CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION

No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August

### Constructing Phylogenetic Trees Using Maximum Likelihood

Claremont Colleges Scholarship @ Claremont Scripps Senior Theses Scripps Student Scholarship 2012 Constructing Phylogenetic Trees Using Maximum Likelihood Anna Cho Scripps College Recommended Citation

### Stochastic Modelling and Forecasting

Stochastic Modelling and Forecasting Department of Statistics and Modelling Science University of Strathclyde Glasgow, G1 1XH RSE/NNSFC Workshop on Management Science and Engineering and Public Policy

### Confidence Intervals for Exponential Reliability

Chapter 408 Confidence Intervals for Exponential Reliability Introduction This routine calculates the number of events needed to obtain a specified width of a confidence interval for the reliability (proportion

### Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

### Probability and Random Variables. Generation of random variables (r.v.)

Probability and Random Variables Method for generating random variables with a specified probability distribution function. Gaussian And Markov Processes Characterization of Stationary Random Process Linearly

### Probability Calculator

Chapter 95 Introduction Most statisticians have a set of probability tables that they refer to in doing their statistical wor. This procedure provides you with a set of electronic statistical tables that

### Ito Excursion Theory. Calum G. Turvey Cornell University

Ito Excursion Theory Calum G. Turvey Cornell University Problem Overview Times series and dynamics have been the mainstay of agricultural economic and agricultural finance for over 20 years. Much of the

### 3. Regression & Exponential Smoothing

3. Regression & Exponential Smoothing 3.1 Forecasting a Single Time Series Two main approaches are traditionally used to model a single time series z 1, z 2,..., z n 1. Models the observation z t as a

### B. Maddah ENMG 501 Engineering Management I 03/17/07

B. Maddah ENMG 501 Engineering Management I 03/17/07 Discrete Time Markov Chains (3) Long-run Properties of MC (Stationary Solution) Consider the two-state MC of the weather condition in Example 4. P 0.5749

### Generating Random Numbers Variance Reduction Quasi-Monte Carlo. Simulation Methods. Leonid Kogan. MIT, Sloan. 15.450, Fall 2010

Simulation Methods Leonid Kogan MIT, Sloan 15.450, Fall 2010 c Leonid Kogan ( MIT, Sloan ) Simulation Methods 15.450, Fall 2010 1 / 35 Outline 1 Generating Random Numbers 2 Variance Reduction 3 Quasi-Monte

EXAM 3, FALL 003 Please note: On a one-time basis, the CAS is releasing annotated solutions to Fall 003 Examination 3 as a study aid to candidates. It is anticipated that for future sittings, only the

### 6. Distribution and Quantile Functions

Virtual Laboratories > 2. Distributions > 1 2 3 4 5 6 7 8 6. Distribution and Quantile Functions As usual, our starting point is a random experiment with probability measure P on an underlying sample spac

### Outline. Random Variables. Examples. Random Variable

Outline Random Variables M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Random variables. CDF and pdf. Joint random variables. Correlated, independent, orthogonal. Correlation,

### Principle of Data Reduction

Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

### 10.2 Series and Convergence

10.2 Series and Convergence Write sums using sigma notation Find the partial sums of series and determine convergence or divergence of infinite series Find the N th partial sums of geometric series and

### MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform

MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish

### Math 431 An Introduction to Probability. Final Exam Solutions

Math 43 An Introduction to Probability Final Eam Solutions. A continuous random variable X has cdf a for 0, F () = for 0 <

### STAT/MTHE 353: Probability II. STAT/MTHE 353: Multiple Random Variables. Review. Administrative details. Instructor: TamasLinder

STAT/MTHE 353: Probability II STAT/MTHE 353: Multiple Random Variables Administrative details Instructor: TamasLinder Email: linder@mast.queensu.ca T. Linder ueen s University Winter 2012 O ce: Je ery