Math 526: Brownian Motion Notes




Mike Ludkovski, 2007, all rights reserved.

Definition. A stochastic process (X_t) is called Brownian motion if:

1. The map t ↦ X_t(ω) is continuous for every ω.
2. (X_{t_1} − X_{t_0}), (X_{t_2} − X_{t_1}), ..., (X_{t_n} − X_{t_{n−1}}) are independent for any collection of times 0 ≤ t_0 ≤ t_1 ≤ ... ≤ t_n.
3. The distribution of X_t − X_s depends only on (t − s).

Property 1 is called continuity of sample paths. Property 2 is called independent increments. Property 3 is called stationarity of increments. Here are some immediate conclusions and comments:

- Properties 2 and 3 imply that (X_t) is time-homogeneous (one may shift the time axis) and space-homogeneous (since the distribution of increments does not depend on the current value X_s).
- Properties 2 and 3 also imply that (X_t) is Markov, since future increments are conditionally independent of the past given the present.
- Properties 2 and 3 are also satisfied by the Poisson process. However, the Poisson process certainly does not have continuous sample paths.
- It is not at all clear that such a process (X_t) exists, since the requirement of continuous paths is very stringent. The underlying probability space here is Ω = {continuous functions on [0, ∞)}.

Brownian Motion has Gaussian Marginals. We have the following

Theorem 1. If (X_t) is a Brownian motion, then there exist constants µ, σ² such that X_t − X_s ~ N((t − s)µ, (t − s)σ²).

Proof. Without loss of generality take s = 0, pick some n ∈ Z_+, and write

X_t − X_0 = (X_{t/n} − X_0) + (X_{2t/n} − X_{t/n}) + ... + (X_t − X_{(n−1)t/n}).

Each term on the right-hand side is independent and identically distributed. Letting n → ∞, by the central limit theorem the right-hand side must converge to some normally distributed random variable (the continuity of (X_t) makes sure that one can apply the CLT, as the increments must become smaller and smaller as n increases).
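Theorem 1 suggests a direct way to simulate BM on a time grid: draw iid N(µ∆t, σ²∆t) increments. Here is a minimal Monte Carlo sketch (the helper name and parameter values are my own, not from the notes) that also checks the marginal X_T ~ N(µT, σ²T):

```python
import random
import math

def simulate_bm(mu, sigma, T, n, rng):
    """One Brownian path with drift mu and volatility sigma on [0, T],
    built from n iid Gaussian increments N(mu*dt, sigma^2*dt)."""
    dt = T / n
    path = [0.0]
    for _ in range(n):
        path.append(path[-1] + rng.gauss(mu * dt, sigma * math.sqrt(dt)))
    return path

rng = random.Random(0)
mu, sigma, T = 0.5, 2.0, 1.0
finals = [simulate_bm(mu, sigma, T, 100, rng)[-1] for _ in range(20000)]
mean = sum(finals) / len(finals)
var = sum((f - mean) ** 2 for f in finals) / len(finals)
print(mean, var)  # should be close to mu*T = 0.5 and sigma^2*T = 4.0
```

By independence of increments, the simulated marginal does not depend on the grid size n, which is one way to see the consistency built into the definition.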

We conclude that X_t − X_0 ~ N(a(t), b(t)) for some functions a, b. However, since X_{t+s} − X_0 = (X_t − X_0) + (X_{t+s} − X_t) and the two terms are independent, we have that X_{t+s} − X_0 ~ N(a(t), b(t)) + N(a(s), b(s)) =d N(a(t) + a(s), b(t) + b(s)). On the other hand, we directly have X_{t+s} − X_0 ~ N(a(t + s), b(t + s)). It follows that a(t + s) = a(t) + a(s), and similarly for b. But this means that a, b are linear functions, i.e. a(t) = µt, b(t) = σ²t (clearly b must have positive slope), for some µ, σ².

The parameter µ is called the drift of the Brownian motion and σ is called the volatility. To summarize, if (X_t) is a BM then the marginal distribution is

X_t − X_0 ~ N(µt, σ²t).

Wiener Process. The special case µ = 0, σ² = 1, X_0 = 0 is called the Wiener process. We write (W_t) in that case. Here are some computations for the Wiener process:

E[W_t] = 0.

Var(W_t) = t.

For s < t,

Cov(W_s, W_t) = E[W_s W_t] − E[W_s]E[W_t] = E[W_s (W_s + W_t − W_s)] = E[W_s²] + E[W_s]E[W_t − W_s] = Var(W_s) + 0 = s.

The last statement is written as Cov(W_s, W_t) = min(s, t).

Here is another important computation: we find the conditional distribution of W_s given W_t = z, for s < t:

P(W_s ∈ dx | W_t ∈ dz) = P(W_s ∈ dx, W_t ∈ dz) / P(W_t ∈ dz) = P(W_s ∈ dx, (W_t − W_s) ∈ d(z − x)) / P(W_t ∈ dz).

Using the Gaussian density and independent increments, this equals

[ (1/√(2πs)) e^(−x²/(2s)) dx · (1/√(2π(t−s))) e^(−(z−x)²/(2(t−s))) dz ] / [ (1/√(2πt)) e^(−z²/(2t)) dz ]

= (1/√(2πs(t−s)/t)) exp( −(1/2)( x²/s + (z−x)²/(t−s) − z²/t ) )

= (1/√(2πs(t−s)/t)) exp( −(x − (s/t)z)² / (2s(t−s)/t) ).

The last line is the density of a Gaussian random variable, and we conclude that

W_s | W_t = z ~ N( (s/t)z, s(t−s)/t ).

This is quite intuitive: E[W_s | W_t = z] = (s/t)z, and Var(W_s | W_t = z) = s(t−s)/t.

Constructing Brownian Motion. Here is the physicist's construction of BM. This idea was first advanced by Einstein in 1906 in connection with molecular/atomic forces. Consider a particle of pollen suspended in liquid. The particle is rather large but light, and is subject to being hit at random by the liquid molecules (which are small but very many). Let us focus on the movement of the particle along the x-axis, and record its position at time t as X_t^ε. Each time the particle is hit on the left it moves to the right by a distance ε, and each time it is hit on the right it moves to the left by ε. Let L_t, M_t be the number of hits by time t on the left and right, respectively. Then X_t^ε = εL_t − εM_t (for simplicity we took X_0 = 0). It is natural to assume that L_t, M_t are two independent stationary Markov processes with iid increments; that is, L_t, M_t are two Poisson processes with the same intensity λ_ε. We immediately get E[X_t^ε] = 0 and

Var(X_t^ε) = ε² E[(L_t − M_t)²] = ε² ( 2(λ_ε t + (λ_ε t)²) − 2E[L_t M_t] ) = 2ε²λ_ε t,

since E[L_t M_t] = (λ_ε t)² by independence. So far ε was a dummy parameter, and of course we are planning on taking ε → 0. To obtain an interesting (X_t) in the limit, we should make sure that the variance above converges to some non-zero finite limit. This happens if we take λ_ε = C/(2ε²). Thus, as the displacement of the pollen particle from each hit is diminished, we naturally increase the frequency of hits (and the correct scaling is quadratic). To figure out what the limiting (X_t) is, we compute the mgf (recall that the mgf of Poisson(λ) is exp(λ(e^s − 1))):

E[e^(sX_t^ε)] = exp( λ_ε t(e^(sε) − 1) ) · exp( λ_ε t(e^(−sε) − 1) ) = exp( (tC/(2ε²))(e^(sε) + e^(−sε) − 2) ).

Since by L'Hôpital's rule we have

lim_(ε→0) (e^(sε) + e^(−sε) − 2)/ε² = lim_(ε→0) (s e^(sε) − s e^(−sε))/(2ε) = lim_(ε→0) (s² e^(sε) + s² e^(−sε))/2 = s²,

it follows that

lim_(ε→0) E[e^(sX_t^ε)] = e^(tCs²/2).

The latter is the moment generating function of a N(0, tC) random variable. Since for every ε > 0, (X_t^ε) had stationary and independent increments (because L_t, M_t do), so does the

limiting (X_t). Moreover, since the displacement ε → 0, (X_t) should be continuous. Putting it all together, we conclude that (X_t) is a Brownian motion with zero drift and σ² = C. If C = 1 then we get the Wiener process.

The name Brownian motion comes from the botanist Robert Brown, who first observed the irregular motion of pollen particles suspended in water in 1828. As you can see, it took 80 years until Einstein could provide a good mathematical model for these observations. Einstein's model was a crucial step in convincing physicists that molecules really exist and move around randomly when in liquid or gaseous form. Since then, this model has indicated to physicists that any random movement in physics should be based on Brownian motion, at least on the atomic/molecular level where external forces, such as gravity, are minimal.

Remark: note that (X_t^ε) above is a scaled simple birth-and-death process.

Recognizing Brownian Motion. We first observe that if (X_t) is a Brownian motion with µ = 0, then (X_t) is a martingale with respect to itself. Indeed, let F_t = σ{X_u : u ≤ t} be the history of X up to time t. Then for s > t,

E[X_s | F_t] = E[(X_s − X_t) + X_t | F_t] = E[X_s − X_t] + X_t = X_t,

since the expected value of any increment is zero. More generally, Y_t = X_t − µt is a martingale for any Brownian motion with drift µ.

The above property has far-reaching consequences. In fact, it turns out that the Wiener process is the canonical continuous martingale. This fact forms the basis for stochastic calculus and underlines the importance of understanding the behavior of BM. One basic application is the following Lévy characterization of the Wiener process:

Theorem 2. Suppose that (Z_t) is a continuous-time stochastic process such that:

- The paths of Z are continuous.
- (Z_t) is a martingale with respect to its own history.
- Var(Z_t − Z_s) = (t − s) for any t > s ≥ 0.

Then (Z_t − Z_0) is a Wiener process.

As a corollary, if (Z_t) is a continuous process such that Z_t − µt is a martingale and Var(Z_t) = σ²t, then (Z_t) is a Brownian motion with drift µ and volatility σ.

From Random Walk to Brownian Motion. Here is another construction of Brownian motion. Let (S_t^δ) be a simple symmetric random walk that makes steps of size ±δ at times 1/n, 2/n, .... We know that (S_t^δ) is a time- and space-stationary discrete-time martingale. In particular, E[S_t^δ] = 0 and Var(S_t^δ) = δ²nt. To pass to the limit δ → 0, we therefore select n = 1/δ². As δ → 0, we obtain a continuous-time, continuous-space stochastic process that has stationary and independent increments,

continuous paths, and the martingale property. Moreover, Var(S_t) = t, so (S_t) must be the Wiener process. One can also pick other random walks to make the convergence faster. For example, a good choice is to take the step distribution as

X_n = +δ with prob. 1/6, 0 with prob. 2/3, −δ with prob. 1/6,

and n = 3/δ². The corresponding random walk S_n = X_1 + ... + X_n can go up, go down, or stay at the same level. It is often helpful to think of Brownian motion as the limit of symmetric random walks. As we will see, most of the general results about random walks (including recurrence, the reflection principle, the ballot theorem, and ruin probabilities) carry over nicely to Brownian motion.

Hitting Time Distribution. Let (W_t) be the Wiener process and let T_b(ω) = min{t : W_t(ω) = b} be the first time (W_t) hits level b. We are interested in computing the distribution of T_b. Since the behavior of the Wiener process is symmetric about the x-axis, we take b > 0. Define Ŵ_t = W_(T_b + t) − W_(T_b) to be the future of W after time T_b. Note that T_b is random, so it is not clear what Ŵ looks like. However, it is possible to verify all the conditions of Lévy's Theorem and conclude that Ŵ is again a Wiener process. This is because (W_t) is a strong Markov process, meaning that the Markov property of (W_t) continues to hold when applied at random (stopping!) times, such as T_b. Thus, the future of (W_t) after T_b is independent of its history up to T_b. From this fact we obtain

P(W_t > b | T_b < t) = P(Ŵ_(t − T_b) > 0) = 1/2,

since P(Ŵ_s > 0) = 1/2 for any time s by symmetry. However, the LHS above can also be written as

P(W_t > b | T_b < t) = P(W_t > b, T_b < t) / P(T_b < t) = P(W_t > b) / P(T_b < t),

since the only way that W is above b at time t is if the hitting time of b has already occurred. Comparing, we conclude that

P(T_b < t) = 2 P(W_t > b) = 2 ∫_(b/√t)^∞ (1/√(2π)) e^(−x²/2) dx.

An immediate corollary is that if we take t → ∞, then the RHS goes to 2 ∫_0^∞ (1/√(2π)) e^(−x²/2) dx = 1, which means that P(T_b < ∞) = 1, irrespective of the value of b. Therefore, the Wiener process hits any level b with probability 1.
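The reflection identity P(T_b < t) = 2 P(W_t > b) is easy to probe numerically. Below is a rough sketch (grid size, sample counts, and names are my own choices); a discretized path can only undercount hits, so the Monte Carlo estimate sits slightly below the exact value:

```python
import random
import math

def running_max(t, n, rng):
    """Maximum of a Wiener path on [0, t], discretized into n Gaussian steps."""
    dt = t / n
    w = m = 0.0
    for _ in range(n):
        w += rng.gauss(0.0, math.sqrt(dt))
        m = max(m, w)
    return m

rng = random.Random(1)
b, t, paths = 1.0, 1.0, 8000
hit = sum(running_max(t, 400, rng) >= b for _ in range(paths)) / paths
exact = math.erfc(b / math.sqrt(2 * t))  # 2*P(W_t > b) = erfc(b/sqrt(2t))
print(hit, exact)  # hit is a bit below exact due to discretization
```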

Furthermore, differentiating the above with respect to t, we find that the density of T_b is given by

P(T_b ∈ dt) = (b/√(2πt³)) e^(−b²/(2t)) dt, b > 0. (1)

The latter belongs to the family of stable (or inverse Gamma) distributions (we say that T_b has a stable(1/2) distribution). In fact, here is a curious connection: let Z ~ N(0, 1) and Y = b²/Z². Then T_b =d Y. Indeed,

P(Y ≤ t) = P(b²/Z² ≤ t) = P(|Z| ≥ b/√t) = 2 P(Z ≥ b/√t).

Differentiating,

P(Y ∈ dt) = 2 · (b/(2t^(3/2))) · (1/√(2π)) e^(−(b/√t)²/2) dt,

which matches (1). Using (1) one also shows that E[T_b] = +∞ for any b ≠ 0. So the conclusion is: no matter how large b is, P(T_b < ∞) = 1; no matter how small b is, E[T_b] = ∞.

Maximum Process. Let M_t = max_(s ≤ t) W_s be the running maximum of the Wiener process W. Then (M_t) is a non-decreasing process that goes to +∞, since we know that (W_t) will eventually hit any positive level. Moreover,

P(M_t > x) = P(T_x < t) = 2 P(W_t > x) = P(√t |Z| > x),

based on properties of normal r.v.'s. We conclude that the marginal distribution of M_t is the same as that of the absolute value of a Gaussian random variable. In particular, E[M_t] = √(2t/π) and Var(M_t) = (1 − 2/π)t.

Note that while (M_t) is increasing, it does so in an imperceptible creep: for any fixed t, (M_t) is flat in the neighborhood of t with probability 1. If you draw a typical path of (M_t) on an interval of, say, [0, 1], the lengths of the flat areas will add up to 1 (even though M_1 > M_0 with probability 1). For those of you who know some analysis: the set of points where (M_t) increases is a Cantor-like set of measure zero.

By symmetry, the minimum m_t = min_(s ≤ t) W_s has the same distribution as −M_t. Since m_t → −∞ and M_t → +∞, it must be that (W_t) recrosses zero an infinite number of times. Thus the zero level (and by space-homogeneity, any level) is recurrent for (W_t). The next computation shows that much more is true.
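As an aside, the identity T_b =d b²/Z² can be tested by simulation without generating any paths: compare the empirical CDF of b²/Z² against P(T_b < t) = 2 P(W_t > b) from the reflection argument. A sketch (the parameter choices are my own):

```python
import random
import math

rng = random.Random(2)
b, t, n = 1.0, 2.0, 100000
# Sample Y = b^2/Z^2, Z standard normal; the claim is that Y has the law of T_b.
count = 0
for _ in range(n):
    z = rng.gauss(0.0, 1.0)
    if b * b / (z * z) < t:
        count += 1
est = count / n
exact = math.erfc(b / math.sqrt(2 * t))  # P(T_b < t) via reflection
print(est, exact)
```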

Probability of a Zero. We are interested in computing the probability that the Wiener process W hits zero during the time interval [s, t]. Observe that if W_s = a, then the probability that W hits zero on [s, t] is just

P(T_|a| < t − s) = ∫_0^(t−s) (|a|/√(2πy³)) e^(−a²/(2y)) dy.

However, W_s ~ N(0, s), so putting it together (i.e. conditioning on W_s) we have:

P(W has a zero in [s, t]) = ∫_(−∞)^∞ P(T_|a| < t − s) (1/√(2πs)) e^(−a²/(2s)) da

= ∫_(−∞)^∞ ∫_0^(t−s) (|a|/√(2πy³)) e^(−a²/(2y)) dy (1/√(2πs)) e^(−a²/(2s)) da

= (1/π) ∫_0^(t−s) (1/(s^(1/2) y^(3/2))) ( ∫_0^∞ a e^(−a²(y+s)/(2sy)) da ) dy

= (1/π) ∫_0^(t−s) (1/(s^(1/2) y^(3/2))) · (sy/(s + y)) dy

= (√s/π) ∫_0^(t−s) dy/((s + y)√y)

= (2/π) ∫_0^(√((t−s)/s)) dx/(1 + x²)   (by the substitution y = sx²)

= (2/π) arctan(√((t − s)/s)) = (2/π) arccos(√(s/t)).

Corollary: if s = 0, the probability of a zero on [0, t] is (2/π) arccos(0) = 1 for any t. Therefore, W returns to zero immediately after time 0. After some thinking, and using the strong Markov property of W, it follows that W has infinitely many zeros on any interval [0, t]. This is quite counterintuitive, given that E[T_ε] = +∞ even for very small ε. Thus W really oscillates wildly around zero. This is one indication that W is nowhere differentiable: it oscillates around every point, and the scale of oscillations over a time interval of length h is on the order of O(√h), which means that

lim_(h→0) (W_(t+h) − W_t)/h

cannot exist.

Time of Last Zero. Let L_t be the time of the last zero of W before time t, and let D_t be the time of the first zero of W after time t. We observe that

{L_t < s} = {D_s > t} = {W has no zeros on [s, t]}.

From the last section, we therefore find that

P(L_t < s) = 1 − (2/π) arccos(√(s/t)) = (2/π) arcsin(√(s/t)),

meaning that the density of L_t is

P(L_t ∈ ds) = ds/(π√(s(t − s))), 0 ≤ s ≤ t.
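The arccos formula lends itself to a quick Monte Carlo check. A sketch (the grid size and helper name are my own choices); to keep the discretized zero-detection unbiased, it also applies the standard Brownian-bridge crossing probability exp(−2ab/∆t) between consecutive grid points:

```python
import random
import math

def hits_zero(s, t, n, rng):
    """Does W hit 0 on [s, t]?  Start from W_s ~ N(0, s), step on an n-point
    grid, and account for crossings between grid points with the exact
    Brownian-bridge probability exp(-2*a*b/dt)."""
    dt = (t - s) / n
    w = rng.gauss(0.0, math.sqrt(s))
    for _ in range(n):
        w_next = w + rng.gauss(0.0, math.sqrt(dt))
        if w * w_next <= 0 or rng.random() < math.exp(-2 * w * w_next / dt):
            return True
        w = w_next
    return False

rng = random.Random(3)
s, t, paths = 0.5, 1.0, 10000
est = sum(hits_zero(s, t, 200, rng) for _ in range(paths)) / paths
exact = (2 / math.pi) * math.acos(math.sqrt(s / t))  # = 0.5 for s/t = 1/2
print(est, exact)
```

Without the bridge correction the grid would miss excursions that dip to zero and return between sample points, biasing the estimate downward.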

These facts are known as the Arcsine Law, and they show that the last zero of W before time t is likely to be either close to 0 or close to t. Here is another derivation of the same result using the Gaussian–Gamma connection. Note that conditional on W_t = a, the distribution of D_t is

D_t = t + T_|a| =d t + a²/γ, where γ ~ Gamma(1/2, 1/2).

Therefore,

P(D_t ≤ t + u) = ∫ P(a²/γ ≤ u) f_(W_t)(a) da = P(W_t²/γ ≤ u) = P(tγ′/γ ≤ u),

where γ′ ~ Gamma(1/2, 1/2) is another Gamma random variable, independent of γ (since T_a is independent of W_t). So the distribution of D_t − t is the same as that of tγ′/γ, i.e. the distribution of D_t is equal to that of t(1 + γ′/γ) = t/(γ/(γ + γ′)). The latter expression β = γ/(γ + γ′) is known to have a Beta(1/2, 1/2) distribution, with density

f_β(u) = 1/(π√(u(1 − u))), 0 ≤ u ≤ 1.

So P(D_t < t + u) = P(t/β < t + u) = P(β > t/(t + u)). However, as mentioned before,

P(L_t < s) = P(D_s > s + (t − s)) = P(β < s/t) = ∫_0^(s/t) du/(π√(u(1 − u))) = (2/π) arcsin(√(s/t)),

using the substitution x = √u. In particular, note that the distribution of L_1 is Beta(1/2, 1/2).

Planar BM. Let (W¹, W²) be two independent Wiener processes, and take X_t = X_0 + W_t¹, Y_t = Y_0 + W_t². Then, viewing (X_t, Y_t) as the (x, y)-coordinates of a moving particle, we obtain what is called planar Brownian motion, starting at the initial location (X_0, Y_0). Here is one curious computation. Suppose (X_0, Y_0) = (0, b), so the particle starts somewhere on the positive y-axis. Let T be the first time that Y_t = 0, i.e. the particle hits the x-axis. We are interested in the distribution of X_T. Recall from the remark after (1) that T =d b²/Z̃². It follows that X_T =d √T · Z, where (Z̃, Z) are two independent standard normal random variables. In other words, X_T =d b Z/|Z̃|, which we recall has a Cauchy distribution, namely

P(X_T ∈ dx) = b dx/((b² + x²)π).

Like one-dimensional Brownian motion, planar Brownian motion is recurrent: with probability 1 it returns to any neighborhood of any point in the plane (although, unlike in one dimension, it hits any fixed single point with probability 0).
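The Cauchy claim can be sampled directly from the representation X_T =d b·Z/|Z̃|, with no path simulation. A sketch (the parameter values are my own) comparing an empirical probability with the Cauchy(0, b) CDF:

```python
import random
import math

rng = random.Random(4)
b, x, n = 2.0, 1.0, 100000
# Sample b*Z/|Z'| with Z, Z' independent standard normals and estimate
# P(X_T <= x); compare with the Cauchy(0, b) CDF  1/2 + arctan(x/b)/pi.
below = 0
for _ in range(n):
    z, zp = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    if b * z / abs(zp) <= x:
        below += 1
est = below / n
exact = 0.5 + math.atan(x / b) / math.pi
print(est, exact)
```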