Simulation Methods: Generating Random Numbers, Variance Reduction, Quasi-Monte Carlo. Leonid Kogan. MIT, Sloan. 15.450, Fall 2010




Simulation Methods
Leonid Kogan
MIT, Sloan
15.450, Fall 2010

Outline
1 Generating Random Numbers
2 Variance Reduction
3 Quasi-Monte Carlo

Overview
Simulation methods (Monte Carlo) can be used for option pricing, risk management, econometrics, etc.
Naive Monte Carlo may be too slow in some practical situations.
Many special techniques exist for variance reduction: antithetic variates, control variates, stratified sampling, importance sampling, etc.
Recent development: Quasi-Monte Carlo (low-discrepancy sequences).

The Basic Problem
Consider the basic problem of computing an expectation
  θ = E[f(X)],   X ~ pdf(X)
The Monte Carlo simulation approach generates N independent draws X_1, X_2, ..., X_N from the distribution pdf(X) and approximates
  E[f(X)] ≈ θ_N = (1/N) ∑_{i=1}^N f(X_i)
By the Law of Large Numbers, the approximation θ_N converges to the true value as N increases to infinity.
The Monte Carlo estimate θ_N is unbiased: E[θ_N] = θ.
By the Central Limit Theorem,
  √N (θ_N − θ)/σ → N(0, 1),   σ² = Var[f(X)]
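
As a concrete illustration (not from the original slides), a minimal MATLAB sketch of the plain Monte Carlo estimator for θ = E[f(X)] with the illustrative choice f(x) = x² and X ~ N(0, 1), whose true value is 1:

    % Plain Monte Carlo estimate of theta = E[f(X)] with f(x) = x.^2, X ~ N(0,1).
    % The true value is 1; the estimate converges as N grows (Law of Large Numbers).
    f = @(x) x.^2;
    for N = [1e2 1e4 1e6]
        X = randn(N, 1);              % N independent draws from pdf(X)
        thetaN = mean(f(X));          % theta_N = (1/N) * sum of f(X_i)
        fprintf('N = %7d   theta_N = %.4f\n', N, thetaN);
    end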

Outline
1 Generating Random Numbers
2 Variance Reduction
3 Quasi-Monte Carlo

Generating Random Numbers
Pseudo-random number generators produce deterministic sequences of numbers that appear stochastic and closely match the desired probability distribution.
For some standard distributions, e.g., uniform and Normal, MATLAB provides built-in random number generators.
Sometimes it is necessary to simulate from other distributions not covered by the standard software. Then apply one of the basic methods for generating random variables from a specified distribution.

The Inverse Transform Method
Consider a random variable X with a continuous, strictly increasing CDF F(x). We can simulate X according to
  X = F⁻¹(U),   U ~ Unif[0, 1]
This works because
  Prob(X ≤ x) = Prob(F⁻¹(U) ≤ x) = Prob(U ≤ F(x)) = F(x)
If F(x) has jumps or flat sections, generalize the above rule to
  X = min{ x : F(x) ≥ U }

The Inverse Transform Method. Example: Exponential Distribution
Consider an exponentially distributed random variable, characterized by the CDF
  F(x) = 1 − e^(−x/θ)
Exponential distributions often arise in credit models.
Compute F⁻¹(u): setting u = 1 − e^(−x/θ) gives x = −θ ln(1 − u), so
  X = −θ ln(1 − U) ~ −θ ln U
(the two are equally valid since 1 − U is also Unif[0, 1]).
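
A minimal MATLAB sketch of this inverse transform (θ = 2 is an illustrative value, not from the slides):

    % Inverse transform sampling from the exponential CDF F(x) = 1 - exp(-x/theta).
    theta = 2;                       % mean of the exponential distribution
    N = 1e5;
    U = rand(N, 1);                  % Unif[0,1] draws
    X = -theta * log(U);             % same distribution as -theta*log(1 - U)
    fprintf('sample mean %.3f (theory %.3f)\n', mean(X), theta);
    fprintf('sample var  %.3f (theory %.3f)\n', var(X), theta^2);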

The Inverse Transform Method. Example: Discrete Distribution
Consider a discrete random variable X with values
  c_1 < c_2 < ... < c_n,   Prob(X = c_i) = p_i
Define cumulative probabilities
  F(c_i) = q_i = ∑_{j=1}^i p_j
Simulate X as follows:
1 Generate U ~ Unif[0, 1].
2 Find K ∈ {1, ..., n} such that q_{K−1} ≤ U ≤ q_K.
3 Set X = c_K.
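
A minimal MATLAB sketch of the discrete inverse transform (the values and probabilities are illustrative):

    % Inverse transform sampling for a discrete distribution.
    c = [10 20 50];                  % values c_1 < c_2 < c_3
    p = [0.5 0.3 0.2];               % Prob(X = c_i) = p_i
    q = cumsum(p);                   % cumulative probabilities q_i
    N = 1e5;
    U = rand(N, 1);
    K = arrayfun(@(u) find(u <= q, 1, 'first'), U);   % smallest K with U <= q_K
    X = c(K);
    disp(accumarray(K, 1)' / N);     % empirical frequencies, close to p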

The Acceptance-Rejection Method
Goal: generate samples with probability density f(x). The acceptance-rejection (A-R) method can be used for multivariate problems as well.
Suppose we know how to generate samples from a distribution with pdf g(x) such that
  f(x) ≤ c g(x),   c ≥ 1
Follow the algorithm:
1 Generate X from the distribution g(x);
2 Generate U from Unif[0, 1];
3 If U ≤ f(X)/[c g(X)], return X; otherwise go to Step 1.
The probability of acceptance on each attempt is 1/c, so we want c close to 1.
See the Appendix for the derivation.

The Acceptance-Rejection Method. Example: Beta Distribution
The beta density is
  f(x) = [1 / B(α, β)] x^(α−1) (1 − x)^(β−1),   0 ≤ x ≤ 1
Assume α, β ≥ 1. Then f(x) has a maximum at (α − 1)/(α + β − 2). Define
  c = f( (α − 1)/(α + β − 2) )
and choose g(x) = 1. The A-R method becomes:
1 Generate independent U_1 and U_2 from Unif[0, 1] until c U_2 ≤ f(U_1);
2 Return U_1.
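
A minimal MATLAB sketch of this A-R sampler (α = 2, β = 3 are illustrative parameter values):

    % Acceptance-rejection sampling from a Beta(alpha,beta) density with Unif[0,1] proposals.
    alpha = 2; bet = 3;              % assume alpha, beta >= 1
    f = @(x) x.^(alpha-1) .* (1-x).^(bet-1) / beta(alpha, bet);   % beta() is the Beta function
    c = f((alpha-1) / (alpha+bet-2));                             % bound: f(x) <= c*1 on [0,1]
    N = 1e4;
    X = zeros(N, 1);
    for i = 1:N
        while true
            U1 = rand;  U2 = rand;
            if c*U2 <= f(U1)         % accept U1 with probability f(U1)/c
                X(i) = U1;
                break
            end
        end
    end
    fprintf('sample mean %.3f (theory %.3f)\n', mean(X), alpha/(alpha+bet));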

The Acceptance-Rejection Method. Example: Beta Distribution
[Figure: illustration of the acceptance-rejection method with uniformly distributed candidates; U_1 is accepted when c U_2 falls at or below f(U_1). Image by MIT OpenCourseWare. Source: Glasserman 2004, Figure 2.8]

Outline
1 Generating Random Numbers
2 Variance Reduction
3 Quasi-Monte Carlo

Variance Reduction
Suppose we have simulated N independent draws X_i from the distribution pdf(X). How accurate is our estimate of the expected value E[f(X)]?
Using the CLT, construct the 100(1 − α)% confidence interval
  [ θ_N − z_{1−α/2} σ/√N,   θ_N + z_{1−α/2} σ/√N ],   σ² = (1/N) ∑_{i=1}^N (f(X_i) − θ_N)²  (the sample variance)
where z_{1−α/2} is the (1 − α/2) percentile of the standard Normal distribution.
For a fixed number of simulations N, the length of the interval is proportional to σ.
The number of simulations required to achieve a desired accuracy is proportional to the variance of f(X_i), σ².
The idea of variance reduction: replace the original problem with another simulation problem with the same answer but smaller variance!
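
A minimal MATLAB sketch of the confidence interval construction (the target E[e^X] with X ~ N(0,1) is an illustrative choice with known answer e^0.5):

    % 95% confidence interval for a plain Monte Carlo estimate.
    f = @(x) exp(x);
    N = 1e5;
    X = randn(N, 1);
    fx = f(X);
    thetaN = mean(fx);
    sigmaN = std(fx);                % sample standard deviation of f(X_i)
    z = 1.96;                        % z_{1-alpha/2} for alpha = 5%
    ci = [thetaN - z*sigmaN/sqrt(N), thetaN + z*sigmaN/sqrt(N)];
    fprintf('estimate %.4f, 95%% CI [%.4f, %.4f], true value %.4f\n', thetaN, ci, exp(0.5));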

Antithetic Variates
Attempt to reduce variance by introducing negative dependence between pairs of replications.
Suppose we want to estimate θ = E[f(X)] for a symmetric distribution, pdf(X) = pdf(−X).
Note that
  E[ (f(X) + f(−X))/2 ] = E[f(X)],   X ~ pdf(X)
Define Y_i = [f(X_i) + f(−X_i)]/2 and compute
  θ_N^AV = (1/N) ∑_{i=1}^N Y_i
Note that the Y_i are IID, and by the CLT,
  √N (θ_N^AV − E[f(X)]) / σ_AV → N(0, 1),   σ_AV² = Var[Y_i]
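
A minimal MATLAB sketch comparing antithetic variates with plain Monte Carlo at the same computational budget (the monotone payoff f(x) = e^x and X ~ N(0,1) are illustrative choices):

    % Antithetic variates for theta = E[f(X)] with X ~ N(0,1), so pdf(X) = pdf(-X).
    f = @(x) exp(x);
    N = 1e5;                          % number of antithetic pairs
    X = randn(N, 1);
    Y = (f(X) + f(-X)) / 2;           % Y_i = [f(X_i) + f(-X_i)]/2
    plain = f(randn(2*N, 1));         % plain MC with the same budget of 2N evaluations
    fprintf('antithetic: %.4f +/- %.4f\n', mean(Y),     1.96*std(Y)/sqrt(N));
    fprintf('plain MC  : %.4f +/- %.4f\n', mean(plain), 1.96*std(plain)/sqrt(2*N));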

Antithetic Variates. When are they useful?
Assume that the computational cost of computing Y_i is roughly twice that of computing f(X_i). Antithetic variates are then useful if
  Var[θ_N^AV] < Var[ (1/(2N)) ∑_{i=1}^{2N} f(X_i) ]
Using the IID property of the Y_i, as well as of the X_i, the above condition is equivalent to
  Var[Y_i] < (1/2) Var[f(X_i)]
Since
  4 Var[Y_i] = Var[f(X_i) + f(−X_i)] = Var[f(X_i)] + Var[f(−X_i)] + 2 Cov[f(X_i), f(−X_i)]
             = 2 Var[f(X_i)] + 2 Cov[f(X_i), f(−X_i)],
antithetic variates reduce variance if
  Cov[f(X_i), f(−X_i)] < 0

Antithetic Variates. When do they work best?
Suppose that f is a monotonically increasing function. Then Cov[f(X), f(−X)] < 0 and antithetic variates reduce simulation variance. By how much?
Define the even and odd parts of f,
  f_0(X) = [f(X) + f(−X)]/2,   f_1(X) = [f(X) − f(−X)]/2
f_0(X) and f_1(X) are uncorrelated:
  E[f_0(X) f_1(X)] = (1/4) E[f²(X) − f²(−X)] = 0 = E[f_0(X)] E[f_1(X)]
(the last equality holds because E[f_1(X)] = 0 by the symmetry of pdf(X)). Conclude that
  Var[f(X)] = Var[f_0(X)] + Var[f_1(X)]
If f(X) is linear, then Var[f_0(X)] = 0, and antithetic variates eliminate all variance! Antithetics are more effective when f(X) is close to linear.

Control Variates
The idea behind the control variates approach is to decompose the unknown expectation E[Y] into a part known in closed form and a part that needs to be estimated by simulation.
There is no need to use simulation for the part known explicitly.
The variance of the remainder may be much smaller than the variance of Y.

Control Variates
Suppose we want to estimate the expected value E[Y]. On each replication, generate another variable X_i; thus, draw a sequence of pairs (X_i, Y_i). Assume that E[X] is known.
How can we use this information to reduce the variance of our estimate of E[Y]?
Define
  Y_i(b) = Y_i − b (X_i − E[X])
Note that E[Y_i(b)] = E[Y_i], so (1/N) ∑_{i=1}^N Y_i(b) is an unbiased estimator of E[Y].
Can choose b to minimize the variance of Y_i(b):
  Var[Y_i(b)] = Var[Y] − 2b Cov[X, Y] + b² Var[X]
The optimal choice b* is the OLS coefficient in a regression of Y on X:
  b* = Cov[X, Y] / Var[X]

Control Variates
The higher the R² in the regression of Y on X, the larger the variance reduction. Denoting the correlation between X and Y by ρ_XY,
  Var[ (1/N) ∑_{i=1}^N Y_i(b*) ] = (1 − ρ_XY²) Var[ (1/N) ∑_{i=1}^N Y_i ]
In practice b* is not known, but it is easy to estimate using OLS. Two-stage approach:
1 Simulate N_0 pairs (X_i, Y_i) and use them to estimate b*.
2 Simulate N more pairs and estimate E[Y] as
  (1/N) ∑_{i=1}^N [ Y_i − b*(X_i − E[X]) ],
using the first-stage estimate of b*.
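
A minimal MATLAB sketch of the two-stage control variate estimator (the choice Y = e^X with control X ~ N(0,1), E[X] = 0, is illustrative):

    % Two-stage control variate estimator of E[Y] using a control X with known mean.
    N0 = 1000;  N = 1e5;
    % Stage 1: pilot sample to estimate b* = Cov[X,Y]/Var[X]
    Xp = randn(N0, 1);  Yp = exp(Xp);
    C = cov(Xp, Yp);                  % 2x2 sample covariance matrix
    b = C(1,2) / C(1,1);
    % Stage 2: fresh sample, apply the control variate adjustment
    X = randn(N, 1);  Y = exp(X);
    Ycv = Y - b * (X - 0);            % E[X] = 0 is known
    fprintf('plain MC %.4f (s.e. %.4f)\n', mean(Y),   std(Y)/sqrt(N));
    fprintf('with CV  %.4f (s.e. %.4f)\n', mean(Ycv), std(Ycv)/sqrt(N));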

Control Variates. Example: Pricing a European Call Option
Suppose we want to price a European call option using simulation. Assume a constant interest rate r. Under the risk-neutral probability Q, we need to evaluate
  E_0^Q[ e^(−rT) max(0, S_T − K) ]
The stock price itself is a natural control variate. Assuming no dividends,
  E_0^Q[ e^(−rT) S_T ] = S_0
Consider the Black-Scholes setting with
  r = 0.05, σ = 0.3, S_0 = 50, T = 0.25
Evaluate the correlation between the option payoff and the stock price for different values of K; ρ² is the percentage of variance eliminated by the control variate.

Control Variates. Example: Pricing a European Call Option

  K     40     45     50     55     60     65     70
  ρ²    0.99   0.94   0.80   0.59   0.36   0.19   0.08

Source: Glasserman 2004, Table 4.1
For in-the-money call options, the option payoff is highly correlated with the stock price, and significant variance reduction is possible.
For out-of-the-money call options, the correlation of the option payoff with the stock price is low, and the variance reduction is very modest.
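
A minimal MATLAB sketch of this control variate in the Black-Scholes setting above (K = 55 is picked from the table for illustration; the estimated ρ² should be roughly in line with the corresponding table entry):

    % European call by Monte Carlo with the discounted stock price as control variate.
    r = 0.05; sigma = 0.3; S0 = 50; T = 0.25; K = 55;
    N = 1e5;
    Z  = randn(N, 1);
    ST = S0 * exp((r - sigma^2/2)*T + sigma*sqrt(T)*Z);   % terminal stock price under Q
    Y  = exp(-r*T) * max(ST - K, 0);                      % discounted option payoff
    X  = exp(-r*T) * ST;                                  % control variate, E[X] = S0
    C  = cov(X, Y);
    b  = C(1,2) / C(1,1);             % in practice, estimate b on a separate pilot sample
    Ycv = Y - b * (X - S0);
    R = corrcoef(X, Y);
    fprintf('plain MC %.4f (s.e. %.4f)\n', mean(Y),   std(Y)/sqrt(N));
    fprintf('with CV  %.4f (s.e. %.4f)\n', mean(Ycv), std(Ycv)/sqrt(N));
    fprintf('rho^2 = %.2f\n', R(1,2)^2);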

Control Variates. Example: Pricing an Asian Call Option
Suppose we want to price an Asian call option with the payoff
  max(0, S̄_T − K),   S̄_T = (1/J) ∑_{j=1}^J S(t_j),   t_1 < t_2 < ... < t_J ≤ T
A natural control variate is the discounted payoff of the European call option:
  X = e^(−rT) max(0, S_T − K)
The expectation of the control variate under Q is given by the Black-Scholes formula.
Note that we may use multiple controls, e.g., option payoffs at multiple dates.
Similar ideas can be used when pricing look-back options and barrier options by simulation.

Control Variates. Example: Stochastic Volatility
Suppose we want to price a European call option in a model with stochastic volatility. Consider a discrete-time setting, with the stock price following
  S(t_{i+1}) = S(t_i) exp[ (r − σ(t_i)²/2)(t_{i+1} − t_i) + σ(t_i) √(t_{i+1} − t_i) ε_{i+1}^Q ],   ε_i^Q ~ IID N(0, 1)
where σ(t_i) follows its own stochastic process.
Along with S(t_i), simulate another stock price process with constant volatility σ̄:
  S̃(t_{i+1}) = S̃(t_i) exp[ (r − σ̄²/2)(t_{i+1} − t_i) + σ̄ √(t_{i+1} − t_i) ε_{i+1}^Q ]
Pick σ̄ close to a typical value of σ(t_i). Use the same sequence of Normal variables ε_i^Q for S̃(t_i) as for S(t_i).
Use the discounted payoff of the European call option on S̃ as a control variate: its expectation is given by the Black-Scholes formula.
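
The slide leaves the volatility process unspecified; the MATLAB sketch below borrows the square-root variance dynamics and parameter values from the example two slides ahead, so treat those choices (and γ = 0.3 in particular) as illustrative assumptions:

    % Control variate for a call under stochastic volatility: a companion constant-volatility
    % path driven by the SAME normals; its discounted call payoff has a known
    % Black-Scholes mean and serves as the control.
    r = 0.05; T = 0.5; S0 = 50; K = 55; dt = 0.01; nSteps = round(T/dt); N = 1e4;
    v0 = 0.09; vbar = 0.09; kappa = 2; rho = 0.5; gam = 0.3;   % square-root variance process
    sigbar = sqrt(vbar);                     % constant volatility for the companion path
    S = S0*ones(N,1); Stil = S0*ones(N,1); v = v0*ones(N,1);
    for i = 1:nSteps
        epsQ = randn(N,1);
        uQ   = rho*epsQ + sqrt(1 - rho^2)*randn(N,1);              % corr(epsQ, uQ) = rho
        S    = S    .* exp((r - v/2)*dt        + sqrt(v*dt).*epsQ);
        Stil = Stil .* exp((r - sigbar^2/2)*dt + sigbar*sqrt(dt)*epsQ);   % same normals
        v    = max(v - kappa*(v - vbar)*dt + gam*sqrt(v*dt).*uQ, 0);      % keep variance >= 0
    end
    Y = exp(-r*T) * max(S    - K, 0);        % payoff under stochastic volatility
    X = exp(-r*T) * max(Stil - K, 0);        % control variate with Black-Scholes mean
    d1 = (log(S0/K) + (r + sigbar^2/2)*T) / (sigbar*sqrt(T));  d2 = d1 - sigbar*sqrt(T);
    Phi = @(x) 0.5*erfc(-x/sqrt(2));         % standard Normal CDF (base MATLAB)
    EX = S0*Phi(d1) - K*exp(-r*T)*Phi(d2);   % E[X] from the Black-Scholes formula
    C = cov(X, Y);  b = C(1,2)/C(1,1);
    Ycv = Y - b*(X - EX);
    fprintf('plain %.4f (s.e. %.4f)   CV %.4f (s.e. %.4f)\n', ...
            mean(Y), std(Y)/sqrt(N), mean(Ycv), std(Ycv)/sqrt(N));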

Control Variates. Example: Hedges as Control Variates
Suppose, again, that we want to price a European call option on a stock with stochastic volatility.
Let C(t, S_t) denote the price of a European call option with some constant volatility σ̄, given by the Black-Scholes formula.
Construct the process of discounted gains from a discretely rebalanced delta-hedge, where the delta is based on the Black-Scholes model with constant volatility σ̄ while the stock price follows the true stochastic-volatility dynamics:
  V(T) = V(0) + ∑_{i=1}^{I−1} [∂C(t_i, S(t_i))/∂S(t_i)] ( e^(−r t_{i+1}) S(t_{i+1}) − e^(−r t_i) S(t_i) ),   t_I = T
Under the risk-neutral probability Q,
  E_0^Q[V(T)] = V(0)
(check using iterated expectations).
Can use V(T) as a control variate. The better the discrete-time delta-hedge, the better the control variate that results.

Hedges as Control Variates. Example
Consider a model of stock returns with stochastic volatility under the risk-neutral probability measure:
  S((i+1)Δ) = S(iΔ) exp[ (r − v(iΔ)/2)Δ + √(v(iΔ) Δ) ε_{i+1}^Q ]
  v((i+1)Δ) = v(iΔ) − κ(v(iΔ) − v̄)Δ + γ √(v(iΔ) Δ) u_{i+1}^Q
  ε_i^Q ~ IID N(0, 1),   u_i^Q ~ IID N(0, 1),   corr(ε_i^Q, u_i^Q) = ρ
Price a European call option under the parameters
  r = 0.05, T = 0.5, S_0 = 50, K = 55, Δ = 0.01
  v_0 = 0.09, v̄ = 0.09, κ = 2, ρ = 0.5, γ = 0.1, 0.2, 0.3, 0.4, 0.5
Perform 10,000 simulations to estimate the option price. Report the fraction of variance eliminated by the control variate.

Hedges as Control Variates. Example

  γ                       0.1      0.2      0.3      0.4      0.5
  ρ²                      0.9944   0.9896   0.9799   0.9618   0.9512
  Naive Monte Carlo
    C                     2.7102   2.6836   2.5027   2.5505   2.4834
    S.E.(C)               0.0559   0.0537   0.0506   0.0504   0.0483
  Control variates
    C                     2.7508   2.6908   2.6278   2.5544   2.4794
    S.E.(C)               0.0042   0.0056   0.0071   0.0088   0.0107

Outline
1 Generating Random Numbers
2 Variance Reduction
3 Quasi-Monte Carlo

Quasi-Monte Carlo. Overview
Quasi-Monte Carlo (QMC), or low-discrepancy, methods present an alternative to Monte Carlo simulation. Instead of probability theory, QMC is based on number theory and algebra.
Consider the problem of integrating a function, ∫_0^1 f(x) dx.
The Monte Carlo approach prescribes simulating N draws of a random variable X ~ Unif[0, 1] and approximating
  ∫_0^1 f(x) dx ≈ (1/N) ∑_{i=1}^N f(X_i)
QMC generates a deterministic sequence X_i and approximates
  ∫_0^1 f(x) dx ≈ (1/N) ∑_{i=1}^N f(X_i)
The Monte Carlo error declines with sample size as O(1/√N). The QMC error declines almost as fast as O(1/N).

Quasi-Monte Carlo
We focus on generating a d-dimensional sequence of low-discrepancy points filling the d-dimensional hypercube [0, 1)^d.
QMC is a substitute for draws from the d-dimensional uniform distribution. As discussed, all distributions can be obtained from Unif[0, 1] using the inverse transform method.
There are many algorithms for producing low-discrepancy sequences. In financial applications, Sobol sequences have shown good performance.
In practice, due to the nature of Sobol sequences, it is recommended to use N = 2^k points (integer k) in the sequence.
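
A minimal MATLAB sketch of drawing Sobol points (sobolset and net ship with the Statistics and Machine Learning Toolbox; the integrand is an illustrative choice, not from the slides):

    % Generate N = 2^8 = 256 points of a 2-dimensional Sobol sequence and use them
    % in place of Unif[0,1)^2 draws, e.g. to integrate a function over the unit square.
    p = sobolset(2);                 % 2-dimensional Sobol sequence
    N = 2^8;                         % powers of two are recommended for Sobol points
    X = net(p, N);                   % first N points, an N-by-2 array in [0,1)^2
    f = @(u) exp(u(:,1) .* u(:,2));  % illustrative integrand
    qmcEstimate = mean(f(X));        % approximates the integral of f over [0,1)^2
    scatter(X(:,1), X(:,2), 10, 'filled'); axis([0 1 0 1]);   % compare with rand(N,2)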

Quasi-Monte Carlo. Illustration
256 points from a 2-dimensional Sobol sequence (MATLAB Code)
[Figure: scatter plot of the 256 Sobol points on the unit square [0, 1]².]

Quasi-Monte Carlo. Randomization
A low-discrepancy sequence can be randomized to produce independent draws.
Each independent draw of N points yields an unbiased estimate of ∫_0^1 f(x) dx.
By using K independent draws, each containing N points, we can construct confidence intervals.
Since the randomizations are independent, the standard Normal approximation can be used for the confidence intervals.
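
A minimal MATLAB sketch of randomized QMC with scrambled Sobol sequences (scramble with the 'MatousekAffineOwen' option is in the Statistics and Machine Learning Toolbox; the integrand and K are illustrative):

    % K independent scramblings of an N-point Sobol sequence give K independent,
    % unbiased estimates, from which a standard confidence interval follows.
    f = @(u) exp(u(:,1) .* u(:,2));      % illustrative integrand on [0,1)^2
    N = 2^10;  K = 30;
    est = zeros(K, 1);
    for k = 1:K
        p = scramble(sobolset(2), 'MatousekAffineOwen');   % independent randomization
        U = net(p, N);
        est(k) = mean(f(U));
    end
    m = mean(est);  se = std(est)/sqrt(K);
    fprintf('RQMC estimate %.5f, 95%% CI [%.5f, %.5f]\n', m, m - 1.96*se, m + 1.96*se);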

Quasi-Monte Carlo. Randomization
Two independent randomizations using 256 points from a 2-dimensional Sobol sequence (MATLAB Code)
[Figure: scatter plots of the two randomized point sets on the unit square [0, 1]².]

Readings
Campbell, J. Y., A. W. Lo, and A. C. MacKinlay, 1997, The Econometrics of Financial Markets, Princeton University Press. Section 9.4.
Boyle, P., M. Broadie, and P. Glasserman, 1997, "Monte Carlo methods for security pricing," Journal of Economic Dynamics and Control 21, 1267-1321.
Glasserman, P., 2004, Monte Carlo Methods in Financial Engineering, Springer, New York. Sections 2.2, 4.1, 4.2, 7.1, 7.2.

Appendix. Derivation of the Acceptance-Rejection Method
Suppose the A-R algorithm generates Y. Then Y has the same distribution as X, conditional on U ≤ f(X)/[c g(X)]. Derive the distribution of Y.
For any event A,
  Prob(Y ∈ A) = Prob( X ∈ A | U ≤ f(X)/[c g(X)] ) = Prob( X ∈ A, U ≤ f(X)/[c g(X)] ) / Prob( U ≤ f(X)/[c g(X)] )
Note that Prob( U ≤ f(X)/[c g(X)] | X ) = f(X)/[c g(X)], and therefore
  Prob( U ≤ f(X)/[c g(X)] ) = ∫ [ f(x) / (c g(x)) ] g(x) dx = 1/c
Conclude that
  Prob(Y ∈ A) = c Prob( X ∈ A, U ≤ f(X)/[c g(X)] ) = c ∫_A [ f(x) / (c g(x)) ] g(x) dx = ∫_A f(x) dx
Since A is an arbitrary event, this verifies that Y has density f.

MIT OpenCourseWare
http://ocw.mit.edu
15.450 Analytics of Finance, Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.