Importance Sampling




Jessi Cisewski, Carnegie Mellon University, June 2014

Outline: (1) Recall: Monte Carlo integration; (2) Importance sampling; (3) Examples. [Figure: (a) Monte Carlo, Monaco; (b) Monte Carlo Casino.] Some content and examples from Wasserman (2004).

Simple illustration: what is π? For a circle of radius r inscribed in a square of side 2r, $\frac{\text{Area}_{\text{circle}}}{\text{Area}_{\text{square}}} = \frac{\pi r^2}{(2r)(2r)} = \frac{\pi}{4}$.
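This ratio suggests a simple estimator, sketched below (my code, not from the slides, assuming numpy): sample points uniformly in the square and use the fraction that lands inside the circle.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pi(n):
    # Sample n points uniformly in the square [-1, 1] x [-1, 1].
    x = rng.uniform(-1, 1, size=n)
    y = rng.uniform(-1, 1, size=n)
    # The fraction inside the unit circle estimates pi/4.
    inside = (x**2 + y**2) <= 1
    return 4 * inside.mean()

print(estimate_pi(1_000_000))  # close to 3.14159 for large n
```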

Monte Carlo Integration: motivation. Goal: evaluate the integral $I = \int_a^b h(y)\,dy$. Sometimes we can find I analytically (e.g., if h(·) is a function from Calc I), but sometimes we can't and need a way to approximate I. Monte Carlo methods are one (of many) approaches to do this.

The Law of Large Numbers. "While nothing is more uncertain than the duration of a single life, nothing is more certain than the average duration of a thousand lives." (Elizur Wright) Figure: Elizur Wright (1804-1885), American mathematician, the father of life insurance and of insurance regulation (http://en.wikipedia.org).

Law of Large Numbers. The Law of Large Numbers describes what happens when the same experiment is performed many times: after many trials, the average of the results should be close to the expected value, and it tends to get closer as the number of trials increases. For Monte Carlo simulation, this means that we can learn properties of a random variable (mean, variance, etc.) simply by simulating it over many trials. Suppose we want to estimate the probability, p, of a coin landing heads up. How many times should we flip the coin? (A simulation sketch follows the formal statement below.)

Law of Large Numbers (LLN). Given an independent and identically distributed sequence of random variables $Y_1, Y_2, \ldots, Y_n$ with $\bar{Y}_n = n^{-1} \sum_{i=1}^{n} Y_i$ and $E(Y_i) = \mu$, for every $\epsilon > 0$, $P(|\bar{Y}_n - \mu| > \epsilon) \to 0$ as $n \to \infty$.
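Returning to the coin question above, here is a minimal sketch (my code, assuming a fair coin with p = 0.5 and numpy) showing the running proportion of heads settling near p as the number of flips grows.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.5  # assumed true probability of heads

flips = rng.binomial(1, p, size=100_000)  # 1 = heads, 0 = tails
running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)

# By the LLN, the running average converges toward p.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, running_mean[n - 1])
```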

Monte Carlo Integration: general idea. Monte Carlo methods are a form of stochastic integration used to approximate expectations by invoking the law of large numbers. Write $I = \int_a^b h(y)\,dy = \int_a^b w(y) f(y)\,dy = E_f(w(Y))$, where $f(y) = \frac{1}{b-a}$ is the pdf of a U(a, b) random variable and $w(y) = h(y)(b - a)$. By the LLN, if we take an iid sample of size N from U(a, b), we can estimate I as $\hat{I} = N^{-1} \sum_{i=1}^{N} w(Y_i) \to E(w(Y)) = I$.

Monte Carlo Integration: standard error. With $I = \int_a^b h(y)\,dy = \int_a^b w(y) f(y)\,dy = E_f(w(Y))$, the Monte Carlo estimator is $\hat{I} = N^{-1} \sum_{i=1}^{N} w(Y_i)$, and its standard error is $\widehat{SE} = s / \sqrt{N}$, where $s^2 = (N-1)^{-1} \sum_{i=1}^{N} \left( w(Y_i) - \hat{I} \right)^2$.
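A minimal sketch of this estimator and its standard error (my code; the integrand h(y) = y² on (a, b) = (0, 2), with true value 8/3, is an illustrative choice, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_integrate(h, a, b, N):
    # Draw an iid sample from U(a, b) and form w(Y_i) = h(Y_i) * (b - a).
    y = rng.uniform(a, b, size=N)
    w = h(y) * (b - a)
    I_hat = w.mean()                 # I_hat = N^{-1} * sum_i w(Y_i)
    se = w.std(ddof=1) / np.sqrt(N)  # SE_hat = s / sqrt(N)
    return I_hat, se

I_hat, se = mc_integrate(lambda y: y**2, 0.0, 2.0, 100_000)
print(I_hat, se)  # I_hat should be near 8/3 ≈ 2.6667
```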

Monte Carlo Integration: Gaussian CDF example. Goal: estimate $F_Y(y) = P(Y \le y) = E[I_{(-\infty, y)}(Y)]$ where $Y \sim N(0, 1)$: $P(Y \le y) = \int_{-\infty}^{y} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\,dt = \int_{-\infty}^{\infty} h(t) \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\,dt$, where $h(t) = 1$ if $t < y$ and $h(t) = 0$ if $t \ge y$. Draw an iid sample $Y_1, \ldots, Y_N$ from a N(0, 1); then the estimator is $\hat{I} = N^{-1} \sum_{i=1}^{N} h(Y_i) = \frac{\#\{\text{draws} < y\}}{N}$. Example 24.2 of Wasserman (2004).
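A sketch of this example as code (my code, assuming numpy and, for the exact comparison value only, scipy):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

y = 1.0  # point at which to estimate the standard normal CDF
N = 100_000
draws = rng.standard_normal(N)

# h(t) = 1 if t < y, else 0, so the estimator is the fraction of draws below y.
I_hat = np.mean(draws < y)
print(I_hat, norm.cdf(y))  # MC estimate vs. exact Phi(y)
```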

Importance Sampling: motivation. Standard Monte Carlo integration is great if you can sample from the target distribution (i.e., the desired distribution). But what if you can't sample from the target? Idea of importance sampling: draw the sample from a proposal distribution and re-weight the integral using importance weights so that the correct distribution is targeted.

Monte Carlo Integration. $I = \int h(y) f(y)\,dy$, where h is some function and f is the probability density function of Y. When the density f is difficult to sample from, importance sampling can be used: rather than sampling from f, you specify a different probability density function, g, as the proposal distribution. Then $I = \int h(y) f(y)\,dy = \int h(y) \frac{f(y)}{g(y)} g(y)\,dy = \int \frac{h(y) f(y)}{g(y)} g(y)\,dy$.

$I = E_f[h(Y)] = \int \frac{h(y) f(y)}{g(y)} g(y)\,dy = E_g\left[ \frac{h(Y) f(Y)}{g(Y)} \right]$. Hence, given an iid sample $Y_1, \ldots, Y_N$ from g, our estimator of I becomes $\hat{I} = N^{-1} \sum_{i=1}^{N} \frac{h(Y_i) f(Y_i)}{g(Y_i)} \to E_g\left[ \frac{h(Y) f(Y)}{g(Y)} \right] = I$.
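A small generic sketch of this estimator (my code; the names `is_estimate`, `f_pdf`, `g_pdf`, and the toy check are mine, not from the slides):

```python
import numpy as np
from scipy.stats import norm

def is_estimate(h, f_pdf, g_pdf, g_sample, N, rng):
    # Draw Y_1, ..., Y_N iid from the proposal g,
    y = g_sample(N, rng)
    # then average the importance-weighted integrand h(Y) f(Y) / g(Y).
    return np.mean(h(y) * f_pdf(y) / g_pdf(y))

# Toy check: E_f[Y^2] = 1 for f = N(0, 1), using the proposal g = N(0, 2).
rng = np.random.default_rng(4)
est = is_estimate(
    h=lambda y: y**2,
    f_pdf=norm(0, 1).pdf,
    g_pdf=norm(0, 2).pdf,
    g_sample=lambda n, r: r.normal(0, 2, size=n),
    N=100_000,
    rng=rng,
)
print(est)  # should be near 1
```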

Importance Sampling: selecting the proposal distribution. The standard error of $\hat{I}$ could be infinite if $g(\cdot)$ is not selected appropriately; g should have thicker tails than f (we don't want the ratio f/g to get large), since $E_g\left[ \left( \frac{h(Y) f(Y)}{g(Y)} \right)^2 \right] = \int \left( \frac{h(y) f(y)}{g(y)} \right)^2 g(y)\,dy$. Select a g that has a similar shape to f, but with thicker tails; the variance of $\hat{I}$ is minimized when $g(y) \propto |h(y)| f(y)$. Also want to be able to sample from g(y) with ease.

Importance Sampling: illustration. Goal: estimate P(Y < 0.3), where Y ~ f. Try two proposal distributions: U(0, 1) and U(0, 4).

Importance Sampling: illustration, continued. If we take 1000 samples of size 100 and find the IS estimates, we get the following estimated expected values and variances:

              Expected Value   Variance
Truth         0.206            0
g1: U(0,1)    0.206            0.0014
g2: U(0,4)    0.211            0.0075

Monte Carlo Integration: Gaussian tail probability example. Goal: estimate P(Y > 3), where $Y \sim N(0, 1)$ (the truth is 0.001349): $P(Y > 3) = \int_3^{\infty} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\,dt = \int_{-\infty}^{\infty} h(t) \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\,dt$, where $h(t) = 1$ if $t > 3$ and $h(t) = 0$ if $t \le 3$. Draw an iid sample $Y_1, \ldots, Y_{100}$ from a N(0, 1); then the estimator is $\hat{I} = \frac{1}{100} \sum_{i=1}^{100} h(Y_i) = \frac{\#\{\text{draws} > 3\}}{100}$. Example 24.6 of Wasserman (2004).

Gaussian tail probability example, continued. Draw an iid sample $Y_1, \ldots, Y_{100}$ from a N(0, 1); then the Monte Carlo estimator is $\hat{I} = \frac{1}{100} \sum_{i=1}^{100} h(Y_i)$. Alternatively, draw an iid sample $Y_1, \ldots, Y_{100}$ from a N(4, 1); then the importance sampling estimator is $\hat{I} = \frac{1}{100} \sum_{i=1}^{100} \frac{h(Y_i) f(Y_i)}{g(Y_i)}$, where f is the density of a N(0, 1) and g is the density of a N(4, 1). Example 24.6 of Wasserman (2004).
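A sketch of this comparison as code (my code, assuming numpy and scipy; the 10,000 repetitions are my choice, not the slides'):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
reps, n = 10_000, 100  # repeated samples of size 100

# Plain Monte Carlo: fraction of N(0, 1) draws above 3.
mc = np.array([np.mean(rng.standard_normal(n) > 3) for _ in range(reps)])

# Importance sampling: draw from g = N(4, 1) and reweight by f/g, with f = N(0, 1).
def is_once():
    y = rng.normal(4, 1, size=n)
    return np.mean((y > 3) * norm.pdf(y, 0, 1) / norm.pdf(y, 4, 1))

imp = np.array([is_once() for _ in range(reps)])

print("truth:", 1 - norm.cdf(3))
print("MC: mean", mc.mean(), "var", mc.var())
print("IS: mean", imp.mean(), "var", imp.var())
```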

Gaussian tail probability example, continued. If we take N = 10⁵ samples of size 100 and find the MC and IS estimates, we get the following estimated expected values and variances:

                     Expected Value   Variance
Truth                0.00135          0
Monte Carlo          0.00136          1.3 × 10⁻⁵
Importance Sampling  0.00135          9.5 × 10⁻⁸

Extensions of Importance Sampling. Sequential Monte Carlo (particle filtering): see Doucet et al. (2001). Approximate Bayesian Computation: see Turner and Van Zandt (2012) for a tutorial, and Cameron and Pettitt (2012) and Weyant et al. (2013) for applications to astronomy.

Bibliography

Cameron, E. and Pettitt, A. N. (2012), "Approximate Bayesian Computation for Astronomical Model Analysis: A Case Study in Galaxy Demographics and Morphological Transformation at High Redshift," Monthly Notices of the Royal Astronomical Society, 425, 44-65.

Doucet, A., De Freitas, N., and Gordon, N. (2001), Sequential Monte Carlo Methods in Practice, Statistics for Engineering and Information Science, New York: Springer-Verlag.

Turner, B. M. and Van Zandt, T. (2012), "A tutorial on approximate Bayesian computation," Journal of Mathematical Psychology, 56, 69-85.

Wasserman, L. (2004), All of Statistics: A Concise Course in Statistical Inference, Springer.

Weyant, A., Schafer, C., and Wood-Vasey, W. M. (2013), "Likelihood-free cosmological inference with type Ia supernovae: approximate Bayesian computation for a complete treatment of uncertainty," The Astrophysical Journal, 764, 116.