Sampling Methods: Particle Filtering


1 Sampling Methods: Particle Filtering. CSE586 Computer Vision II, CSE Dept, Penn State Univ.

2 Recall: Importance Sampling. Procedure to estimate E_P(f(x)):
1) Generate N samples x_i from Q(x)
2) Form importance weights w_i = P(x_i) / Q(x_i)
3) Compute the empirical estimate of E_P(f(x)), the expected value of f(x) under distribution P(x), as
E_P(f(x)) ≈ Σ_i w_i f(x_i) / Σ_i w_i
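
A minimal sketch of this procedure in Python/NumPy. The target P, proposal Q, and test function f are illustrative choices (not from the slides): P = N(0,1), Q = N(0,2), f(x) = x^2.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

N = 100_000
x = rng.normal(0.0, 2.0, size=N)            # 1) draw N samples x_i from Q = N(0,2)
w = norm.pdf(x, 0, 1) / norm.pdf(x, 0, 2)   # 2) importance weights w_i = P(x_i)/Q(x_i)
est = np.sum(w * x**2) / np.sum(w)          # 3) self-normalized estimate of E_P[x^2]
print(est)                                  # close to 1.0, the variance of N(0,1)
```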

3 Resampling. Note: we thus have a set of weighted samples (x_i, w_i), i = 1, ..., N. If we really need random samples from P, we can generate them by resampling such that the likelihood of choosing value x_i is proportional to its weight w_i. This involves sampling from a discrete distribution of N possible values (the N values of x_i). Therefore, regardless of the dimensionality of vector x, we are resampling from a 1D distribution (we are essentially sampling from the indices 1...N, in proportion to the importance weights w_i), so we can use the inverse transform sampling method we discussed earlier.
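
A sketch of that 1D inverse transform over the indices. The helper name and array layout are assumptions: x holds the N samples (any dimensionality, indexed along the first axis) and w their weights.

```python
import numpy as np

def resample(x, w, rng=None):
    """Draw N new samples from {x_i} with probability proportional to w_i,
    via inverse transform sampling on the discrete index distribution."""
    if rng is None:
        rng = np.random.default_rng()
    w = np.asarray(w, dtype=float)
    cdf = np.cumsum(w / w.sum())       # discrete CDF over the indices 1..N
    cdf[-1] = 1.0                      # guard against floating-point round-off
    u = rng.random(len(w))             # one uniform per new sample
    return x[np.searchsorted(cdf, u)]  # index lookup inverts the CDF
```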

4 Sequential Monte Carlo Methods. Sequential Importance Sampling (SIS) and the closely related algorithm Sampling Importance Resampling (SIR) are known by various names in the literature:
- bootstrap filtering
- particle filtering
- Condensation algorithm
- survival of the fittest
General idea: importance sampling on time series data, with samples and weights updated as each new data term is observed. Well-suited for simulating recursive Bayes filtering!

5 Recall: Bayes Filtering. Two-step iteration at each time t:
Motion Prediction Step: P(x_t | z_{1:t-1}) = ∫ P(x_t | x_{t-1}) P(x_{t-1} | z_{1:t-1}) dx_{t-1}
Data Correction Step (Bayes rule): P(x_t | z_{1:t}) = P(z_t | x_t) P(x_t | z_{1:t-1}) / ∫ P(z_t | x_t) P(x_t | z_{1:t-1}) dx_t

6 Recall: Bayes Filtering. Problem: in general we get intractable integrals.
Motion Prediction Step: the integral ∫ P(x_t | x_{t-1}) P(x_{t-1} | z_{1:t-1}) dx_{t-1} has no closed form for general models.
Data Correction Step (Bayes rule): likewise the normalizing integral ∫ P(z_t | x_t) P(x_t | z_{1:t-1}) dx_t in the denominator.

7 Sequential Monte Carlo Methods. Intuition: represent probability distributions by samples (called particles). Each particle is a guess at the true state. For each one, simulate its motion update and add noise to get a motion prediction. Measure the likelihood of this prediction, and weight the resulting particles proportional to their likelihoods.

8 Back to Bayes Filtering. Data Correction Step (Bayes rule): P(x_t | z_{1:t}) = P(z_t | x_t) P(x_t | z_{1:t-1}) / c. The integral in the denominator of Bayes rule disappears as a consequence of representing distributions by a weighted set of samples: since we have only a finite number of samples, the normalization constant c is simply the sum of the weights!

9 Back to Bayes Filtering. Now let's write the Bayes filter by combining the motion prediction and data correction steps into one equation:
P(x_t | z_{1:t}) = c P(z_t | x_t) ∫ P(x_t | x_{t-1}) P(x_{t-1} | z_{1:t-1}) dx_{t-1}
[new posterior] = c [data term] ∫ [motion term] [old posterior] dx_{t-1}

10 Monte Carlo Bayes Filtering. Assume the posterior at time t-1 (which is the prior at time t) has been approximated as a set of N weighted particles {x^i_{t-1}, w^i_{t-1}}, so that
P(x_{t-1} | z_{1:t-1}) ≈ Σ_i w^i_{t-1} δ(x_{t-1} - x^i_{t-1})
where δ(·) is the Dirac delta function. Useful property: ∫ f(x) δ(x - a) dx = f(a).

11 Monte Carlo Bayes Filtering. Then the motion prediction integral simplifies to a summation:
∫ P(x_t | x_{t-1}) P(x_{t-1} | z_{1:t-1}) dx_{t-1}  [motion prediction integral]
≈ ∫ P(x_t | x_{t-1}) Σ_i w^i_{t-1} δ(x_{t-1} - x^i_{t-1}) dx_{t-1}  [the prior had been approximated by N particles]
= Σ_i w^i_{t-1} ∫ P(x_t | x_{t-1}) δ(x_{t-1} - x^i_{t-1}) dx_{t-1}  [exchange order of summation and integration]
= Σ_i w^i_{t-1} P(x_t | x^i_{t-1})  [property of the Dirac delta function]

12 Monte Carlo Bayes Filtering. Our Bayes filtering equation thus simplifies as well:
P(x_t | z_{1:t}) = c P(z_t | x_t) Σ_i w^i_{t-1} P(x_t | x^i_{t-1})  [plugging in the result from the previous page]
= c Σ_i w^i_{t-1} P(z_t | x_t) P(x_t | x^i_{t-1})  [bringing the term that doesn't depend on i into the summation]

13 Monte Carlo Bayes Filtering. Our new posterior is therefore
P(x_t | z_{1:t}) = c Σ_i w^i_{t-1} P(z_t | x_t) P(x_t | x^i_{t-1})
but this is still not amenable to computation in closed form for arbitrary motion models and likelihood functions (e.g. we would have to integrate it to compute the normalization constant c).
Idea 1: Let's approximate the posterior as a set of N samples!
Idea 2: Hey, wait a minute, the prior was already represented as a set of N samples! Why don't we just update each of those?

14 Monte Carlo Bayes Filtering. Approach: for each sample x^i_{t-1}, generate a new sample x^i_t by importance sampling, using some convenient proposal distribution Q. So, generate a sample
x^i_t ~ Q(x_t | x^i_{t-1}, z_t)
and compute its importance weight
w^i_t = w^i_{t-1} P(z_t | x^i_t) P(x^i_t | x^i_{t-1}) / Q(x^i_t | x^i_{t-1}, z_t)

15 Monte Carlo Bayes Filtering. We then can approximate our posterior as
P(x_t | z_{1:t}) ≈ Σ_i w̃^i_t δ(x_t - x^i_t)
where w̃^i_t = w^i_t / Σ_j w^j_t are the normalized weights.

16 SIS Algorithm.
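
The algorithm box on this slide did not survive transcription; below is a sketch of one SIS iteration, consistent with the sampling and weight update on slide 14. All callables (sample_q, p_motion, p_likelihood, q_pdf) are hypothetical placeholders for the model's proposal sampler and densities.

```python
import numpy as np

def sis_step(particles, weights, z, sample_q, p_motion, p_likelihood, q_pdf):
    """One Sequential Importance Sampling update over the whole particle set."""
    new_particles = sample_q(particles, z)          # x_t^i ~ Q(x_t | x_{t-1}^i, z_t)
    weights = weights * (p_likelihood(z, new_particles)
                         * p_motion(new_particles, particles)
                         / q_pdf(new_particles, particles, z))  # SIS weight update
    weights = weights / weights.sum()               # normalize so weights sum to one
    return new_particles, weights
```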

17 SIS Degeneracy. Unfortunately, pure SIS suffers from degeneracy: in many cases, after a few iterations, all but one particle will have negligible weight.
[Figure: illustration of degeneracy, showing the particle weights w at Time 1, Time 10, and Time 19.]

18 Resampling to Combat Degeneracy. Sampling with replacement to get N new samples, each having equal weight 1/N:
- samples with high weight get replicated
- samples with low weight die off
- concentrates particles in areas of higher probability
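
The slide describes plain multinomial resampling (sampling with replacement), which the inverse-transform sketch after slide 3 already implements. A common lower-variance alternative, not mentioned in the slides, is systematic resampling, sketched here:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Resample N particles using one uniform offset and N evenly spaced
    points on the CDF: high-weight particles replicate, low-weight ones die."""
    if rng is None:
        rng = np.random.default_rng()
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N   # N stratified points in [0,1)
    cdf = np.cumsum(np.asarray(weights, float) / np.sum(weights))
    cdf[-1] = 1.0                                   # guard against round-off
    idx = np.searchsorted(cdf, positions)
    return particles[idx], np.full(N, 1.0 / N)      # equal weights after resampling
```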

19 Generic Particle Filter.
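
This algorithm box was also lost in transcription. A sketch of the loop it would contain, reusing the sis_step and systematic_resample sketches above; the effective-sample-size trigger for resampling is one common convention and an assumption here, not something stated on the slide (pass ess_threshold=None to resample every step).

```python
import numpy as np

def particle_filter(particles, weights, observations, sample_q, p_motion,
                    p_likelihood, q_pdf, ess_threshold=None):
    """Generic particle filter: an SIS update per observation, plus resampling."""
    for z in observations:
        particles, weights = sis_step(particles, weights, z,
                                      sample_q, p_motion, p_likelihood, q_pdf)
        ess = 1.0 / np.sum(weights ** 2)    # effective sample size diagnostic
        if ess_threshold is None or ess < ess_threshold:
            particles, weights = systematic_resample(particles, weights)
    return particles, weights
```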

20 Sampling Importance Resampling (SIR). SIR is a special case of the generic particle filter where:
- the prior density is used as the proposal density: Q(x_t | x^i_{t-1}, z_t) = P(x_t | x^i_{t-1})
- resampling is done every iteration, so the old weights w^i_{t-1} are all equal (1/N) due to resampling
Therefore the motion term cancels with the proposal in the weight update, and thus
w^i_t ∝ P(z_t | x^i_t)

21 SIR Algorithm.
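
Again the algorithm box was lost; under the two SIR conditions on the previous slide, each iteration reduces to the following sketch. sample_motion and p_likelihood are placeholder callables for the model.

```python
import numpy as np

def sir_step(particles, z, sample_motion, p_likelihood, rng=None):
    """One SIR iteration: propose from the prior, weight by the likelihood,
    then resample so all weights return to 1/N."""
    if rng is None:
        rng = np.random.default_rng()
    particles = sample_motion(particles, rng)       # x_t^i ~ P(x_t | x_{t-1}^i)
    w = p_likelihood(z, particles)                  # w_t^i proportional to P(z_t | x_t^i)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
    return particles[idx]
```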

22 Drawing from the Prior Density. Note: when we use the prior as the importance density, we only need to sample from the process noise distribution (typically uniform or Gaussian). Why? Recall: x_k = f_k(x_{k-1}, v_{k-1}), where v is process noise. Thus we can sample from the prior P(x_k | x_{k-1}) by starting with sample x^i_{k-1}, generating a noise vector v^i_{k-1} from the noise process, and forming the noisy sample x^i_k = f_k(x^i_{k-1}, v^i_{k-1}). If the noise is additive, this leads to a very simple interpretation: move each particle using motion prediction, then add noise.
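
For the additive-noise case the slide describes, a sketch of the sample_motion callable assumed by the sir_step sketch above. The identity motion f_k and the noise scale 0.1 are illustrative assumptions, not from the slides.

```python
import numpy as np

def sample_motion(particles, rng):
    """Sample x_k^i ~ P(x_k | x_{k-1}^i): apply the deterministic motion f_k
    (here the identity, as an example), then add Gaussian process noise."""
    predicted = particles                               # f_k(x_{k-1}): motion prediction
    noise = rng.normal(0.0, 0.1, size=particles.shape)  # v_{k-1}: additive noise (assumed scale)
    return predicted + noise
```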

23 SIR Filtering Illustration.
[Figure: one SIR cycle, repeated over time: an equally weighted particle set {x_k^(m), 1/M}, m = 1..M, is reweighted by the likelihood to {x_k^(m), w_k^(m)}, resampled back to equal weights {x̃_k^(m), 1/M}, and propagated through the motion model to {x_{k+1}^(m), 1/M}, and so on.]

24 Problems with SIS/SIR.
Degeneracy: in SIS, after several iterations all samples except one tend to have negligible weight. Thus a lot of computational effort is spent on particles that make no contribution. Resampling is supposed to fix this, but also causes a problem...
Sample impoverishment: in SIR, after several iterations all samples tend to collapse into a single state. The ability to represent multimodal distributions is thus short-lived.

25 Particle Filter Failure Analysis: References.
King and Forsyth, "How Does CONDENSATION Behave with a Finite Number of Samples?", ECCV 2000.
Karlin and Taylor, A First Course in Stochastic Processes, 2nd edition, Academic Press, 1975.

26 Particle Filter Failure Analysis: Summary. Condensation/SIR is asymptotically correct as the number of samples tends towards infinity. However, as a practical issue, it has to be run with a finite number of samples. Iterations of Condensation form a Markov chain whose state space is quantized representations of a density. This Markov chain has some undesirable properties:
- high variance: different runs can lead to very different answers
- low apparent variance within each individual run (appears stable)
- the state can collapse to a single peak in time roughly linear in the number of samples
- the tracker may appear to follow peaks in the posterior even in the absence of any meaningful measurements
These properties are generally known as sample impoverishment.

27 Stationary Analysis. For simplicity, we focus on tracking problems with stationary distributions (the posterior should be the same at any time step). [Because it is hard to really focus on what is going on when the posterior modes are deterministically moving around: any movement of modes in our analysis will be due to the behavior of the particle filter.]

28 A Simple PMF State Space. Consider 10 particles representing a probability mass function over 2 locations. PMF state space:
{(0,10), (1,9), (2,8), (3,7), (4,6), (5,5), (6,4), (7,3), (8,2), (9,1), (10,0)}
[Figure: two-bucket histogram over locations 1 and 2, showing configuration (4,6).]
We will now instantiate a particular two-state filtering model that we can analyze in closed form, and explore the Markov chain process (on the PMF state space above) that describes how particle filtering performs on that process.

29 Discrete, Stationary, No Noise. Assume a stationary process model with no noise:
X_{k+1} = F X_k + v_k, with F = I (identity) and v_k = 0 (no noise),
so the process model is X_{k+1} = X_k.

30 Perfect Two-State Ambiguity. Let our two filtering states be {a, b}. We define both the prior distribution and the observation model to be ambiguous (equal belief in a and b):
P(X_0 = a) = .5, P(X_0 = b) = .5
P(Z | X_k = a) = .5, P(Z | X_k = b) = .5
From the process model, the transition matrix P(X_{k+1} | X_k) is the identity:
P(a | a) = 1, P(b | a) = 0, P(a | b) = 0, P(b | b) = 1

31 Recall: Recursive Filtering.
Prediction: P(X_k | z_{1:k-1}) = Σ_{X_{k-1}} P(X_k | X_{k-1}) P(X_{k-1} | z_{1:k-1})  [predicted current state = state transition × previous estimated state]
Update: P(X_k | z_{1:k}) = P(z_k | X_k) P(X_k | z_{1:k-1}) / c  [estimated current state = measurement × predicted current state / normalization term]
These are exact propagation equations.

32 Analytic Filter Analysis.
Predict: P(X_k = a | z_{1:k-1}) = 1 × .5 = .5, and likewise P(X_k = b | z_{1:k-1}) = .5
Update: P(X_k = a | z_{1:k}) = (.5)(.5) / ((.5)(.5) + (.5)(.5)) = .25/(.25 + .25) = .5, and likewise P(X_k = b | z_{1:k}) = .25/(.25 + .25) = .5

33 Analytic Filter Analysis. Therefore, for all k, the posterior distribution is
P(X_k = a | z_{1:k}) = .5, P(X_k = b | z_{1:k}) = .5
which agrees with our intuition in regards to the stationarity and ambiguity of our two-state model. Now let's see how a particle filter behaves...

34 Particle Filter. Consider 10 particles representing a probability mass function over our 2 locations {a, b}. In accordance with our ambiguous prior P(X_0), we will initialize with 5 particles in each location, i.e. the configuration (5,5).

35 Condensation (SIR) Particle Filter.
1) Select N new samples with replacement, according to the sample weights (equal weights in this case)
2) Apply the process model to each sample: deterministic motion + noise (a no-op in this case)
3) For each new position, set the weight of the particle in accordance with the observation probability (all weights become .5 in this case)
4) Normalize weights so they sum to one (weights are still equal)

36 Condensation as Markov Chain (Key Step). Recall that 10 particles representing a probability mass function over 2 locations can be thought of as having a state space with 11 elements:
{(0,10), (1,9), (2,8), (3,7), (4,6), (5,5), (6,4), (7,3), (8,2), (9,1), (10,0)}
[Figure: two-bucket histogram over locations a and b in configuration (5,5).]

37 Condensation as Markov Chain (Key Step). We want to characterize the probability that the particle filter procedure will transition from the current configuration to a new configuration:
[Figure: the configuration (5,5) over buckets a and b, with an arrow to an unknown next configuration in the state space {(0,10), ..., (10,0)}.]

38 Condensation as Markov Chain (Key Step). We want to characterize the probability that the particle filter procedure will transition from the current configuration to a new configuration. Let P(j | i) be the probability of transitioning from (i, 10-i) to (j, 10-j).

39 Example, N = 10 samples. Starting from (5,5), the filter can jump to nearby configurations: e.g. it transitions to (4,6) with probability .2051 and to (3,7) with probability .1172 (and symmetrically to (6,4) and (7,3)).
[Figure: bar plot of P(j | 5) for j = 0..10, peaked at j = 5 with peak value about .25.]

40 Full Transition Table.
[Figure: the full transition table P(j | i) for i, j = 0..10, drawn as a family of distributions, one row per i; the row for i = 5 is the P(j | 5) plot from the previous slide.]
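
The table itself did not survive transcription, but it follows from the model setup: after the ambiguous observation all weights are equal, so resampling from configuration (i, 10-i) puts each of the 10 particles in bucket a independently with probability i/10, making row i of the table Binomial(10, i/10). A sketch that reproduces the numbers quoted on slide 39 (this derivation is my reading of the setup, not stated verbatim on the slide):

```python
import numpy as np
from scipy.stats import binom

N = 10
# P[i, j] = P(j | i): probability of moving from (i, N-i) to (j, N-j)
P = np.array([[binom.pmf(j, N, i / N) for j in range(N + 1)]
              for i in range(N + 1)])

print(round(P[5, 4], 4), round(P[5, 3], 4))  # 0.2051 0.1172, as on slide 39
print(P[0, 0], P[10, 10])                    # 1.0 1.0: the absorbing states
```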

41 The Crux of the Problem. From (5,5), there is a good chance we will jump away from (5,5), say to (6,4). Once we do that, we are no longer sampling from the transition distribution at (5,5), P(j | 5), but from the one at (6,4), P(j | 6). But this is biased off-center from (5,5). And so on: the behavior will be similar to that of a random walk.
[Figure: the distributions P(j | 5), P(j | 6), P(j | 7), each centered further from j = 5.]

42 Another Problem. P(0 | 0) = 1 and P(10 | 10) = 1: the configurations (0,10) and (10,0) are absorbing states!
[Figure: the transition table P(j | i) with the two absorbing corner states highlighted.]

43 Observations.
- The Markov chain has two absorbing states, (0,10) and (10,0).
- Once the chain gets into either of these two states, it can never get out (all the particles have collapsed into a single bucket).
- There is a nonzero probability of getting into either absorbing state, starting from (5,5).
These are the seeds of our destruction!

44 Simulation

45 Some sample runs with 10 particles

46 More Sample Runs.
[Figures: sample runs with N = 10, N = 20, and N = 100 particles.]

47 Average Time to Absorption.
[Figure: average time to absorption vs. number of particles N. Dots: from running the simulator (100 trials at N = 10, 20, 30, ...). Line: plot of 1.4N, the asymptotic analytic estimate (King and Forsyth).]

48 More Generally. Implications of a stationary process model with no noise, in a discrete state space:
- any time any bucket contains zero particles, it will forever after have zero particles (for that run)
- there is typically a nonzero probability of getting zero particles in a bucket sometime during the run
- thus, over time, the particles will inevitably collapse into a single bucket

49 Extending to Continuous Case. A similar thing happens in more realistic cases. Consider a continuous case with two stationary modes in the likelihood, where each mode has small variance with respect to the distance between modes.
[Figure: a likelihood with two narrow peaks, mode1 and mode2.]

50 Extending to Continuous Case. Because each mode's variance is very small relative to the gap between them, the likelihood between the modes is essentially zero, which is fatal to any particles that try to cross from one mode to the other via diffusion.
[Figure: mode1 and mode2 with the near-zero likelihood region between them.]

51 Extending to Continuous Case. Each mode thus becomes an isolated island, and we can reduce this case to our previous two-state analysis (each mode is one discrete state: mode1 = a, mode2 = b).
