Markov Chain Monte Carlo Simulation Made Simple

Alastair Smith
Department of Politics, New York University
April 2, 2003

Markov Chain Monte Carlo (MCMC) simulation is a powerful technique for performing numerical integration, and it can be used to estimate complex econometric models numerically. In this paper I describe the intuition behind the process and show its flexibility and applicability. I conclude by demonstrating that these methods are often simpler to implement than many common techniques, such as maximum likelihood estimation (MLE).

This paper serves as a brief introduction; I do not intend to derive any results or prove any theorems. I believe that MCMC offers a powerful estimation tool, and this paper is designed to remove the mystery surrounding the process. Not only is it extremely powerful and flexible, it is also easy to implement. Given the recent growth in the power of computers, I believe that numerical procedures will be the estimation tools of the future. I outline the underlying logic and show why these techniques work.

MCMC techniques are most often used in the Bayesian context, so I start by outlining the simple linear model in the Bayesian framework. Although analytical techniques exist for this model, they are complex, and more complex models are in general analytically intractable. Having set up the estimation problem, I examine the properties of Markov chains. These properties provide the basis for the estimation procedure.

1 The Bayesian Model

While I believe that the Bayesian approach is a superior, more consistent approach to statistics than the standard frequentist approach, this debate is voluminous and not the topic of this paper. For practical purposes it is usually possible to use diffuse priors that do not influence the posterior results.

prior: f(θ)
likelihood: L(Y | θ)
posterior: f(θ | Y) ∝ f(θ) L(Y | θ)

For example, in the simple linear model θ = {β, σ²}.

2 Markov Chains

A Markov chain is a stochastic process that generates a series of observations, X. To illustrate the concept I focus on a discrete-time, discrete-state-space model. At each time period the process generates a sample, X_t, from the state space.

For a simple example, suppose that the state space is the numbers 1, 2, and 3. A Markov chain is simply a string of these numbers. The Markov property is that the probability distribution over the next observation depends only upon the current observation. Let p_ij represent the probability that the next observation is j (X_{t+1} = j), given that the current observation is i (X_t = i). A convenient way to present these transition probabilities is through a transition matrix P,

P = [ p_11  p_12  p_13
      p_21  p_22  p_23
      p_31  p_32  p_33 ].

The elements of the first row represent the probabilities of moving to the different states if the current state is 1. Therefore, p_11 represents the probability that X_{t+1} = 1 if the current state is also 1; p_12 represents the probability that X_{t+1} = 2 if the current state is 1; and so on.

Suppose that our initial observation is indeed 1 (X_0 = 1). The probability distribution for the next state is then given by the first row of P. The next question is what the probability distribution over the following observation is. To illustrate, I consider the more specific question: with what probability does X_2 = 3? There are three possible paths by which the second observation could equal 3; they are illustrated in the table below. Thus the probability that X_2 = 3 is p_11 p_13 + p_12 p_23 + p_13 p_33.

Pathway   X_0   X_1   X_2   Probability
  #1       1     1     3    p_11 p_13
  #2       1     2     3    p_12 p_23
  #3       1     3     3    p_13 p_33

Thus, for any initial state, we can calculate the probability density over the states after a given number of moves. Obviously, as the number of moves increases these calculations become increasingly tedious; yet matrix notation simplifies them. Suppose that, rather than starting with a specific state, we consider a probability distribution over the states, written as the row vector ν^(0) = (ν^(0)_1, ν^(0)_2, ν^(0)_3). If we randomly select the initial state from this distribution, then the probability distribution of the next state in the chain is given by

ν^(1) = (ν^(1)_1, ν^(1)_2, ν^(1)_3) = ν^(0) P
      = (p_11 ν^(0)_1 + p_21 ν^(0)_2 + p_31 ν^(0)_3,
         p_12 ν^(0)_1 + p_22 ν^(0)_2 + p_32 ν^(0)_3,
         p_13 ν^(0)_1 + p_23 ν^(0)_2 + p_33 ν^(0)_3).
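As a quick illustration of this matrix calculation, the following sketch propagates a starting distribution through one step of a chain. The transition matrix and starting distribution are invented for illustration only; the specific numerical matrix used in the paper's example is not reproduced in this transcription.

```python
import numpy as np

# Hypothetical 3-state transition matrix: row i gives the distribution of
# X_{t+1} when the current state is i, so each row sums to one.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Starting distribution over states 1, 2, 3 (a row vector).
v0 = np.array([1.0, 0.0, 0.0])   # start in state 1 with certainty

assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

# One step of the chain: v^(1) = v^(0) P.
v1 = v0 @ P
print("distribution after one move:", v1)

# Probability that X_2 = 3 when X_0 = 1: sum over the intermediate state,
# which is the (1,3) element of P^2 (index [0, 2] here).
print("P(X_2 = 3 | X_0 = 1) =", np.linalg.matrix_power(P, 2)[0, 2])
```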

This idea can be extended: the probability distribution over the states after the second move is simply ν^(2) = ν^(1) P = ν^(0) P². More generally, ν^(t) = ν^(0) P^t.

Of particular interest is the distribution as the chain becomes long. As the chain's length increases, the distribution over the states becomes less and less determined by the starting distribution and more and more determined by the transition probabilities. Indeed, provided the chain satisfies certain regularity conditions (for instance, it does not get stuck in one state), there exists a unique invariant distribution associated with every transition matrix. Let π represent this invariant distribution. Then for any starting distribution ν^(0), as the chain becomes long, ν^(t) tends to π (lim_{t→∞} ν^(t) = π).

There are two ways to calculate this invariant distribution. The first is analytical: it exploits the fact that π = πP and solves this system of equations. The second, which is of more relevance for this paper, is to simulate π by actually running the Markov chain. This involves choosing a starting value and simply running the chain. The initial values in the chain depend strongly upon the starting value; however, as the chain becomes longer, its elements represent random draws from the probability distribution π.

Suppose, for example, that the transition matrix is P. We could start by setting X_0 = 1 and then running the Markov chain, estimating the density of each state from the relative frequency with which it occurs. Figure 1 demonstrates that, as the number of iterations becomes large, the relative frequency of each state converges to its invariant density. We can arbitrarily increase the accuracy of these estimates simply by taking more iterations.

In this example I use a discrete-state-space model; however, these ideas extend readily to continuous-state-space models, where the transition matrix is replaced by a transition kernel (a probability density over the next state that depends only upon the current state).

[Figure 1: Relative frequency of each state (1, 2, and 3) as the number of iterations of the chain increases, converging to the invariant distribution.]
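A minimal sketch of the simulation behind a figure of this kind is given below. The transition matrix is again the illustrative one used earlier (the matrix from the paper's example is not reproduced here); the simulated frequencies are compared with the invariant distribution obtained analytically from π = πP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative transition matrix (rows are conditional distributions of X_{t+1}).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
n_states = P.shape[0]

# Simulate the chain starting from state 1 (index 0) and count visits to each state.
n_iter = 100_000
counts = np.zeros(n_states)
state = 0
for _ in range(n_iter):
    state = rng.choice(n_states, p=P[state])  # draw X_{t+1} from the current state's row
    counts[state] += 1
print("simulated frequencies:", counts / n_iter)

# Analytical invariant distribution: solve pi (P - I) = 0 subject to sum(pi) = 1.
A = np.vstack([P.T - np.eye(n_states), np.ones(n_states)])
b = np.append(np.zeros(n_states), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("invariant distribution:", pi)
```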

2.1 Exploiting Markov chains for estimation

Most of Markov chain theory revolves around finding the invariant distribution of a given Markov chain. MCMC turns the problem around: rather than finding the invariant distribution of a specific Markov chain, it starts with a specific invariant distribution and asks whether we can find a Markov chain that has this invariant distribution.[1] Typically, we already know the distribution of interest: the posterior distribution of the parameters. The key is to find a transition kernel whose invariant distribution is f(θ | Y).

[1] Each Markov process has a unique invariant distribution, yet many Markov chains can share the same invariant distribution. Thus, we are free to use any of these processes to simulate the invariant distribution.

In Bayesian estimation we want to find the posterior distribution of the parameters, f(θ | Y). As discussed above, this is often analytically intractable. However, suppose we have a Markov process P whose invariant distribution is f(θ | Y). If we run this Markov process then, as the chain becomes long, its elements represent random draws from the posterior distribution f(θ | Y). To illustrate how the process works, consider the following algorithm.

1. Choose starting values, θ^(0), and the length of the chain, n_0 + m.
2. Given the current element in the chain, θ^(t), use the Markov process P to draw the next element, θ^(t+1).
3. If t > m, then store θ^(t+1).
4. If t < m + n_0, then return to step 2; otherwise calculate and report the descriptive statistics for the elements stored in step 3.

This algorithm generates and stores n_0 elements from the chain. These elements represent random samples from the posterior distribution f(θ | Y). Thus the sample average represents an estimate of the expected value of θ, and other properties of f(θ | Y) can be estimated by examining the corresponding properties of the sample. The accuracy of these estimates depends upon the number of draws, n_0; accuracy is improved by running the chain longer.

Note that the first m iterations of the chain are discarded. The initial elements in the chain are strongly influenced by the starting value (as the figure above demonstrates); if the starting values are drawn from a low-density region of the posterior density, then without discarding the early part the chain contains too many draws from this region.[2]

[2] Another practical problem with running this algorithm is the high autocorrelation between elements in the chain, which reduces the rate at which convergence is achieved. A practical solution is to subsample from the elements stored at step 3.
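The skeleton of this procedure does not depend on the particular transition kernel. The sketch below, written purely for illustration, separates the generic loop (burn-in, storage, optional subsampling to reduce autocorrelation) from the kernel itself. As a stand-in kernel whose invariant distribution is known, it uses a Gaussian autoregressive process whose invariant distribution is N(0, 1); none of this code appears in the original paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_chain(draw_next, theta0, m, n0, thin=1):
    """Run a Markov chain: discard the first m draws, then keep n0 draws,
    retaining only every `thin`-th element to reduce autocorrelation."""
    theta = theta0
    kept = []
    t = 0
    while len(kept) < n0:
        theta = draw_next(theta)
        t += 1
        if t > m and t % thin == 0:
            kept.append(theta)
    return np.array(kept)

# Stand-in transition kernel: theta' = rho*theta + N(0, 1 - rho^2).
# Its invariant distribution is N(0, 1), so the stored draws should have mean ~0 and sd ~1.
rho = 0.9
draw_next = lambda theta: rho * theta + rng.normal(0.0, np.sqrt(1.0 - rho**2))

draws = run_chain(draw_next, theta0=10.0, m=1_000, n0=5_000, thin=5)
print("estimated mean:", draws.mean())
print("estimated sd:", draws.std())
```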

In summary, if we can find a Markov process with transition kernel P such that its invariant distribution is f(θ | Y), then we can numerically estimate this posterior distribution by running the Markov chain. Obviously, there are many important convergence considerations that I have not discussed. However, the basic point is that, if an appropriate transition kernel can be found, then estimation involves nothing more than running the Markov process. So far I have said nothing about how to find an appropriate transition kernel; it is to this point that I turn next.

3 Transition Kernels

Table 1 compares the analytically calculated probability distribution with the numerically simulated values. The accuracy of the simulation can be increased simply by increasing the number of iterations of the chain.[3]

[3] This is a convenient point at which to discuss several practical aspects of implementing MCMC methods. First, the starting value of the Markov chain affects the initial values of the chain. Over time this effect diminishes; however, if the starting values lie in very low density portions of the state space, then the choice of starting values affects the results. The usual solution is to discard the early part of the chain, which disregards those draws that are highly dependent upon the starting values. Convergence criteria??? Literature?????

Most of Markov chain theory revolves around finding the invariant distribution of a given Markov chain. MCMC turns the problem around. Typically, we already know the distribution of interest: the posterior distribution of the parameters. The key is to find a transition kernel that has this invariant distribution. To estimate the distribution we then simply need to run the Markov chain for a suitably long period.

4 Joint, marginal and conditional distributions

In the linear model we want to estimate f(β, σ² | Y). Somewhat informally, this is the probability density of seeing a particular pair of values of β and σ². Bayesians have calculated this density: it turns out that, with suitable conjugate priors,[4] f(β, σ² | Y) has a normal-inverse-gamma distribution. Unfortunately, this is about the most complicated model for which we can work with the joint posterior density analytically.

[4] A conjugate prior is one for which the resulting posterior density belongs to the same class of distributions as the prior; see Section 5.

For more complex models the joint density is simply intractable. Yet, generally, our interest is in the marginal density of a particular parameter. In the particular case of the simple linear model we typically want to know about β and σ² separately; for example, the distribution of β is all we report from a regression model. This marginal density is simply the joint density of β and σ² integrated across all possible values of σ².

The key to using MCMC is to stop thinking in terms of calculating things analytically, and instead to imagine how you could simulate a single parameter in a model if you knew all the other parameters. Suppose, for example, that you knew the marginal distribution of σ² and wanted to calculate the marginal distribution of β. To obtain the marginal density of β I could simply integrate σ² out of the joint density. While this is merely tricky in the present problem, it is impossible in more complex econometric models. However, knowing the marginal density of σ², I can take a large number of random draws from this density. For each of these draws, the conditional density of β is simple to calculate (with normal priors, f(β | Y, σ²) is also normally distributed), and I can draw a random sample of β from this conditional distribution.

Algorithm to calculate the marginal density of β given that the density of σ² is known:

1. Set t = 1.
2. Randomly draw (σ²)^(t) from its known posterior marginal distribution.
3. Calculate the posterior density of β given (σ²)^(t), i.e. f(β | Y, (σ²)^(t)).
4. Randomly draw β^(t) from f(β | Y, (σ²)^(t)).
5. Let t = t + 1 and go to step 2.

Suppose this algorithm is repeated T times. Then the T samples of β represent random draws from its marginal density: the algorithm effectively integrates out σ².
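As an illustration of this composition idea, the sketch below assumes, purely for the example, that the marginal posterior of σ² is an inverse gamma distribution and that, conditional on σ², β is normal. Repeatedly drawing σ² from its marginal and then β from its conditional yields draws of β from its marginal density. All distributions and parameter values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed known marginal posterior of sigma^2: inverse gamma with shape a and scale b,
# sampled as 1 / Gamma(shape=a, scale=1/b).
a, b = 5.0, 4.0

# Assumed conditional posterior of beta given sigma^2: Normal(b_hat, sigma^2 * V).
b_hat, V = 1.5, 0.1

T = 50_000
beta_draws = np.empty(T)
for t in range(T):
    sigma2 = 1.0 / rng.gamma(a, 1.0 / b)                     # draw sigma^2 from its marginal
    beta_draws[t] = rng.normal(b_hat, np.sqrt(sigma2 * V))   # draw beta | sigma^2

# The stored betas are draws from the marginal posterior of beta,
# with sigma^2 integrated out by simulation.
print("marginal mean of beta:", beta_draws.mean())
print("marginal sd of beta:", beta_draws.std())
```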

As an analogy, in our introductory econometrics classes we learn how to estimate the mean of a variable if we know its variance, and then how to calculate the variance if we know the mean. Being an order of magnitude harder, the calculation of the joint distribution of the mean and variance is typically omitted. Calculating the posterior density of the mean and the variance together is much harder than calculating either conditional density. However, provided we can break a model down into a series of simple conditional densities, we can estimate the marginal density of a parameter.

The algorithm above assumed that the distribution of σ² was known, and it produced a random sample from the posterior density of β. If the draws from that algorithm represent random draws from the marginal density of β, then we can simply reverse the logic of the argument and draw random samples from the conditional density of σ² given the current value of β. Given that the β's are random draws from the marginal density of β, the resulting draws of σ² represent random draws from the marginal density of σ². Hence the following algorithm simulates the posterior distributions of both β and σ².

Algorithm to calculate the marginal densities of β and σ²:

1. Set t = 0 and choose starting values, β^(0) and (σ²)^(0).
2. Calculate the posterior density of β given (σ²)^(t), i.e. f(β | Y, (σ²)^(t)).
3. Randomly draw β^(t+1) from this distribution.
4. Calculate the posterior density of σ² given β^(t+1), i.e. f(σ² | Y, β^(t+1)).
5. Randomly draw (σ²)^(t+1) from f(σ² | Y, β^(t+1)).
6. Let t = t + 1 and go to step 2.

Provided the priors are appropriately chosen, the calculation of f(β | Y, (σ²)^(t)) and f(σ² | Y, β^(t+1)) is straightforward. This algorithm can be implemented very simply in STATA; see the program OLS_MCMC.do.
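The STATA program itself is not reproduced in this transcription. As a rough illustration of the same two-block sampler, the following Python sketch alternates between the two conditional densities for a univariate regression, using the conjugate conditionals derived in Section 5.1 below. The simulated data, prior values, and chain settings are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative data: y_i = x_i * beta + e_i with true beta = 2 and true sigma^2 = 1.
N = 200
x = rng.normal(size=N)
y = 2.0 * x + rng.normal(size=N)

# Conjugate priors: beta ~ N(beta0, B0), sigma^2 ~ inverse gamma(nu0/2, delta0/2).
beta0, B0 = 0.0, 100.0
nu0, delta0 = 2.0, 2.0

m, n0 = 1_000, 5_000              # burn-in length and number of stored draws
beta, sigma2 = 0.0, 1.0           # starting values
beta_draws, sigma2_draws = [], []

for t in range(m + n0):
    # beta | sigma^2, y ~ N(beta_hat, B) with
    #   B = (1/B0 + sum(x_i^2)/sigma^2)^(-1),  beta_hat = B*(beta0/B0 + sum(x_i y_i)/sigma^2).
    B = 1.0 / (1.0 / B0 + (x @ x) / sigma2)
    beta_hat = B * (beta0 / B0 + (x @ y) / sigma2)
    beta = rng.normal(beta_hat, np.sqrt(B))

    # sigma^2 | beta, y ~ inverse gamma((nu0 + N)/2, (delta0 + SSE)/2),
    # drawn as the reciprocal of a gamma variate with scale 2/(delta0 + SSE).
    sse = np.sum((y - x * beta) ** 2)
    sigma2 = 1.0 / rng.gamma((nu0 + N) / 2.0, 2.0 / (delta0 + sse))

    if t >= m:                    # discard the burn-in period
        beta_draws.append(beta)
        sigma2_draws.append(sigma2)

print("posterior mean of beta:", np.mean(beta_draws))
print("posterior mean of sigma^2:", np.mean(sigma2_draws))
```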

5 Bayesian Updates for Simple Models

Suppose we assume that the likelihood function is normal and so is our prior:

Likelihood: p(y | θ) = (1/√(2π)) exp(−½ (y − θ)²)
Normal prior: f(θ) = (1/√(2π)) exp(−½ (θ − µ_0)²).

To make life as simple as possible, suppose initially that the variance of both the likelihood and the prior density is one. By Bayes' rule the posterior density is proportional to the product of the prior and the likelihood: p(θ | y) ∝ p(y | θ) f(θ). We can show that the posterior density is also normal. Specifically,

p(θ | y) ∝ exp(−½ (y − θ)²) exp(−½ (θ − µ_0)²) = exp(−½ [(y − θ)² + (θ − µ_0)²]).

We can expand the terms in the exponential and then collect them (completing the square):

−½ [(y − θ)² + (θ − µ_0)²] = −θ² + (y + µ_0) θ − ½ y² − ½ µ_0².

We only care about the terms in θ, since everything else is absorbed into the normalizing constant. Matching −θ² + (y + µ_0) θ with −(θ − b)² = −θ² + 2θb − b², up to a constant, gives b = ½ (y + µ_0). Hence

p(θ | y) ∝ exp(−(θ − ½ (y + µ_0))²),

so p(θ | y) is normally distributed with mean ½ (y + µ_0) and variance ½. We can now move to a more realistic example. The normal prior is referred to as a conjugate prior since it results in a posterior density from the same class of distributions.

5.1 Simple Linear Model

Consider the simple linear model y_i = x_i β + e_i, where e_i ∼ N(0, σ²). Using the conjugate priors β ∼ N(β_0, B_0) and σ² ∼ IG(υ_0/2, δ_0/2), we can derive the posterior conditional densities. The posterior of β is normally distributed,

β | y, σ² ∼ N(β̂, B), where β̂ = B (B_0⁻¹ β_0 + Σ_{i=1}^N x_i y_i / σ²) and B = (B_0⁻¹ + Σ_{i=1}^N x_i′ x_i / σ²)⁻¹,

and the posterior of σ² is inverse gamma distributed,

σ² | y, β ∼ IG((υ_0 + N)/2, (δ_0 + SSE)/2), where SSE = Σ_{i=1}^N (y_i − x_i β)², i.e. the sum of squared errors.

5.2 More Complex Models

A key advantage of MCMC is that models can be built up in a simple stepwise fashion. Suppose, for example, that instead of a continuous dependent variable we have binary outcomes. Such data are typically analysed with a probit model. Specifically, z_i = x_i β + e_i where e_i ∼ N(0, 1), with y_i = 1 if z_i > 0 and y_i = 0 if z_i < 0. The variable z_i is referred to as a latent variable, since we never actually observe it.

The standard approach to estimating such a model is to integrate out the latent variable and then apply maximum likelihood. A simple MCMC approach instead utilizes a data augmentation technique (Tanner and Wong, 198?). If we knew the values of these latent data, then we could simulate the β's just as we did in the OLS model above. Although we do not care directly about the latent data, we can simulate them, and the probit model tells us their distribution. Specifically, if y_i = 1 then z_i has a truncated normal distribution on [0, +∞) with mean x_i β and variance 1: z_i ∼ TN_[0,+∞)(x_i β, 1). Similarly, if y_i = 0 then we know that the corresponding latent variable lies between −∞ and 0: z_i ∼ TN_(−∞,0](x_i β, 1). We now have the tools to implement this model. Let Z refer to the set of latent data (i.e. all the z_i's).

Algorithm to calculate the marginal density of β in a probit model:

1. Set t = 0 and choose starting values, β^(0) and Z^(0).
2. Calculate the posterior density of β given Z^(t), i.e. f(β | Z^(t)).
3. Randomly draw β^(t+1) from this distribution.
4. Calculate the posterior density of Z given β^(t+1), i.e. f(Z | Y, β^(t+1)).
5. Randomly draw Z^(t+1) from f(Z | Y, β^(t+1)).
6. Let t = t + 1 and go to step 2.

The key simplification here is that, given Z, the posterior distribution of β is independent of the binary observed dependent variable. While in this context the MLE approach provides highly reliable estimates, in more complex models, such as multivariate, multinomial, or censored discrete choice models, MLE is less reliable. MCMC provides a powerful tool in these cases, being easy to program and less prone to the convergence failures of MLE.

The construction of an MCMC sampler can be done piecewise. For example, the OLS code above will estimate the probit model with two additions. First, set the variance, σ², equal to one. Second, add a step to draw the latent data, Z. This is easily achieved using the following simulation: if z is a truncated normal variable with mean xβ and variance 1, restricted to the range p to q, then the following readily provides a method to generate a random sample from the distribution of z.

If x ∼ TN_[p,q](µ, σ²) and u is a uniform random number on [0, 1], then

x = µ + σ Φ⁻¹( Φ((p − µ)/σ) + u [Φ((q − µ)/σ) − Φ((p − µ)/σ)] )

represents a random draw of x.
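A minimal sketch of this inverse-CDF draw, and of its use for the probit latent data, is given below. The function names and parameter values are illustrative; scipy's norm.cdf and norm.ppf supply Φ and Φ⁻¹.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def draw_truncated_normal(mu, sigma, p, q):
    """Draw from N(mu, sigma^2) truncated to [p, q] via the inverse-CDF formula."""
    u = rng.uniform()
    lo = norm.cdf((p - mu) / sigma)
    hi = norm.cdf((q - mu) / sigma)
    return mu + sigma * norm.ppf(lo + u * (hi - lo))

def draw_latent(y, xb):
    """Probit data augmentation step: draw each latent z_i given y_i and x_i*beta."""
    z = np.empty_like(xb)
    for i in range(len(xb)):
        if y[i] == 1:
            z[i] = draw_truncated_normal(xb[i], 1.0, 0.0, np.inf)    # z_i in [0, +inf)
        else:
            z[i] = draw_truncated_normal(xb[i], 1.0, -np.inf, 0.0)   # z_i in (-inf, 0]
    return z

# Illustrative use with hypothetical values of y and x*beta.
y = np.array([1, 0, 1])
xb = np.array([0.4, -0.2, 1.1])
print(draw_latent(y, xb))
```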
