Multi-parameter MCMC notes by Mark Holder


Review

In the last lecture we justified the Metropolis-Hastings algorithm as a means of constructing a Markov chain with a stationary distribution that is identical to the posterior probability distribution. We found that if you propose a move from state $j$ to state $k$ with proposal probability denoted $q(j,k)$, then you can use the following rule to calculate an acceptance probability:

$$\alpha(j,k) = \min\left[1,\; \left(\frac{P(D \mid \theta_k)}{P(D \mid \theta_j)}\right) \left(\frac{P(\theta_k)}{P(\theta_j)}\right) \left(\frac{q(k,j)}{q(j,k)}\right)\right]$$

To get the probability of moving, we have to multiply the proposal probability by the acceptance probability:

$$m_{j,k} \equiv P(x_{i+1} = k \mid x_i = j) = q(j,k)\,\alpha(j,k)$$

If $\alpha(j,k) < 1$ then $\alpha(k,j) = 1$. In this case:

$$\frac{\alpha(j,k)}{\alpha(k,j)} = \left(\frac{P(D \mid \theta_k)}{P(D \mid \theta_j)}\right) \left(\frac{P(\theta_k)}{P(\theta_j)}\right) \left(\frac{q(k,j)}{q(j,k)}\right)$$

Thus, the ratio of these two transition probabilities of the Markov chain is:

$$\frac{m_{j,k}}{m_{k,j}} = \frac{q(j,k)\,\alpha(j,k)}{q(k,j)\,\alpha(k,j)} = \left(\frac{q(j,k)}{q(k,j)}\right) \left(\frac{P(D \mid \theta_k)}{P(D \mid \theta_j)}\right) \left(\frac{P(\theta_k)}{P(\theta_j)}\right) \left(\frac{q(k,j)}{q(j,k)}\right) = \left(\frac{P(D \mid \theta_k)}{P(D \mid \theta_j)}\right) \left(\frac{P(\theta_k)}{P(\theta_j)}\right)$$

If we recall that, under detailed balance, we have

$$\pi_j\, m_{j,k} = \pi_k\, m_{k,j},$$

we see that we have constructed a chain whose stationary distribution is proportional to the posterior probability distribution.
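To make the acceptance rule concrete, here is a minimal sketch (not part of the original notes) of a single Metropolis-Hastings update. It works on the log scale to avoid numerical underflow; `log_like`, `log_prior`, and `propose` are assumed, user-supplied functions, with `propose` returning the proposed state together with the forward and reverse log proposal densities.

```python
import math
import random

def mh_step(theta, log_like, log_prior, propose):
    """One Metropolis-Hastings update of the state `theta`."""
    theta_star, log_q_fwd, log_q_rev = propose(theta)
    # log of: (likelihood ratio) * (prior ratio) * (Hastings ratio)
    log_alpha = (log_like(theta_star) - log_like(theta)
                 + log_prior(theta_star) - log_prior(theta)
                 + log_q_rev - log_q_fwd)
    # Accept with probability min(1, exp(log_alpha)).
    if math.log(random.random()) < min(0.0, log_alpha):
        return theta_star  # move to the proposed state
    return theta           # reject; the chain stays where it was
```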

Convergence

We can (sometimes) diagnose failure-to-converge by comparing the results of separate MCMC simulations. If all seems to be working, then we would like to treat the sampled points from the MCMC simulation as if they were draws from the posterior probability distribution over the parameters. Unfortunately, samples from our MCMC approximation to the posterior will display autocorrelation. We can calculate an effective sample size by dividing the number of sampled points by the autocorrelation time. The CODA package in R provides several useful tools for diagnosing problems with MCMC convergence.

Multi-parameter inference

In the simple example discussed in the last lecture, θ could only take one of 5 values. In general, our models have multiple, continuous parameters. We can adapt the acceptance rules to continuous parameters by using a Hastings ratio that is a ratio of proposal densities (and we'd also have a ratio of prior probability densities). Adapting MCMC to multi-parameter problems is also simple. Because of co-linearity between parameters, it may be most effective to design proposals that change multiple parameters in one step, but this is not necessary. If we update each parameter separately, we can still construct a valid chain. In doing this, we are effectively sampling the $m$-th parameter from $P(\theta_m \mid \text{Data}, \theta_{-m})$, where $\theta_{-m}$ denotes the vector of parameters without parameter $m$ included. Having a marginal distribution is not enough to reconstruct a joint distribution, but here we have the distribution on $\theta_m$ for every possible value of the other parameters. So we are effectively sampling from the joint distribution; we are just updating one parameter at a time.

GLM in Bayesian world

Consider a data set of many full sibs reared under two diets (normal = 0, and unlimited = 1). We measure snout-vent length (SVL) for a bunch of geckos. Our model is:

- There is an unknown mean SVL under the normal diet, $\alpha_0$.
- $\alpha_1$ is the mean SVL of an infinitely-large independent sample under the unlimited diet.
- Each family, $j$, will have a mean effect, $B_j$. This effect gets added to the diet-based mean, regardless of diet.
- Each family, $j$, will have a mean response to the unlimited diet, $C_{j1}$. This effect is only added to individuals on the unlimited diet. For notational convenience, we can simply define $C_{j0} = 0$ for all families.
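As a quick illustration of this mean structure, here is a sketch (with hypothetical argument names; not code from the notes) of the expected value for a single individual:

```python
def expected_svl(i, j, alpha, B, C1):
    """Expected SVL for an individual in family j under diet i
    (0 = normal, 1 = unlimited): the diet mean alpha[i], plus the
    family effect B[j], plus the family's response to the unlimited
    diet C1[j], which only applies when i == 1 (C_j0 is defined as 0)."""
    return alpha[i] + B[j] + (C1[j] if i == 1 else 0.0)
```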

The SVL of each individual is expected to be normally distributed around its expected value; the difference between a response and its expected value is $\epsilon_{ijk}$. To complete the likelihood model, we have to say something about the probability distributions that govern the random effects:

$$B_j \sim \mathcal{N}(0, \sigma_B^2) \qquad C_{j1} \sim \mathcal{N}(0, \sigma_C^2) \qquad \epsilon_{ijk} \sim \mathcal{N}(0, \sigma_e^2)$$

In our previous approach, we could do a hypothesis test such as $H_0: \alpha_0 = \alpha_1$, or we could generate point estimates. That is OK, but what if we want to answer questions such as "What is the probability that $\alpha_1 - \alpha_0 > 0.5$ mm?" Could we: reparameterize to $\delta_1 = \alpha_1 - \alpha_0$; construct an $x$% confidence interval; search for the largest value of $x$ such that 0 is not included in the confidence interval; and then do something like $P(\alpha_1 - \alpha_0 > 0.5) = (1 - x)/2$, or something like that? No! That is not a correct interpretation of a $P$-value or a confidence interval!

If we conduct Bayesian inference on this model, we can estimate the joint posterior probability distribution over all of the parameters. From this distribution we can calculate $P(\alpha_1 - \alpha_0 > 0.5 \mid X)$ by integrating:

$$P(\alpha_1 - \alpha_0 > 0.5 \mid X) = \int_{-\infty}^{\infty} \left( \int_{\alpha_0 + 0.5}^{\infty} p(\alpha_0, \alpha_1 \mid X)\, d\alpha_1 \right) d\alpha_0$$

From MCMC output we can count the fraction of the sampled points for which $\alpha_1 - \alpha_0 > 0.5$, and use this fraction as an estimate of the probability. Note that it is hard to estimate very small probabilities accurately using a simulation-based approach. But you can often get a very reasonable estimate of the probability of some complicated, joint statement about the parameters by simply counting the fraction of MCMC samples that satisfy the statement.
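For instance, a minimal sketch of this counting estimate, assuming hypothetical arrays of post-burn-in MCMC output (one entry per retained iteration; the file names are placeholders):

```python
import numpy as np

# Hypothetical MCMC output; in practice these would be loaded from
# whatever files your sampler wrote.
alpha0_samples = np.loadtxt("alpha0_samples.txt")
alpha1_samples = np.loadtxt("alpha1_samples.txt")

# Fraction of sampled points satisfying the joint statement.
prob = np.mean(alpha1_samples - alpha0_samples > 0.5)
print(f"Estimated P(alpha1 - alpha0 > 0.5 | X) = {prob:.4f}")
```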

Definitions of probability

In the frequency (or "frequentist") definition, the probability of an event is the fraction of times the event would occur if you were able to repeat the trial an infinitely large number of times. Probability is defined in terms of long-run behavior and repeated experiments. Bayesians accept that this is correct (if we say that the probability of heads is 0.6 for a coin, then a Bayesian would expect the fraction of heads in a very large experiment to approach 60%), but Bayesians also use probability to quantify uncertainty even in circumstances in which it does not seem reasonable to think of repeated trials. The classic example is the prior probability assigned to a parameter value. A parameter in a model can be a fixed, but unknown, constant in nature. But even if there can be only one true value, Bayesians will use probability statements to represent degree of belief: low probabilities are assigned to values that seem unlikely. Probability statements can be based on vague beliefs (as is often the case with priors), or they can be informed by previous data.

Fixed vs random effects in the GLM model

In order to perform Bayesian inference on the GLM model sketched out above, we need to specify prior distributions on $\alpha_0$ and $\alpha_1$. If we knew that the geckos tended to be around 5 cm long (SVL), then we might use priors like this:

$$\alpha_0 \sim \text{Gamma}(\alpha = 10, \beta = 2) \qquad \delta_1 \sim \text{Normal}(\mu = 1, \sigma^2 = 10)$$

where $\text{Gamma}(\alpha, \beta)$ is a Gamma distribution with mean $\alpha/\beta$ and variance $\alpha/\beta^2$.

Now that we have a prior for all of the parameters, we may notice that the distinction between random effects and a vector of parameters becomes blurry. Often we can implement a sampler more easily in a Bayesian context if we simply model the outcomes of random processes that we don't observe. Variables that are the unobserved results of the action of our model are referred to as latent variables. When we conduct inference without imputing values for the latent variables, we are effectively integrating them out of the calculations. If we choose, we may prefer to use MCMC to integrate over some of the latent variables. When we do this, the latent variables look like parameters in the model: we calculate a likelihood ratio and a prior ratio for them whenever they need to be updated.¹

In the ML approach to the GLM, we did not try to estimate the random effects. We simply recognized that the presence of a family effect, for example, would cause a non-zero covariance. In essence we were integrating over the possible values of each family's effect, and just picking up on the degree to which within-family comparisons were non-independent because they shared some unknown parameters. In MCMC, it is also easy to simply treat the value of each $B_j$ and $C_{j1}$ element as a latent variable. The priors would be the Normal distributions that we outlined above, and

$$y_{ijk} = \alpha_0 + i\delta_1 + B_j + C_{ji} + \epsilon_{ijk} \quad\text{or}\quad y_{ijk} \sim \mathcal{N}(\alpha_0 + i\delta_1 + B_j + C_{ji},\; \sigma_e^2),$$

where $y_{ijk}$ denotes the SVL of individual $k$ of family $j$ under diet $i$.

¹ The distinction between a parameter and a latent variable can be subtle: when you reduce the model to the minimal statement about unknown quantities, you are dealing with the parameters and their priors. You can add latent variables to a model specification, but when you do this the prior for the latent variables comes from the parameters, existing priors, and other latent variables. So you don't have to specify a new prior for a latent variable; it falls out of the model.
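Putting the pieces together, here is a sketch that simulates one data set under this model. Every numeric value below is made up for illustration (nothing here comes from the notes), and the σ's are treated as standard deviations:

```python
import numpy as np

rng = np.random.default_rng(42)

n_families, n_per_cell = 20, 5            # hypothetical design
alpha0, delta1 = 5.0, 1.0                 # made-up "true" values (cm)
sigma_B, sigma_C, sigma_e = 0.5, 0.3, 0.2

B = rng.normal(0.0, sigma_B, n_families)   # family effects, B_j
C1 = rng.normal(0.0, sigma_C, n_families)  # diet-1 responses, C_j1

# y_ijk ~ Normal(alpha0 + i*delta1 + B_j + C_ji, sigma_e^2)
y = {
    (i, j): rng.normal(alpha0 + i * delta1 + B[j] + (C1[j] if i else 0.0),
                       sigma_e, n_per_cell)
    for i in (0, 1)            # diet
    for j in range(n_families) # family
}
```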

The model implies that $f(y_{ijk} \mid \alpha_0, \delta_1, B_j, C_{ji}) = f(\epsilon_{ijk})$, and we can calculate this density by:

$$f(y_{ijk} \mid \alpha_0, \delta_1, B_j, C_{ji}) = \frac{1}{\sqrt{2\pi\sigma_e^2}} \exp\left[-\frac{(y_{ijk} - \alpha_0 - i\delta_1 - B_j - C_{ji})^2}{2\sigma_e^2}\right]$$

Note that $\delta_1$ only appears in the likelihood for geckos on diet 1, so updating $\delta_1$ will not change many of the terms in the log-likelihood. We only need to calculate the log of the likelihood ratio, so we can just ignore the terms in which $\delta_1$ does not occur when we want to update this parameter. This type of optimization can speed up the likelihood calculations dramatically. Specifically, we have terms like

$$\ln f(y_{0jk} \mid \alpha_0, B_j) = -\frac{1}{2}\ln\left[2\pi\sigma_e^2\right] - \frac{(y_{0jk} - \alpha_0 - B_j)^2}{2\sigma_e^2}$$

and

$$\ln f(y_{1jk} \mid \alpha_0, \delta_1, B_j, C_{j1}) = -\frac{1}{2}\ln\left[2\pi\sigma_e^2\right] - \frac{(y_{1jk} - \alpha_0 - \delta_1 - B_j - C_{j1})^2}{2\sigma_e^2}$$

in the log-likelihoods. If we are updating $\alpha_0 \to \alpha_0^*$:

$$\begin{aligned}
\ln LR(y_{0jk}) &= \ln f(y_{0jk} \mid \alpha_0^*, \delta_1, B_j, C_{j1}, \sigma_e) - \ln f(y_{0jk} \mid \alpha_0, \delta_1, B_j, C_{j1}, \sigma_e)\\
&= -\frac{1}{2\sigma_e^2}\left[(y_{0jk} - \alpha_0^* - B_j)^2 - (y_{0jk} - \alpha_0 - B_j)^2\right]\\
&= \frac{1}{2\sigma_e^2}\left[2(y_{0jk} - B_j)(\alpha_0^* - \alpha_0) + \alpha_0^2 - (\alpha_0^*)^2\right]
\end{aligned}$$

Similar algebra leads to:

$$\ln LR(y_{1jk}) = \frac{1}{2\sigma_e^2}\left[2(y_{1jk} - \delta_1 - B_j - C_{j1})(\alpha_0^* - \alpha_0) + \alpha_0^2 - (\alpha_0^*)^2\right]$$

Each of the data points is independent, conditional on all of the latent variables, so:

$$\ln LR = \sum_j \sum_k \ln LR(y_{0jk}) + \sum_j \sum_k \ln LR(y_{1jk})$$
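A direct transcription of the per-observation ratio (a sketch; `i` is the diet indicator, and because $C_{j0} = 0$, multiplying by `i` removes the diet-1 terms for diet-0 observations):

```python
def log_lr_alpha0_one_obs(y, i, delta1, B_j, C_j1,
                          alpha0, alpha0_star, sigma_e):
    """ln LR(y_ijk) for the proposal alpha0 -> alpha0_star, following
    the algebra above; everything except alpha0 enters only through
    the partial residual u = y - i*delta1 - B_j - C_ji."""
    u = y - i * (delta1 + C_j1) - B_j
    return (2.0 * u * (alpha0_star - alpha0)
            + alpha0**2 - alpha0_star**2) / (2.0 * sigma_e**2)
```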

It is helpful to introduce some named variables that are simply convenient functions of the data or the parameters:

- $n$ = total number of individuals
- $n_0$ = number of individuals in treatment 0
- $n_1$ = number of individuals in treatment 1
- $f$ = number of families
- $B^* = \sum_i \sum_j \sum_k B_j$ (the family effects summed over all individuals)
- $D_B = \sum_j B_j^2$
- $B_1^* = \sum_j \sum_k B_j$ (summing only over the individuals in treatment 1)
- $C_1^* = \sum_j \sum_k C_{j1}$ (again, summing over the individuals in treatment 1)
- $y_0^* = \sum_j \sum_k y_{0jk}$
- $y_1^* = \sum_j \sum_k y_{1jk}$
- $y^* = y_0^* + y_1^*$
- $R = \sum_j \sum_k (y_{0jk} - \alpha_0 - B_j)^2 + \sum_j \sum_k (y_{1jk} - \alpha_0 - \delta_1 - B_j - C_{j1})^2$ (the sum of the squared residuals)

Updating α0

Thus, the log-likelihood ratio for $\alpha_0 \to \alpha_0^*$ is:

$$\ln LR = \frac{1}{2\sigma_e^2}\left\{ n\left[\alpha_0^2 - (\alpha_0^*)^2\right] + 2\left[y^* - n_1\delta_1 - B^* - C_1^*\right](\alpha_0^* - \alpha_0) \right\}$$

Note that calculating this log-likelihood ratio is very fast. It is easy to keep the various starred sums up to date when the $B_j$ and $C_{j1}$ elements change, so evaluating the likelihood ratio for proposals to $\alpha_0$ does not even involve iterating through every data point.

Updating δ1

If we are just updating $\delta_1$, the log-likelihood ratio has even fewer terms, because more constants drop out in the subtraction:

$$\ln LR(y_{1jk}) = \frac{1}{2\sigma_e^2}\left[2(y_{1jk} - \alpha_0 - B_j - C_{j1})(\delta_1^* - \delta_1) + \delta_1^2 - (\delta_1^*)^2\right]$$

$$\ln LR = \frac{1}{2\sigma_e^2}\left\{ n_1\left[\delta_1^2 - (\delta_1^*)^2\right] + 2\left[y_1^* - n_1\alpha_0 - B_1^* - C_1^*\right](\delta_1^* - \delta_1) \right\}$$
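Using the starred sums, these updates collapse to constant-time arithmetic. A sketch of the α0 version (the δ1 version is analogous, swapping in $n_1$, $y_1^*$, and $B_1^*$):

```python
def log_lr_alpha0(alpha0, alpha0_star, n, n1, delta1,
                  y_star, B_star, C1_star, sigma_e):
    """Aggregate ln LR for alpha0 -> alpha0_star.  No pass over the
    raw data is needed, only the running sums defined above."""
    resid_sum = y_star - n1 * delta1 - B_star - C1_star
    return (n * (alpha0**2 - alpha0_star**2)
            + 2.0 * resid_sum * (alpha0_star - alpha0)) / (2.0 * sigma_e**2)
```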

Updating σB or σC

If we want to update $\sigma_B$, and we have a set of latent variables, then we only have to consider the portion of the likelihood that comes from the latent variables (we don't have to consider the data):

$$\begin{aligned}
\ln LR &= \sum_j \left( -\frac{1}{2}\ln\left[2\pi(\sigma_B^*)^2\right] - \frac{B_j^2}{2(\sigma_B^*)^2} \right) - \sum_j \left( -\frac{1}{2}\ln\left[2\pi\sigma_B^2\right] - \frac{B_j^2}{2\sigma_B^2} \right)\\
&= f \ln\left[\frac{\sigma_B}{\sigma_B^*}\right] + \frac{D_B}{2}\left[\frac{1}{\sigma_B^2} - \frac{1}{(\sigma_B^*)^2}\right]
\end{aligned}$$

The corresponding formula for updating $\sigma_C$ would use the $C_{j1}$ as the variates that depend on the variance parameter.

Updating σe

Updating $\sigma_e$ entails summing the effect of all of the residuals, and this is the only parameter whose update requires iterating over all of the data points:

$$\ln LR = n \ln\left[\frac{\sigma_e}{\sigma_e^*}\right] + \frac{R}{2}\left[\frac{1}{\sigma_e^2} - \frac{1}{(\sigma_e^*)^2}\right]$$

Updating Bj

Updating a family effect is much like updating another mean effect, except that (of course) it only affects the likelihood of one family (denoted family $j$), but it affects both treatments:

$$\ln LR = \frac{1}{2\sigma_e^2}\left\{ (n_{j0} + n_{j1})\left[B_j^2 - (B_j^*)^2\right] + 2\left[y_j^* - (n_{j0} + n_{j1})\alpha_0 - n_{j1}(\delta_1 + C_{j1})\right](B_j^* - B_j) \right\}$$

where $n_{j0}$ and $n_{j1}$ are the numbers of family-$j$ individuals in the two treatments and $y_j^*$ is the sum of their SVLs across both treatments.

Updating Cj1

Updating a $C_{j1}$ variable only affects the treatment-1 individuals in one family (denoted family $j$):

$$\ln LR = \frac{1}{2\sigma_e^2}\left\{ n_{j1}\left[C_{j1}^2 - (C_{j1}^*)^2\right] + 2\left[y_{j1}^* - n_{j1}(\alpha_0 + \delta_1 + B_j)\right](C_{j1}^* - C_{j1}) \right\}$$

where $y_{j1}^*$ is the sum of the SVLs of the treatment-1 individuals in family $j$.
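All of these updates reduce to a few arithmetic operations once the running sums are in hand. For example, a sketch of the σB update (σC and σe have exactly the same shape, substituting the appropriate count and sum of squares):

```python
import math

def log_lr_sigma_B(sigma_B, sigma_B_star, f, D_B):
    """ln LR for sigma_B -> sigma_B_star, where f is the number of
    families and D_B the sum of squared family effects defined above."""
    return (f * math.log(sigma_B / sigma_B_star)
            + 0.5 * D_B * (1.0 / sigma_B**2 - 1.0 / sigma_B_star**2))
```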

Latent variable mixing

The downside of introducing latent variables is that our chain has to sample over them (update them). The equation for updating $B_j$ looks a lot like the update of $\delta_1$, but we only have to perform the summation over family $j$. In essence, introducing latent variables lets us do calculations of this form:

$$P(\alpha_0, \delta_1, B, C_1, \sigma_B, \sigma_C, \sigma_e \mid Y) = \frac{P(Y \mid \alpha_0, \delta_1, B, C_1, \sigma_e)\, P(B \mid \sigma_B)\, P(C_1 \mid \sigma_C)\, P(\alpha_0, \delta_1, \sigma_B, \sigma_C, \sigma_e)}{P(Y)}$$

(here $B$ and $C_1$ denote the full vectors of family effects), and then use MCMC to integrate over $B$ and $C_1$ to obtain:

$$P(\alpha_0, \delta_1, \sigma_B, \sigma_C, \sigma_e \mid Y) = \int\!\!\int P(\alpha_0, \delta_1, B, C_1, \sigma_B, \sigma_C, \sigma_e \mid Y)\, dB\, dC_1$$

Marginalizing over a parameter is easy: you simply ignore it when summarizing the MCMC output.
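In code, marginalizing really is just ignoring columns. A sketch, assuming a hypothetical array with one row per retained MCMC sample and one column per parameter or latent variable (the file name and column order are placeholders):

```python
import numpy as np

samples = np.loadtxt("mcmc_samples.txt")  # placeholder file name
alpha0 = samples[:, 0]                    # assumed column order

# Summaries of the marginal posterior of alpha0: every other column
# (including all the B_j and C_j1 latent variables) is simply ignored.
print("posterior mean of alpha0:", alpha0.mean())
print("95% credible interval:", np.percentile(alpha0, [2.5, 97.5]))
```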

