Bayesian Statistics in One Hour. Patrick Lam

2 Outline: Introduction; Bayesian Models; Applications (Missing Data, Hierarchical Models)

4 References:
- Western, Bruce and Simon Jackman. 1994. "Bayesian Inference for Comparative Research." American Political Science Review 88(2).
- Jackman, Simon. 2000. "Estimation and Inference via Bayesian Simulation: An Introduction to Markov Chain Monte Carlo." American Journal of Political Science 44(2).
- Jackman, Simon. 2000. "Estimation and Inference Are Missing Data Problems: Unifying Social Science Statistics via Bayesian Simulation." Political Analysis 8(4).

5 Introduction. Three general approaches to statistics:
- frequentist (Neyman-Pearson, hypothesis testing)
- likelihood (what we've been learning all semester)
- Bayesian
Today's goal: contrast {frequentist, likelihood} with Bayesian, with emphasis on Bayesian versus likelihood. We'll go over some of the Bayesian critiques of non-Bayesian analysis, and some non-Bayesian critiques of Bayesian analysis.

6 Probability. Objective view of probability (non-Bayesian): the relative frequency of an outcome of an experiment over repeated runs of the experiment, or the observed proportion in a population. Subjective view of probability (Bayesian): an individual's degree of belief in a statement, defined personally (how much money would you wager on an outcome?) and influenced in many ways (personal beliefs, prior evidence). Bayesian statistics is convenient because it does not require repeated sampling or large-n assumptions.

7 Maximum Likelihood

p(θ|y) = p(y|θ)p(θ) / p(y) = p(y|θ)k(y) ∝ p(y|θ)

L(θ|y) = p(y|θ)

There is a fixed, true value of θ. We maximize the likelihood to estimate θ, and we make assumptions to generate uncertainty about our estimate of θ.

8 Bayesian

p(θ|y) = p(y|θ)p(θ) / p(y) ∝ p(y|θ)p(θ)

Here θ is a random variable, under either of two interpretations: θ is stochastic and changes from time to time, or θ is truly fixed but we want to reflect our uncertainty about it. We have a prior subjective belief about θ, which we update with the data to form posterior beliefs about θ. The posterior is a probability distribution that must integrate to 1. The prior is usually a probability distribution that integrates to 1 (a proper prior).

9 θ as Fixed versus as a Random Variable

Non-Bayesian approach (θ fixed): estimate θ with measures of uncertainty (SEs, CIs).
- 95% confidence interval: 95% of the time, θ is in the 95% interval that is estimated each time.
- P(θ ∈ 95% CI) = 0 or 1
- P(θ > 2) = 0 or 1

Bayesian approach (θ random): find the posterior distribution of θ and take quantities of interest from it (posterior mean, posterior SD, posterior credible intervals). We can make probability statements regarding θ.
- 95% credible interval: P(θ ∈ 95% CI) = 0.95
- P(θ > 2) ∈ (0, 1)

10 Critiques. Posterior ∝ Evidence × Prior.
NB: Bayesians introduce priors that are not justifiable.
B: Non-Bayesians are just doing Bayesian statistics with uninformative priors, which may be equally unjustifiable.
NB: Unjustified Bayesian priors are driving the results.
B: Bayesian results converge to non-Bayesian results as n gets larger (the data overwhelm the prior).
NB: Bayesian is too hard. Why use it?
B: Bayesian methods allow us to estimate models that are too hard to estimate with non-Bayesian methods (we cannot computationally find the MLE) or that are unidentified (no unique MLE exists). Bayesian methods also allow us to incorporate prior/qualitative information into the model.

11 Outline: Introduction; Bayesian Models; Applications (Missing Data, Hierarchical Models)

12 Running a Model

Non-Bayesian:
1. Specify a probability model (a distribution for Y).
2. Find the MLE θ̂ and measures of uncertainty (SE, CI). Assume θ̂ follows a (multivariate) normal distribution.
3. Estimate quantities of interest analytically or via simulation.

Bayesian:
1. Specify a probability model (a distribution for Y and priors on θ).
2. Solve for the posterior and summarize it (mean, SD, credible interval, etc.), analytically or via simulation.
3. Estimate quantities of interest analytically or via simulation.

There is a Bayesian way to do any non-Bayesian parametric model.

13 A Simple (Beta-Binomial) Model. The Los Angeles Lakers play 82 games during a regular NBA season. In the season, they won 65 games. Suppose the Lakers win each game with probability π. Estimate π. We have 82 Bernoulli observations, or one observation Y ∼ Binomial(n, π) with n = 82. Assumptions: each game is a Bernoulli trial; the Lakers have the same probability of winning each game; the outcomes of the games are independent.

14 We can use the beta distribution as a prior for π since it has support over [0, 1].

p(π|y) ∝ p(y|π)p(π) = Binomial(n, π) × Beta(α, β)
       = [(n choose y) π^y (1 − π)^(n−y)] × [Γ(α + β)/(Γ(α)Γ(β)) π^(α−1) (1 − π)^(β−1)]
       ∝ π^y (1 − π)^(n−y) π^(α−1) (1 − π)^(β−1)

p(π|y) ∝ π^(y+α−1) (1 − π)^(n−y+β−1)

The posterior distribution is simply a Beta(y + α, n − y + β) distribution. Effectively, our prior just adds α − 1 successes and β − 1 failures to the dataset: Bayesian priors are like adding pseudo-observations to the data.

15 Since we know the posterior is a Beta(y + α, n − y + β) distribution, we can summarize it analytically or via simulation with the following quantities: posterior mean, posterior standard deviation, posterior credible intervals (credible sets), and the highest posterior density region.
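
These summaries are easy to compute once we know the posterior. A minimal sketch in Python (not from the slides; the flat Beta(1,1) prior and the simulation sizes are my choices), using the Lakers data y = 65, n = 82:

```python
import random
from math import sqrt

# Lakers example: y = 65 wins in n = 82 games, flat Beta(1, 1) prior,
# so the posterior is Beta(y + 1, n - y + 1) = Beta(66, 18).
y, n = 65, 82
alpha, beta = 1, 1
a_post, b_post = y + alpha, n - y + beta

# Analytic mean and SD of a Beta(a, b) distribution.
post_mean = a_post / (a_post + b_post)
post_sd = sqrt(a_post * b_post /
               ((a_post + b_post) ** 2 * (a_post + b_post + 1)))

# 95% credible interval via simulation from the known posterior.
random.seed(42)
draws = sorted(random.betavariate(a_post, b_post) for _ in range(100_000))
ci_lo, ci_hi = draws[2_500], draws[97_500]

print(f"posterior mean {post_mean:.3f}, sd {post_sd:.3f}")
print(f"95% credible interval ({ci_lo:.3f}, {ci_hi:.3f})")
```

The credible interval here has the direct probability interpretation from slide 9: P(π ∈ interval | y) ≈ 0.95.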

16 [Figure: four panels showing the prior, data (MLE), and posterior densities for π, under an uninformative Beta(1,1) prior and an informative Beta(2,12) prior, each at the original sample size and at n = 1000.]

17 In the previous model, we had

Beta(y + α, n − y + β) = Binomial(n, π) × Beta(α, β) / p(y)

We knew that the likelihood × prior produced something that looked like a Beta distribution up to a constant of proportionality. Since the posterior must be a probability distribution, we know that it is a Beta distribution, and we can easily solve for the normalizing constant (although we don't need to, since we already have the posterior). When the posterior is in the same distribution family as the prior, we have conjugacy. Conjugate models are great because we can find the exact posterior, but we almost never have conjugacy except in very simple models.

18 What Happens When We Don't Have Conjugacy? Consider a Poisson regression model with a Normal prior on β:

p(β|y) ∝ [∏_{i=1}^n Poisson(λ_i)] × Normal(µ, Σ),   λ_i = exp(x_i β)

p(β|y) ∝ [∏_{i=1}^n exp(−e^{x_i β}) exp(x_i β)^{y_i} / y_i!] × (2π)^{−d/2} |Σ|^{−1/2} exp(−(1/2)(β − µ)^T Σ^{−1} (β − µ))

Likelihood × prior doesn't look like any distribution we know (non-conjugacy), and the normalizing constant is too hard to find, so how do we find our posterior?
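
Even without the normalizing constant, we can still evaluate the posterior up to proportionality at any β, which is all that MCMC will need. A sketch (not from the slides) with made-up data, simplifying the prior to independent Normal(0, s2) coefficients rather than a general Normal(µ, Σ):

```python
from math import exp, lgamma, log, pi

# Unnormalized log-posterior for a Poisson regression. The design matrix
# rows, counts, and prior variance below are illustrative assumptions.
x_rows = [[1.0, 0.5], [1.0, -1.2], [1.0, 0.3], [1.0, 2.0]]   # rows x_i
counts = [2, 0, 1, 6]                                         # outcomes y_i
s2 = 10.0                                                     # prior variance

def log_posterior(beta):
    lp = 0.0
    for xi, yi in zip(x_rows, counts):
        eta = sum(b * v for b, v in zip(beta, xi))   # linear predictor x_i beta
        lp += yi * eta - exp(eta) - lgamma(yi + 1)   # Poisson log-likelihood
    for b in beta:                                    # Normal(0, s2) log-prior
        lp += -0.5 * log(2 * pi * s2) - b * b / (2 * s2)
    return lp   # known only up to the additive constant -log p(y)

print(log_posterior([0.0, 0.5]))
```

The function returns log p(y|β) + log p(β); the missing −log p(y) term is the same for every β, so ratios of posterior densities (as used by Metropolis-Hastings below on slide 21) are unaffected.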

19 MCMC to the Rescue. Ideal goal: produce independent draws from our posterior distribution via simulation, and summarize the posterior using those draws. Markov chain Monte Carlo (MCMC): a class of algorithms that produce a chain of simulated draws from a distribution, where each draw depends on the previous draw. Theory: if our chain satisfies some basic conditions, then the chain will eventually converge to a stationary distribution (in our case, the posterior), and we have approximate draws from the posterior. But there is no way to know for sure whether our chain has converged.
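
One standard (if imperfect) convergence check, not covered on the slides, is the Gelman-Rubin statistic: run several chains from different starting points and compare between-chain to within-chain variance. A minimal sketch:

```python
import random
from statistics import mean, variance

# Gelman-Rubin R-hat: values near 1 are consistent with convergence,
# but never prove it.
def r_hat(chains):
    n = len(chains[0])
    w = mean(variance(c) for c in chains)        # mean within-chain variance
    b = n * variance([mean(c) for c in chains])  # between-chain variance
    var_hat = (n - 1) / n * w + b / n            # pooled variance estimate
    return (var_hat / w) ** 0.5

random.seed(1)
# Four chains all sampling the same N(0, 1) target: R-hat should be near 1.
good = [[random.gauss(0, 1) for _ in range(1000)] for _ in range(4)]
# Replace one chain with draws stuck near 5: R-hat rises well above 1.
bad = good[:3] + [[random.gauss(5, 1) for _ in range(1000)]]
print(r_hat(good), r_hat(bad))
```

If the chains disagree (R-hat well above 1), at least one has not yet reached the stationary distribution.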

20 Algorithm 1: Gibbs Sampler. Let θ^t = (θ_1^t, ..., θ_k^t) be the t-th draw of our parameter vector θ. Draw a new vector θ^{t+1} from the following distributions:

θ_1^{t+1} ∼ p(θ_1 | θ_2^t, ..., θ_k^t, y)
θ_2^{t+1} ∼ p(θ_2 | θ_1^{t+1}, θ_3^t, ..., θ_k^t, y)
...
θ_k^{t+1} ∼ p(θ_k | θ_1^{t+1}, ..., θ_{k−1}^{t+1}, y)

Repeat m times to get m draws of our parameters from the approximate posterior (assuming convergence). This requires that we know the conditional distribution for each θ. What if we don't?
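
The scheme above can be sketched on a toy target where both conditionals are known exactly: a bivariate normal with standard normal margins and correlation rho (my choice of example, not from the slides):

```python
import random
from statistics import mean

# Gibbs sampler for a bivariate normal target. The full conditionals are
#   theta1 | theta2 ~ N(rho * theta2, 1 - rho^2), and symmetrically.
random.seed(0)
rho = 0.8
cond_sd = (1.0 - rho ** 2) ** 0.5

t1, t2 = 0.0, 0.0                          # arbitrary starting values
draws = []
for t in range(25_000):
    t1 = random.gauss(rho * t2, cond_sd)   # draw theta1 | theta2
    t2 = random.gauss(rho * t1, cond_sd)   # draw theta2 | theta1
    if t >= 5_000:                         # discard burn-in draws
        draws.append((t1, t2))

xs = [d[0] for d in draws]
zs = [d[1] for d in draws]
mx, mz = mean(xs), mean(zs)
cov = mean((a - mx) * (b - mz) for a, b in zip(xs, zs))
corr = cov / (mean((a - mx) ** 2 for a in xs) ** 0.5
              * mean((b - mz) ** 2 for b in zs) ** 0.5)
print(f"sample correlation of the draws ≈ {corr:.2f} (target rho = 0.8)")
```

Note each update conditions on the most recent value of the other parameter, exactly as in the cycle above.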

21 Algorithm 2: Metropolis-Hastings. Draw a new vector θ^{t+1} in the following way:
1. Specify a jumping distribution J_{t+1}(θ* | θ^t) (usually a symmetric distribution such as the multivariate normal).
2. Draw a proposed parameter vector θ* from the jumping distribution.
3. Accept θ* as θ^{t+1} with probability min(r, 1), where

r = p(y|θ*)p(θ*) / [p(y|θ^t)p(θ^t)]

If θ* is rejected, then θ^{t+1} = θ^t. Repeat m times to get m draws of our parameters from the approximate posterior (assuming convergence). M-H always works, but can be very slow.
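
The three steps can be sketched on the beta-binomial posterior from slide 14, where we also know the exact answer to check against. The jump standard deviation 0.1 and the chain length are my choices, not from the slides:

```python
import math
import random
from statistics import mean

# Random-walk Metropolis for the Lakers posterior: with a flat Beta(1, 1)
# prior, the target is proportional to pi^y * (1 - pi)^(n - y), so we only
# ever need the unnormalized density in the ratio r.
y, n = 65, 82

def log_target(p):
    if not 0.0 < p < 1.0:
        return float("-inf")      # zero density outside (0, 1)
    return y * math.log(p) + (n - y) * math.log(1.0 - p)

random.seed(7)
cur, cur_lp = 0.5, log_target(0.5)
draws = []
for t in range(30_000):
    prop = random.gauss(cur, 0.1)               # symmetric jumping distribution
    prop_lp = log_target(prop)
    # accept with probability min(1, r), computed on the log scale
    if random.random() < math.exp(min(0.0, prop_lp - cur_lp)):
        cur, cur_lp = prop, prop_lp
    if t >= 5_000:                              # discard burn-in
        draws.append(cur)

print(f"M-H posterior mean ≈ {mean(draws):.3f} (conjugate answer: 66/84 ≈ 0.786)")
```

Because r is a ratio, the unknown normalizing constant p(y) cancels, which is why M-H works in the non-conjugate case.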

22 Outline: Introduction; Bayesian Models; Applications (Missing Data, Hierarchical Models)

23 Missing Data. Suppose we have missing data. Define D as our data matrix and M as our missingness matrix. [The slide shows D with columns y, x1, x2, ..., with ? marking missing cells, and M of the same dimensions with 1 indicating an observed cell and 0 a missing one.] One non-Bayesian approach to dealing with missing data is multiple imputation.

24 Multiple Imputation. We need a method for filling in the missing cells of D as a first step before we go to the analysis stage.
1. Assume a joint distribution for D_obs and M:

p(D_obs, M | φ, γ) = ∫ p(D|φ) p(M | D_obs, D_mis, γ) dD_mis

If we assume MAR (missing at random), then

p(D_obs, M | φ, γ) ∝ ∫ p(D|φ) dD_mis

25 2. Find φ, which characterizes the full D:

L(φ | D_obs) = ∏_{i=1}^n ∫ p(D_i | φ) dD_mis

How do we do this integral? Assume normality, since the marginals of a multivariate Normal are Normal:

L(µ, Σ | D_obs) = ∏_{i=1}^n N(D_{i,obs} | µ_obs, Σ_obs)

3. Find µ̂, Σ̂ and their distribution via the EM algorithm and bootstrapping.
4. Draw m values of (µ, Σ), then use them to predict values for D_mis.

26 What we end up with is m datasets with the missing values imputed. We then run our regular analyses on the m datasets and combine the results using Rubin's rule.
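
Rubin's rule pools the m point estimates and inflates the variance to account for between-imputation disagreement. A sketch with made-up numbers (the estimates and variances below are illustrative, not from the slides):

```python
from statistics import mean, variance

# Rubin's combining rules for m completed-data analyses.
def rubin_combine(estimates, variances):
    m = len(estimates)
    q_bar = mean(estimates)          # pooled point estimate
    w = mean(variances)              # within-imputation variance
    b = variance(estimates)          # between-imputation variance
    t = w + (1 + 1 / m) * b          # total variance of the pooled estimate
    return q_bar, t

q, t = rubin_combine([1.0, 1.2, 0.8], [0.04, 0.05, 0.06])
print(q, t)
```

The total variance exceeds the average within-imputation variance whenever the m analyses disagree, which is exactly the extra uncertainty that single imputation would hide.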

27 Bayesian Approach to Missing Data. Bayesian paradigm: everything unobserved is a random variable. So we can set up the missing data as a parameter that we need to find:

p(D_mis, φ, γ | D_obs, M) ∝ p(D_obs | D_mis, φ) p(D_mis | φ) p(M | D_obs, D_mis, γ) p(γ) p(φ)

If we assume MAR:

p(D_mis, φ | D_obs, M) ∝ p(D_obs | D_mis, φ) p(D_mis | φ) p(φ)

Use Gibbs sampling or M-H to sample both D_mis and φ. We don't have to assume normality of D to integrate over D_mis; we can just drop the draws of D_mis.

28 We can also incorporate both imputation and analysis in the same model:

p(θ, D_mis, φ, γ | D_obs, M) ∝ p(y_obs, y_mis | θ) p(D_obs | D_mis, φ) p(D_mis | φ) p(φ) p(θ)

Again, find the posterior via Gibbs sampling or M-H. Moral: we can easily set up an application-specific Bayesian model to incorporate missing data.

29 Multilevel Data. Suppose we have data in which i = 1, ..., n observations belong to one of j = 1, ..., J groups. Examples: students within schools; multiple observations per country; districts within states. We can have covariates on multiple levels. How do we deal with this type of data?

30 The Fixed Effects Model. We can let the intercept α vary by group:

y_i = α_{j[i]} + x_{i1} β_1 + x_{i2} β_2 + ε_i

This is known as the fixed effects model, and is equivalent to estimating dummy variables for J − 1 groups. We can use this model if we think that there is something inherent about the groups that affects our dependent variable. However, the fixed effects model involves estimating many parameters, and it cannot take into account group-level covariates.

31 Hierarchical Model. A more flexible alternative is to use a hierarchical model, also known as a multilevel model, mixed effects model, or random effects model:

y_i = α_{j[i]} + x_{i1} β_1 + x_{i2} β_2 + ε_i
α_j ∼ N(α, σ²_α)

Instead of assuming a completely different intercept for each group, we assume that the intercepts are drawn from a common (Normal) distribution.
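
The practical consequence of the common distribution is shrinkage: each group intercept is pulled toward the population mean, more strongly for small groups. A sketch of the standard precision-weighted formula for the case where the variances are treated as known (the numbers are illustrative assumptions, not from the slides):

```python
# Posterior mean of alpha_j in a random-intercept normal model with known
# within-group variance sigma2 and intercept variance sigma2_alpha:
# a precision-weighted average of the group mean and the population mean.
def pooled_intercept(ybar_j, n_j, sigma2, alpha, sigma2_alpha):
    prec_data = n_j / sigma2           # precision of the group sample mean
    prec_prior = 1.0 / sigma2_alpha    # precision of the population prior
    return (prec_data * ybar_j + prec_prior * alpha) / (prec_data + prec_prior)

# A small group (n_j = 2) is pulled strongly toward alpha = 0;
# a large group (n_j = 50) mostly keeps its own mean of 2.0.
print(pooled_intercept(2.0, 2, 4.0, 0.0, 1.0))
print(pooled_intercept(2.0, 50, 4.0, 0.0, 1.0))
```

This sits between the fixed effects model (no pooling, σ²_α → ∞) and forcing a single common intercept (complete pooling, σ²_α → 0).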

32 We can incorporate group-level covariates in the following way:

y_i = α_{j[i]} + x_{i1} β_1 + x_{i2} β_2 + ε_i
α_j ∼ N(γ_0 + u_{j1} γ_1, σ²_α)

or equivalently

y_i = α_{j[i]} + x_{i1} β_1 + x_{i2} β_2 + ε_i
α_j = γ_0 + u_{j1} γ_1 + η_j
η_j ∼ N(0, σ²_α)

This is a relatively difficult model to estimate using non-Bayesian methods; the lme4 package in R can do it.

33 Bayesian Hierarchical Model

y_i = α_{j[i]} + x_{i1} β_1 + x_{i2} β_2 + ε_i
α_j ∼ N(γ_0 + u_{j1} γ_1, σ²_α)

We can do hierarchical models easily using Bayesian methods:

p(α, β, γ | y) ∝ p(y | α, β, γ) p(α | γ) p(γ) p(β)

Solve for the joint posterior using Gibbs sampling or M-H. We can incorporate data with more than two levels easily as well.

34 Conclusion. There are pros and cons to using Bayesian statistics.
Pros: incorporate outside/prior knowledge; estimate much more difficult models; credible intervals have a more intuitive meaning; helps with unidentified models.
Cons: it's hard(er); computationally intensive; priors need to be defended; no guarantee of MCMC convergence. Statistical packages for Bayesian analysis are also less developed (MCMCpack in R, WinBUGS, JAGS).


Electronic Theses and Dissertations UC Riverside

Electronic Theses and Dissertations UC Riverside Peer Reviewed Title: Bayesian and Non-parametric Approaches to Missing Data Analysis Author: Yu, Yao Acceptance Date: 01 Series: UC Riverside Electronic

Handling attrition and non-response in longitudinal data

Longitudinal and Life Course Studies 2009 Volume 1 Issue 1 Pp 63-72 Handling attrition and non-response in longitudinal data Harvey Goldstein University of Bristol Correspondence. Professor H. Goldstein

Bayesian Statistics: Indian Buffet Process

Bayesian Statistics: Indian Buffet Process Ilker Yildirim Department of Brain and Cognitive Sciences University of Rochester Rochester, NY 14627 August 2012 Reference: Most of the material in this note

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

Beckman HLM Reading Group: Questions, Answers and Examples Carolyn J. Anderson Department of Educational Psychology I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN Linear Algebra Slide 1 of

Comparison of frequentist and Bayesian inference. Class 20, 18.05, Spring 2014 Jeremy Orloff and Jonathan Bloom

Comparison of frequentist and Bayesian inference. Class 20, 18.05, Spring 2014 Jeremy Orloff and Jonathan Bloom 1 Learning Goals 1. Be able to explain the difference between the p-value and a posterior

Big data challenges for physics in the next decades

Big data challenges for physics in the next decades David W. Hogg Center for Cosmology and Particle Physics, New York University 2012 November 09 punchlines Huge data sets create new opportunities. they

Measures of explained variance and pooling in multilevel models

Measures of explained variance and pooling in multilevel models Andrew Gelman, Columbia University Iain Pardoe, University of Oregon Contact: Iain Pardoe, Charles H. Lundquist College of Business, University

Modeling and Analysis of Call Center Arrival Data: A Bayesian Approach

Modeling and Analysis of Call Center Arrival Data: A Bayesian Approach Refik Soyer * Department of Management Science The George Washington University M. Murat Tarimcilar Department of Management Science

Objections to Bayesian statistics

Bayesian Analysis (2008) 3, Number 3, pp. 445 450 Objections to Bayesian statistics Andrew Gelman Abstract. Bayesian inference is one of the more controversial approaches to statistics. The fundamental

Computational Statistics for Big Data

Lancaster University Computational Statistics for Big Data Author: 1 Supervisors: Paul Fearnhead 1 Emily Fox 2 1 Lancaster University 2 The University of Washington September 1, 2015 Abstract The amount

Introduction to mixed model and missing data issues in longitudinal studies

Introduction to mixed model and missing data issues in longitudinal studies Hélène Jacqmin-Gadda INSERM, U897, Bordeaux, France Inserm workshop, St Raphael Outline of the talk I Introduction Mixed models

Using SAS PROC MCMC to Estimate and Evaluate Item Response Theory Models

Using SAS PROC MCMC to Estimate and Evaluate Item Response Theory Models Clement A Stone Abstract Interest in estimating item response theory (IRT) models using Bayesian methods has grown tremendously

Generalized Linear Models. Today: definition of GLM, maximum likelihood estimation. Involves choice of a link function (systematic component)

Generalized Linear Models Last time: definition of exponential family, derivation of mean and variance (memorize) Today: definition of GLM, maximum likelihood estimation Include predictors x i through

Extreme Value Modeling for Detection and Attribution of Climate Extremes

Extreme Value Modeling for Detection and Attribution of Climate Extremes Jun Yan, Yujing Jiang Joint work with Zhuo Wang, Xuebin Zhang Department of Statistics, University of Connecticut February 2, 2016

Sampling via Moment Sharing: A New Framework for Distributed Bayesian Inference for Big Data

Sampling via Moment Sharing: A New Framework for Distributed Bayesian Inference for Big Data (Oxford) in collaboration with: Minjie Xu, Jun Zhu, Bo Zhang (Tsinghua) Balaji Lakshminarayanan (Gatsby) Bayesian

MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators...

MATH4427 Notebook 2 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 2 MATH4427 Notebook 2 3 2.1 Definitions and Examples...................................