EC 6310: Advanced Econometric Theory


EC 6310: Advanced Econometric Theory, July 2008. Slides for Lecture on Bayesian Computation in the Nonlinear Regression Model. Gary Koop, University of Strathclyde.
1 Summary

Readings: Chapter 5 of textbook.

The nonlinear regression model is of interest in its own right, but it will also allow us to introduce some widely useful Bayesian computational tools:

Metropolis-Hastings algorithms (a way of doing posterior simulation).

Posterior predictive p-values (a way of comparing models which does not involve marginal likelihoods).

The Gelfand-Dey method of marginal likelihood calculation.
2 The Nonlinear Regression Model

Researchers typically work with the linear regression model:
$$y_i = \beta_1 + \beta_2 x_{i2} + \dots + \beta_k x_{ik} + \varepsilon_i.$$

In some cases nonlinear models can be made linear by transformation. For instance,
$$y_i = \beta_1 x_{i2}^{\beta_2} \cdots x_{ik}^{\beta_k} e^{\varepsilon_i}$$
can be logged to produce a linear functional form:
$$\ln(y_i) = \gamma_1 + \beta_2 \ln(x_{i2}) + \dots + \beta_k \ln(x_{ik}) + \varepsilon_i,$$
where $\gamma_1 = \ln(\beta_1)$.
But some functional forms are intrinsically nonlinear, e.g. the constant elasticity of substitution (CES) production function:
$$y_i = \left(\sum_{j=1}^{k} \gamma_j x_{ij}^{\gamma_{k+1}}\right)^{\frac{1}{\gamma_{k+1}}}.$$
There is no way to transform the CES to make it linear. The nonlinear regression model:
$$y_i = \left(\sum_{j=1}^{k} \gamma_j x_{ij}^{\gamma_{k+1}}\right)^{\frac{1}{\gamma_{k+1}}} + \varepsilon_i.$$
General form:
$$y = f(X, \gamma) + \varepsilon,$$
where $y$, $X$ and $\varepsilon$ are defined as in the linear regression model (i.e. $\varepsilon$ is $N(0_N, h^{-1} I_N)$) and $f(X, \gamma)$ is an $N$-vector of functions.

Properties of the Normal distribution give us the likelihood function:
$$p(y \mid \gamma, h) = \frac{h^{\frac{N}{2}}}{(2\pi)^{\frac{N}{2}}} \exp\left[-\frac{h}{2}\{y - f(X,\gamma)\}'\{y - f(X,\gamma)\}\right].$$
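The likelihood above is easy to evaluate in logs for any mean function $f$. A minimal sketch in NumPy; the toy mean function and simulated data are hypothetical, chosen only to illustrate the evaluation:

```python
import numpy as np

def log_likelihood(y, X, gamma, h, f):
    """Log of p(y | gamma, h) for the nonlinear regression model
    y = f(X, gamma) + eps, with eps ~ N(0, h^{-1} I_N)."""
    N = len(y)
    resid = y - f(X, gamma)
    return (0.5 * N * np.log(h) - 0.5 * N * np.log(2 * np.pi)
            - 0.5 * h * resid @ resid)

# Hypothetical example: exponential mean function and simulated data
f = lambda X, g: np.exp(g[0] * X[:, 0])
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))
y = f(X, [0.5]) + rng.normal(scale=0.3, size=50)
ll = log_likelihood(y, X, np.array([0.5]), 1 / 0.3**2, f)
```

The same function can then be called inside any posterior simulator that needs the likelihood at a candidate $\gamma$.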
Prior: any can be used, so let us just call it $p(\gamma, h)$. The posterior is proportional to the likelihood times the prior:
$$p(\gamma, h \mid y) \propto p(\gamma, h)\, \frac{h^{\frac{N}{2}}}{(2\pi)^{\frac{N}{2}}} \exp\left[-\frac{h}{2}\{y - f(X,\gamma)\}'\{y - f(X,\gamma)\}\right].$$
There is no way to simplify this expression or recognize it as having a familiar form for $\gamma$ (e.g. it is not a Normal or t-distribution, etc.). How to do posterior simulation? Importance sampling is one possibility, but here we introduce another: Metropolis-Hastings.
3 The Metropolis-Hastings Algorithm

Notation: $\theta$ is a vector of parameters and $p(y \mid \theta)$, $p(\theta)$ and $p(\theta \mid y)$ are the likelihood, prior and posterior, respectively.

The Metropolis-Hastings algorithm takes draws from a convenient candidate generating density. Let $\theta^*$ indicate a draw taken from this density, which we denote as $q(\theta^{(s-1)}; \theta)$. That is, $\theta^*$ is a draw of a random variable whose density depends on $\theta^{(s-1)}$.

Note: as with the Gibbs sampler (but unlike importance sampling), the current draw depends on the previous draw, so a "chain of draws" is produced. Hence the name Markov Chain Monte Carlo (MCMC).
Importance sampling corrects for the fact that the importance function differs from the posterior by weighting the draws differently from one another. With Metropolis-Hastings, we weight all draws equally, but not all candidate draws are accepted.
The Metropolis-Hastings algorithm always takes the following form:

Step 1: Choose a starting value, $\theta^{(0)}$.
Step 2: Take a candidate draw, $\theta^*$, from the candidate generating density, $q(\theta^{(s-1)}; \theta)$.
Step 3: Calculate an acceptance probability, $\alpha(\theta^{(s-1)}, \theta^*)$.
Step 4: Set $\theta^{(s)} = \theta^*$ with probability $\alpha(\theta^{(s-1)}, \theta^*)$ and set $\theta^{(s)} = \theta^{(s-1)}$ with probability $1 - \alpha(\theta^{(s-1)}, \theta^*)$.
Step 5: Repeat Steps 2, 3 and 4 $S$ times.
Step 6: Take the average of the $S$ draws $g(\theta^{(1)}), \dots, g(\theta^{(S)})$.

These steps will yield an estimate of $E[g(\theta) \mid y]$ for any function of interest.
Note: as with Gibbs sampling, the Metropolis-Hastings algorithm usually requires the choice of a starting value, $\theta^{(0)}$. To make sure that the effect of this starting value has vanished, it is usually wise to discard $S_0$ initial draws.

Intuition for the acceptance probability, $\alpha(\theta^{(s-1)}, \theta^*)$, is given in the textbook (pages 93-94):
$$\alpha(\theta^{(s-1)}, \theta^*) = \min\left[\frac{p(\theta = \theta^* \mid y)\, q(\theta^*; \theta = \theta^{(s-1)})}{p(\theta = \theta^{(s-1)} \mid y)\, q(\theta^{(s-1)}; \theta = \theta^*)},\ 1\right].$$
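The six steps and the acceptance probability above can be sketched directly in code. A minimal illustration, using a hypothetical standard Normal "posterior" as the target and a Normal candidate generating density (all tuning values are illustrative, not from the textbook); the $q$ terms cancel here because the proposal is symmetric, but the general formula is kept for clarity:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # Hypothetical stand-in for log p(theta | y): standard Normal, up to a constant
    return -0.5 * theta**2

def q_draw(prev):
    # Candidate generating density q(theta^(s-1); theta): Normal around prev
    return prev + 0.8 * rng.standard_normal()

def q_logpdf(frm, to):
    # log density of "to" under q started at "frm", up to a constant
    return -0.5 * ((to - frm) / 0.8)**2

S, theta = 5000, 0.0                 # Step 1: starting value theta^(0)
draws = np.empty(S)
for s in range(S):
    cand = q_draw(theta)             # Step 2: candidate draw theta*
    log_alpha = min(0.0, (log_post(cand) + q_logpdf(cand, theta))
                       - (log_post(theta) + q_logpdf(theta, cand)))  # Step 3
    if np.log(rng.uniform()) < log_alpha:   # Step 4: accept with prob alpha
        theta = cand
    draws[s] = theta                 # Step 5: repeat Steps 2-4 S times

est_mean = draws[500:].mean()        # Step 6, after discarding S0 = 500 draws
```

Working with log densities, as here, avoids numerical underflow when likelihoods are tiny.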
3.1 The Independence Chain Metropolis-Hastings Algorithm

The Independence Chain Metropolis-Hastings algorithm uses a candidate generating density which is independent across draws. That is, $q(\theta^{(s-1)}; \theta) = q^*(\theta)$: the candidate generating density does not depend on $\theta^{(s-1)}$.

Useful in cases where a convenient approximation exists to the posterior. This convenient approximation can be used as a candidate generating density. The acceptance probability simplifies to:
$$\alpha(\theta^{(s-1)}, \theta^*) = \min\left[\frac{p(\theta = \theta^* \mid y)\, q^*(\theta = \theta^{(s-1)})}{p(\theta = \theta^{(s-1)} \mid y)\, q^*(\theta = \theta^*)},\ 1\right].$$
The independence chain Metropolis-Hastings algorithm is closely related to importance sampling. This can be seen by noting that, if we define weights analogous to the importance sampling weights (see Chapter 4, equation 4.38):
$$w(\theta_A) = \frac{p(\theta = \theta_A \mid y)}{q^*(\theta = \theta_A)},$$
the acceptance probability in (5.9) can be written as:
$$\alpha(\theta^{(s-1)}, \theta^*) = \min\left[\frac{w(\theta^*)}{w(\theta^{(s-1)})},\ 1\right].$$
In words, the acceptance probability is simply the ratio of importance sampling weights evaluated at the candidate and old draws.
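An independence chain sampler can be written entirely in terms of these log-weights. A small sketch with a hypothetical $N(0,1)$ target and a $N(0, 1.5^2)$ candidate density (both choices are mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_w(theta):
    # log of w(theta) = p(theta | y) / q*(theta), up to a constant,
    # for hypothetical target N(0, 1) and candidate q* = N(0, 1.5^2)
    return -0.5 * theta**2 + 0.5 * (theta / 1.5)**2

S, theta = 5000, 0.0
draws = np.empty(S)
accepts = 0
for s in range(S):
    cand = 1.5 * rng.standard_normal()   # draw from q*, independent of theta^(s-1)
    # Acceptance probability = min[w(cand) / w(theta), 1]
    if np.log(rng.uniform()) < min(0.0, log_w(cand) - log_w(theta)):
        theta, accepts = cand, accepts + 1
    draws[s] = theta

accept_rate = accepts / S
```

Because the candidate density here has heavier tails than the target, the weights are bounded and the acceptance rate is high.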
Setting $q^*(\theta) = f_N(\theta \mid \hat{\theta}_{ML}, \widehat{\text{var}}(\hat{\theta}_{ML}))$ can work well in some cases, where $ML$ denotes maximum likelihood estimates. See the textbook for more detail on choosing candidate generating densities.
3.2 The Random Walk Chain Metropolis-Hastings Algorithm

The Random Walk Chain Metropolis-Hastings algorithm is useful when you cannot find a good approximating density for the posterior. No attempt is made to approximate the posterior; rather, the candidate generating density is chosen to wander widely, taking draws proportionately in various regions of the posterior. It generates candidate draws according to:
$$\theta^* = \theta^{(s-1)} + z,$$
where $z$ is called the increment random variable.
The acceptance probability simplifies to:
$$\alpha(\theta^{(s-1)}, \theta^*) = \min\left[\frac{p(\theta = \theta^* \mid y)}{p(\theta = \theta^{(s-1)} \mid y)},\ 1\right].$$
The choice of density for $z$ determines the form of the candidate generating density. A common choice is the Normal, in which case $\theta^{(s-1)}$ is the mean and the researcher must choose the covariance matrix $\Sigma$:
$$q(\theta^{(s-1)}; \theta) = f_N(\theta \mid \theta^{(s-1)}, \Sigma).$$
$\Sigma$ should be selected so that the acceptance probability tends to be neither too high nor too low.
There is no general rule which gives the optimal acceptance rate. A rule of thumb is that the acceptance probability should be roughly 0.5. A common approach is to set $\Sigma = c\Omega$, where $c$ is a scalar and $\Omega$ is an estimate of the posterior covariance matrix of $\theta$. You can experiment with different values of $c$ until you find one which yields a reasonable acceptance probability. This approach requires finding $\Omega$, an estimate of $\text{var}(\theta \mid y)$ (e.g. $\widehat{\text{var}}(\hat{\theta}_{ML})$).
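The trial-and-error tuning of $c$ can be illustrated with a toy one-dimensional posterior where $\Omega = 1$ (a hypothetical example of mine, not from the slides). Small $c$ gives timid steps that are almost always accepted; large $c$ gives wild steps that are almost always rejected:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta):
    # Hypothetical posterior: standard Normal, so var(theta | y) = Omega = 1
    return -0.5 * theta**2

def rw_accept_rate(c, S=4000):
    """Random walk chain M-H with increment z ~ N(0, c * Omega), Omega = 1.
    Returns the fraction of candidate draws accepted."""
    theta, accepts = 0.0, 0
    for _ in range(S):
        cand = theta + np.sqrt(c) * rng.standard_normal()
        if np.log(rng.uniform()) < min(0.0, log_post(cand) - log_post(theta)):
            theta, accepts = cand, accepts + 1
    return accepts / S

# Experiment with different values of c, as the slide suggests
rates = {c: rw_accept_rate(c) for c in (0.01, 1.0, 100.0)}
```

A value of $c$ near 1 here lands the acceptance rate in the sensible middle range; in practice one keeps adjusting $c$ until the rate is close to the rule of thumb.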
3.3 Metropolis-within-Gibbs

Remember: the Gibbs sampler involves sequentially drawing from $p(\theta_{(1)} \mid y, \theta_{(2)})$ and $p(\theta_{(2)} \mid y, \theta_{(1)})$. Using a Metropolis-Hastings algorithm for either (or both) of the posterior conditionals used in the Gibbs sampler is perfectly acceptable. This statement is also true if the Gibbs sampler involves more than two blocks. Such Metropolis-within-Gibbs algorithms are common, since many models have posteriors where most of the conditionals are easy to draw from, but one or two conditionals do not have a convenient form.
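A Metropolis-within-Gibbs cycle can be sketched on a toy two-block posterior. Here the target is a hypothetical bivariate Normal (my choice, for illustration): block 1 has a Normal conditional and is drawn exactly, while block 2 is updated with a random walk M-H step, as if its conditional were inconvenient:

```python
import numpy as np

rng = np.random.default_rng(4)
rho = 0.6   # hypothetical bivariate Normal posterior: unit variances, corr rho

def log_cond2(x, t1):
    # log p(theta_2 | y, theta_1), up to a constant, for this toy posterior
    return -0.5 * (x - rho * t1)**2 / (1 - rho**2)

S, t1, t2 = 6000, 0.0, 0.0
draws = np.empty((S, 2))
for s in range(S):
    # Block 1: p(theta_1 | y, theta_2) is Normal, so draw from it directly (Gibbs step)
    t1 = rho * t2 + np.sqrt(1 - rho**2) * rng.standard_normal()
    # Block 2: treat the conditional as inconvenient and take one random walk M-H step
    cand = t2 + rng.standard_normal()
    if np.log(rng.uniform()) < min(0.0, log_cond2(cand, t1) - log_cond2(t2, t1)):
        t2 = cand
    draws[s] = (t1, t2)

corr = np.corrcoef(draws[1000:].T)[0, 1]
```

The chain's draws recover the posterior correlation between the two blocks, even though only one block is sampled exactly.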
4 A Measure of Model Fit: The Posterior Predictive P-Value

Bayesians usually use marginal likelihoods, Bayes factors, or posterior odds to compare models. But these can be sensitive to the choice of prior and often cannot be used with noninformative priors. Also, they can only be used to compare models relative to each other (e.g. "Model 1 is better than Model 2"); they cannot be used as diagnostics of absolute model performance (e.g. cannot say "Model 1 is fitting well").

The posterior predictive p-value is fine with noninformative priors and is an absolute measure of performance.
Notation: $y$ is the data actually observed, and $y^\dagger$ is observable data which could be generated from the model under study. $g(\cdot)$ is a function of interest. Its posterior predictive density, $p(g(y^\dagger) \mid y)$, summarizes everything our model says about $g(y^\dagger)$ after seeing the data; it tells us the types of data sets that our model can generate.

We can calculate $g(y)$. If $g(y)$ is in the extreme tails of $p(g(y^\dagger) \mid y)$, then $g(y)$ is not the sort of data characteristic that can plausibly be generated by the model.
Formally, tail area probabilities similar to frequentist p-value calculations can be obtained. The posterior predictive p-value is the probability of the model yielding a data set more extreme than the observed $g(y)$.

To get $p(g(y^\dagger) \mid y)$, use simulation methods similar to predictive simulation: draw from the posterior, then simulate $y^\dagger$ at each draw.
5 Example: Posterior Predictive P-Values in the Nonlinear Regression Model

Need to choose a function of interest, $g(\cdot)$. Example:
$$y_i = f(X_i, \gamma) + \varepsilon_i.$$
We have assumed Normal errors. Is this a good assumption? Normal errors imply the skewness and kurtosis measures below are zero:
$$\text{Skew} = \frac{\sqrt{N} \sum_{i=1}^{N} \varepsilon_i^3}{\left[\sum_{i=1}^{N} \varepsilon_i^2\right]^{\frac{3}{2}}}$$
$$\text{Kurt} = \frac{N \sum_{i=1}^{N} \varepsilon_i^4}{\left[\sum_{i=1}^{N} \varepsilon_i^2\right]^{2}} - 3.$$
Use these as our functions of interest: $g(y) = E[\text{Skew} \mid y]$ or $E[\text{Kurt} \mid y]$, and $g(y^\dagger) = E[\text{Skew} \mid y^\dagger]$ or $E[\text{Kurt} \mid y^\dagger]$.
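The Skew and Kurt measures translate directly into code. A small sketch (the helper names are mine), with a sanity check that both measures are near zero for Normal errors:

```python
import numpy as np

def skew(e):
    # sqrt(N) * sum(e^3) / [sum(e^2)]^(3/2)
    e = np.asarray(e)
    return np.sqrt(len(e)) * np.sum(e**3) / np.sum(e**2)**1.5

def kurt(e):
    # N * sum(e^4) / [sum(e^2)]^2 - 3
    e = np.asarray(e)
    return len(e) * np.sum(e**4) / np.sum(e**2)**2 - 3.0

# Normal errors should give skewness and excess kurtosis near zero
rng = np.random.default_rng(5)
eps = rng.standard_normal(100_000)
```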
It can be shown (by integrating out $h$) that
$$p(y^\dagger \mid \gamma) = f_t\left(y^\dagger \mid f(X, \gamma),\ s^2 I_N,\ N\right), \quad (*)$$
where
$$s^2 = \frac{[y - f(X,\gamma)]'[y - f(X,\gamma)]}{N}.$$
A program for doing this for Skew has the following form (Kurt is similar).
Step 1: Take a draw, $\gamma^{(s)}$, using the posterior simulator.
Step 2: Generate a representative data set, $y^{\dagger(s)}$, from $p(y^\dagger \mid \gamma^{(s)})$ using (*).
Step 3: Set $\varepsilon_i^{(s)} = y_i - f(X_i, \gamma^{(s)})$ for $i = 1, \dots, N$ and evaluate $\text{Skew}^{(s)}$.
Step 4: Set $\varepsilon_i^{\dagger(s)} = y_i^{\dagger(s)} - f(X_i, \gamma^{(s)})$ for $i = 1, \dots, N$ and evaluate $\text{Skew}^{\dagger(s)}$.
Step 5: Repeat Steps 1, 2, 3 and 4 $S$ times.
Step 6: Take the average of the $S$ draws $\text{Skew}^{(1)}, \dots, \text{Skew}^{(S)}$ to get an estimate of $E[\text{Skew} \mid y]$.
Step 7: Calculate the proportion of the $S$ draws $\text{Skew}^{\dagger(1)}, \dots, \text{Skew}^{\dagger(S)}$ which are smaller than your estimate of $E[\text{Skew} \mid y]$ from Step 6.

If the proportion from Step 7 is less than 0.5, it is the posterior predictive p-value; otherwise the p-value is one minus this number. If the posterior predictive p-value is less than 0.05 (or 0.01), this is evidence against a model (i.e. the model is unlikely to have generated data sets of the sort that was observed).
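Steps 1-7 can be sketched end to end on a toy version of the problem. The model, the stand-in "posterior draws" for $\gamma$, and the use of independent t draws in place of the multivariate t in (*) are all simplifying assumptions of mine, made so the sketch stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical toy model y_i = gamma * x_i^2 + eps_i; the "posterior draws"
# for gamma stand in for output from a Metropolis-Hastings simulator.
N = 100
x = rng.uniform(0.5, 2.0, size=N)
f = lambda g: g * x**2
y = f(1.0) + rng.standard_normal(N)                    # observed data
post_draws = 1.0 + 0.05 * rng.standard_normal(2000)    # stand-in posterior output

def skew(e):
    return np.sqrt(len(e)) * np.sum(e**3) / np.sum(e**2)**1.5

obs_skew, rep_skew = [], []
for g in post_draws:                                   # Step 1: posterior draw
    resid = y - f(g)
    s2 = np.mean(resid**2)
    # Step 2: representative data set y-dagger (independent t draws as a
    # simplification of the multivariate t in (*))
    y_rep = f(g) + np.sqrt(s2) * rng.standard_t(N, size=N)
    obs_skew.append(skew(resid))                       # Step 3
    rep_skew.append(skew(y_rep - f(g)))                # Step 4
e_skew = np.mean(obs_skew)                             # Step 6: E[Skew | y] estimate
tail = np.mean(np.array(rep_skew) < e_skew)            # Step 7
p_value = min(tail, 1.0 - tail)
```

Since the data here really were generated with symmetric errors, the resulting p-value should not be extreme.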
5.1 Example

The textbook has an empirical example with the nonlinear regression model (CES production function). Skewness yields a posterior predictive p-value of 0.37; kurtosis yields a posterior predictive p-value of 0.38. This is evidence that the model is fitting these features of the data well. See the figures in the textbook.
6 Calculating Marginal Likelihoods: The Gelfand-Dey Method

The other main method of model comparison (posterior odds/Bayes factors) is based on marginal likelihoods, which can be hard to calculate. Sometimes an analytical formula can be worked out (e.g. the Normal linear regression model with natural conjugate prior). If one model is nested inside another, the Savage-Dickey density ratio can be used. But with the nonlinear regression model, we may wish to compare different choices for $f(\cdot)$: non-nested.
There are a few methods which use posterior simulator output to calculate marginal likelihoods in general cases; Gelfand-Dey is one such method.

Idea: the inverse of the marginal likelihood for a model, $M_i$, which depends on a parameter vector $\theta$, can be written as $E[g(\theta) \mid y, M_i]$ for a particular choice of $g(\cdot)$. Posterior simulators such as the Gibbs sampler or Metropolis-Hastings are designed precisely to estimate such quantities.
Theorem 5.1: The Gelfand-Dey Method of Marginal Likelihood Calculation

Let $p(\theta \mid M_i)$, $p(y \mid \theta, M_i)$ and $p(\theta \mid y, M_i)$ denote the prior, likelihood and posterior, respectively, for model $M_i$ defined on the region $\Theta$. If $f(\theta)$ is any p.d.f. with support contained in $\Theta$, then
$$E\left[\frac{f(\theta)}{p(\theta \mid M_i)\, p(y \mid \theta, M_i)} \,\middle|\, y, M_i\right] = \frac{1}{p(y \mid M_i)}.$$
Proof: see textbook page 105.
The theorem says that for any p.d.f. $f(\theta)$ we can simply set
$$g(\theta) = \frac{f(\theta)}{p(\theta \mid M_i)\, p(y \mid \theta, M_i)}$$
and use posterior simulator output to estimate $E[g(\theta) \mid y, M_i]$. Even $f(\theta) = 1$ works (in theory). But, to work well in practice, $f(\theta)$ must be chosen very carefully. Theory says the estimator converges best if $\frac{f(\theta)}{p(\theta \mid M_i)\, p(y \mid \theta, M_i)}$ is bounded. In practice, $p(\theta \mid M_i)\, p(y \mid \theta, M_i)$ can be near zero in the tails of the posterior.
One strategy: let $f(\cdot)$ be a Normal density similar to the posterior, but with the tails chopped off. Let $\hat{\theta}$ and $\hat{\Sigma}$ be estimates of $E(\theta \mid y, M_i)$ and $\text{var}(\theta \mid y, M_i)$ obtained from the posterior simulator. For some probability $p \in (0,1)$, let $\hat{\Theta}$ denote the support of $f(\theta)$, defined by
$$\hat{\Theta} = \left\{\theta : (\theta - \hat{\theta})' \hat{\Sigma}^{-1} (\theta - \hat{\theta}) \le \chi^2_{1-p}(k)\right\}.$$
In words: chop off the tails with $p$ probability in them. Let $f(\theta)$ be the Normal density truncated to the region $\hat{\Theta}$.
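This truncated-Normal strategy can be checked on a toy conjugate model where the marginal likelihood has a closed form. The model, prior, and the use of exact posterior draws in place of MCMC output are all my assumptions, chosen so the Gelfand-Dey estimate can be compared against the exact answer:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy conjugate model: y_i ~ N(theta, 1) for i = 1..n with prior theta ~ N(0, 1),
# so the posterior is N(m, v) and the marginal likelihood is known exactly.
n = 20
y = rng.normal(0.3, 1.0, size=n)
v = 1.0 / (n + 1.0)
m = v * y.sum()

log_prior = lambda t: -0.5 * np.log(2 * np.pi) - 0.5 * t**2
def log_lik(t):
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * ((y[:, None] - t)**2).sum(axis=0)

draws = m + np.sqrt(v) * rng.standard_normal(50_000)   # stand-in posterior output

# f(.) = Normal(m, v) truncated to the 95% region: chi2_{0.95}(1) = 3.841
inside = (draws - m)**2 / v <= 3.841
log_f = -0.5 * np.log(2 * np.pi * v) - 0.5 * (draws - m)**2 / v - np.log(0.95)
terms = np.where(inside, np.exp(log_f - log_prior(draws) - log_lik(draws)), 0.0)
log_ml_gd = -np.log(terms.mean())       # Gelfand-Dey estimate of log p(y | M)

# Exact log marginal likelihood: y ~ N(0, I_n + 1 1') under this model
cov = np.eye(n) + np.ones((n, n))
log_ml = (-0.5 * n * np.log(2 * np.pi)
          - 0.5 * np.linalg.slogdet(cov)[1]
          - 0.5 * y @ np.linalg.solve(cov, y))
```

Because the truncation keeps $f(\theta)/[p(\theta)p(y\mid\theta)]$ bounded, the estimator has low variance here and matches the exact value closely.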
More informationService courses for graduate students in degree programs other than the MS or PhD programs in Biostatistics.
Course Catalog In order to be assured that all prerequisites are met, students must acquire a permission number from the education coordinator prior to enrolling in any Biostatistics course. Courses are
More informationChenfeng Xiong (corresponding), University of Maryland, College Park (cxiong@umd.edu)
Paper Author (s) Chenfeng Xiong (corresponding), University of Maryland, College Park (cxiong@umd.edu) Lei Zhang, University of Maryland, College Park (lei@umd.edu) Paper Title & Number Dynamic Travel
More informationLecture 3: Linear methods for classification
Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,
More informationHomework 5 Solutions
Homework 5 Solutions 4.2: 2: a. 321 = 256 + 64 + 1 = (01000001) 2 b. 1023 = 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = (1111111111) 2. Note that this is 1 less than the next power of 2, 1024, which
More informationA Coefficient of Variation for Skewed and HeavyTailed Insurance Losses. Michael R. Powers[ 1 ] Temple University and Tsinghua University
A Coefficient of Variation for Skewed and HeavyTailed Insurance Losses Michael R. Powers[ ] Temple University and Tsinghua University Thomas Y. Powers Yale University [June 2009] Abstract We propose a
More informationTESTING THE ONEPART FRACTIONAL RESPONSE MODEL AGAINST AN ALTERNATIVE TWOPART MODEL
TESTING THE ONEPART FRACTIONAL RESPONSE MODEL AGAINST AN ALTERNATIVE TWOPART MODEL HARALD OBERHOFER AND MICHAEL PFAFFERMAYR WORKING PAPER NO. 201101 Testing the OnePart Fractional Response Model against
More informationEcon 371 Problem Set #3 Answer Sheet
Econ 371 Problem Set #3 Answer Sheet 4.3 In this question, you are told that a OLS regression analysis of average weekly earnings yields the following estimated model. AW E = 696.7 + 9.6 Age, R 2 = 0.023,
More information88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a
88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small
More informationMarkov random fields and Gibbs measures
Chapter Markov random fields and Gibbs measures 1. Conditional independence Suppose X i is a random element of (X i, B i ), for i = 1, 2, 3, with all X i defined on the same probability space (.F, P).
More informationCommon sense, and the model that we have used, suggest that an increase in p means a decrease in demand, but this is not the only possibility.
Lecture 6: Income and Substitution E ects c 2009 Je rey A. Miron Outline 1. Introduction 2. The Substitution E ect 3. The Income E ect 4. The Sign of the Substitution E ect 5. The Total Change in Demand
More informationL10: Probability, statistics, and estimation theory
L10: Probability, statistics, and estimation theory Review of probability theory Bayes theorem Statistics and the Normal distribution Least Squares Error estimation Maximum Likelihood estimation Bayesian
More informationThe Binomial Distribution
The Binomial Distribution James H. Steiger November 10, 00 1 Topics for this Module 1. The Binomial Process. The Binomial Random Variable. The Binomial Distribution (a) Computing the Binomial pdf (b) Computing
More informationProbability and Statistics
Probability and Statistics Syllabus for the TEMPUS SEE PhD Course (Podgorica, April 4 29, 2011) Franz Kappel 1 Institute for Mathematics and Scientific Computing University of Graz Žaneta Popeska 2 Faculty
More information1 Error in Euler s Method
1 Error in Euler s Method Experience with Euler s 1 method raises some interesting questions about numerical approximations for the solutions of differential equations. 1. What determines the amount of
More informationIntroduction. Agents have preferences over the two goods which are determined by a utility function. Speci cally, type 1 agents utility is given by
Introduction General equilibrium analysis looks at how multiple markets come into equilibrium simultaneously. With many markets, equilibrium analysis must take explicit account of the fact that changes
More informationTHE USE OF STATISTICAL DISTRIBUTIONS TO MODEL CLAIMS IN MOTOR INSURANCE
THE USE OF STATISTICAL DISTRIBUTIONS TO MODEL CLAIMS IN MOTOR INSURANCE Batsirai Winmore Mazviona 1 Tafadzwa Chiduza 2 ABSTRACT In general insurance, companies need to use data on claims gathered from
More informationIDENTIFICATION IN A CLASS OF NONPARAMETRIC SIMULTANEOUS EQUATIONS MODELS. Steven T. Berry and Philip A. Haile. March 2011 Revised April 2011
IDENTIFICATION IN A CLASS OF NONPARAMETRIC SIMULTANEOUS EQUATIONS MODELS By Steven T. Berry and Philip A. Haile March 2011 Revised April 2011 COWLES FOUNDATION DISCUSSION PAPER NO. 1787R COWLES FOUNDATION
More informationBayesian Phylogeny and Measures of Branch Support
Bayesian Phylogeny and Measures of Branch Support Bayesian Statistics Imagine we have a bag containing 100 dice of which we know that 90 are fair and 10 are biased. The
More informationTailDependence an Essential Factor for Correctly Measuring the Benefits of Diversification
TailDependence an Essential Factor for Correctly Measuring the Benefits of Diversification Presented by Work done with Roland Bürgi and Roger Iles New Views on Extreme Events: Coupled Networks, Dragon
More informationIntroduction to General and Generalized Linear Models
Introduction to General and Generalized Linear Models General Linear Models  part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK2800 Kgs. Lyngby
More informationAnalysis of Bayesian Dynamic Linear Models
Analysis of Bayesian Dynamic Linear Models Emily M. Casleton December 17, 2010 1 Introduction The main purpose of this project is to explore the Bayesian analysis of Dynamic Linear Models (DLMs). The main
More information