Single Agent Dynamics: Dynamic Discrete Choice Models



Part II: Estimation
Günter J. Hitsch
The University of Chicago Booth School of Business
2013

Overview
The Estimation Problem; Structural vs. Reduced Form of the Model
Identification
Nested Fixed Point (NFP) Estimators
Unobserved State Variables
Example: The Bayesian Learning Model
Model Estimation in the Presence of Unobserved State Variables
Unobserved Heterogeneity
Postscript: Sequential vs. Myopic Learning
The MPEC Approach to ML Estimation
Example: Estimation of the Durable Goods Adoption Problem Using NFP and MPEC Approaches
Other Topics
References

Data
We observe the choice behavior of N individuals. For each individual i, define a vector of states and choices for periods t = 0, ..., T_i:
Q_i = (x_it, a_it), t = 0, ..., T_i
The full data vector is Q = (Q_1, ..., Q_N). We will initially assume that all individuals are identical and that the components of the state x are observed to us. We will discuss the case of (permanent) heterogeneity and unobserved state variables later.

Data-generating process
We assume that the choice data are generated based on the choice-specific value functions
v_j(x) = u_j(x) + β ∫ w(x′) f(x′|x, j) dx′
We observe action a_it = k conditional on the state x_it if and only if
v_k(x_it) + ε_kit ≥ v_j(x_it) + ε_jit  for all j ∈ A, j ≠ k
The CCP π_k(x_it) is the probability that the inequality above holds, given the distribution of the latent utility components, g(ε).

Static vs. dynamic discrete choice models
Using the concept of choice-specific value functions, the predictions of a dynamic discrete choice model can be expressed in the same manner as the predictions of a static discrete choice model. Therefore, it appears that we can simply estimate each v_j(x) by approximating it using a flexible functional form, e.g. a polynomial or a linear combination of basis functions: v_j(x) ≈ ψ_j(x; λ).
In a static discrete choice model, we typically start directly with a parametric specification of each choice-specific utility function, u_j(x; θ). Unlike u_j(x), v_j(x) is not a structural object, but the solution of a dynamic decision process that depends on the model primitives u_j(x), f(x′|x, a), β, and g(ε). Why is this important? Because it affects the questions that the estimated model can answer.

Example: The storable goods demand problem
Remember that x_t = (i_t, p_t), i_t ∈ {0, 1, ...}, and p_t ∈ {P(1), ..., P(L)}. The random utility components are Type I Extreme Value, hence the discrete values v_j(x) can be estimated just as in a standard multinomial logit model. Suppose we estimate v_j(x) based on data generated from the high/low promotion process discussed in the example in Part I. We want to evaluate a policy where the price is permanently set at the low price level. Will knowledge of v_j(x) allow us to answer this question?

[Figure: CCPs, pricing with promotions. Purchase probability against inventory (units), for small and large pack sizes, at the promoted and regular prices.]

[Figure: CCPs, only the low price. Purchase probability against inventory (units), for small and large pack sizes, at the promoted price.]

Example: The storable goods demand problem
Based on the data generated under the price promotion process, the sales volume ratio between promoted and regular price periods is 3.78/0.358 = 10.6. However, when we permanently lower the price, the sales ratio is only 0.991/0.358 = 2.8.

Example: The storable goods demand problem
Evaluation of a policy where the price is permanently set at the low price level: We are not just changing a component of x (the price), but also the price expectations, f(x′|x, a). Hence, v_j(x) will also change to ṽ_j(x). Unless consumer choices under the alternative price process are observed in our data, we cannot estimate ṽ_j(x). Instead, we must predict ṽ_j(x) using knowledge of the model primitives u_j(x), f(x′|x, a), β, and g(ε). The same problem does not arise in a static discrete choice model (if the static discrete choice model accurately describes consumer behavior).

Structural and reduced form of the model
(u_j(x), f(x′|x, a), β, g(ε)) is the structural form of the dynamic discrete choice model. (v_j(x), g(ε)) is the reduced form of the model. The reduced form describes the joint distribution of the data, but typically cannot predict the causal effect of a marketing policy intervention. Policy predictions are therefore not only dependent on the statistical properties of the model and model parameters, but also on the behavioral assumptions we make about how decisions are made. Good background reading on structural estimation and the concept of inferring causality from data that are non-experimentally generated: Reiss and Wolak (2007).

Identification
All behavioral implications of the model are given by the collection of CCPs {π_j(x) : j ∈ A, x ∈ X}. Suppose we have infinitely many data points, so that we can observe the CCPs {π_j(x)}. For example, if X is finite we can estimate π_j(x) as the frequency of observing j conditional on x. We assume we know the distribution of the random utility components, g(ε). Could we then uniquely infer the model primitives describing behavior from the data: u_j(x), f(x′|x, j), β? I.e., is the structural form of the model identified (in a non-parametric sense)?

Identification
Hotz and Miller (1993): If ε has a density with respect to the Lebesgue measure on R^(K+1) that is strictly positive, then we can invert the observed CCPs to infer the choice-specific value function differences:
v_j(x) − v_0(x) = ψ_j^(-1)(π(x))  for all j ∈ A
If ε_j is Type I Extreme Value distributed, this inversion has a closed form:
v_j(x) − v_0(x) = log(π_j(x)) − log(π_0(x))
We see that β = 0, u_0(x) ≡ 0, and u_j(x) = log(π_j(x)) − log(π_0(x)) completely rationalize the data!

Identification
Theorem. Suppose we know the distribution of the random utility components, g(ε), the consumers' beliefs about the evolution of the state vector, f(x′|x, j), and the discount factor β. Assume that u_0(x) ≡ 0. Let the CCPs π_j(x) be given for all x and j ∈ A. Then: (i) we can infer the unique choice-specific value functions v_j(x) consistent with the consumer decision model; (ii) the utilities u_j(x) are identified for all states x and choices j.
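The logit closed form of the inversion can be verified numerically. A minimal sketch (my own toy numbers, not from the lecture) of the Hotz-Miller inversion with Type I Extreme Value errors:

```python
# Sketch of the Hotz-Miller inversion under Type I EV errors:
# v_j(x) - v_0(x) = log(pi_j(x)) - log(pi_0(x)), action 0 as reference.
import math

def hotz_miller_invert(ccp_row):
    """CCPs pi_j(x) at one state x -> value differences v_j(x) - v_0(x)."""
    return [math.log(p) - math.log(ccp_row[0]) for p in ccp_row]

def logit_ccp(v_diff):
    """Map value differences back to CCPs via the logit formula."""
    denom = sum(math.exp(v) for v in v_diff)
    return [math.exp(v) / denom for v in v_diff]

ccp = [0.5, 0.3, 0.2]                 # hypothetical CCPs at one state
v_diff = hotz_miller_invert(ccp)      # reference entry is 0 by construction
recovered = logit_ccp(v_diff)         # inverts back to the original CCPs
```

Applying the logit formula to the recovered differences reproduces the CCPs exactly, which is the sense in which the reduced form (v_j, g) rationalizes the data.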

(Non)identification proof
Define the difference in choice-specific value functions: ṽ_j(x) ≡ v_j(x) − v_0(x). By the Hotz-Miller (1993) inversion theorem, ṽ_j(x) = ψ_j^(-1)(π(x)) for all j ∈ A. The expected value function can be expressed as a function of the data and the reference alternative:
w(x) = ∫ max_{k∈A} {v_k(x) + ε_k} g(ε) dε
     = ∫ max_{k∈A} {ṽ_k(x) + ε_k} g(ε) dε + v_0(x)

(Non)identification proof
Choice-specific value of the reference alternative:
v_0(x) = u_0(x) + β ∫ w(x′) f(x′|x, 0) dx′
       = β ∫∫ max_{k∈A} {ṽ_k(x′) + ε_k} g(ε) f(x′|x, 0) dε dx′ + β ∫ v_0(x′) f(x′|x, 0) dx′
This defines a contraction mapping and hence has a unique solution. Recover all choice-specific value functions: v_j(x) = ψ_j^(-1)(π(x)) + v_0(x). Recover the utility functions:
u_j(x) = v_j(x) − β ∫ w(x′) f(x′|x, j) dx′

(Non)identification
The proof of the proposition shows how to calculate the utilities u_j(x) from the data and knowledge of β and f(x′|x, j). The proof also shows that if β′ ≠ β or f′(x′|x, j) ≠ f(x′|x, j), then u′_j(x) ≠ u_j(x) in general.
Implications: If either the discount factor or the consumer's belief about the state evolution is unknown, the utility function is not identified. I.e., the model primitives u_j(x), β, f(x′|x, j) are non-parametrically unidentified.

Implications of the (non)identification result
In practice: Researchers assume a given discount factor β calibrated from some overall interest rate r, such that β = 1/(1 + r) (from the Euler equation u′(c_t) = β E_t[(1 + r) u′(c_{t+1})]). Typically, a discount factor corresponding to a 5%-10% interest rate is used. Assume rational expectations: the consumer's subjective belief f(x′|x, j) coincides with the actual transition process of x_t. This allows us to estimate f(x′|x, j) from the data.

Identifying β based on exclusion restrictions
A "folk theorem": β is identified if there are states that do not affect the current utility but do affect the transition probability of x. More formally: suppose there are two states x_1 and x_2 such that u_j(x_1) = u_j(x_2) but f(x′|x_1, j) ≠ f(x′|x_2, j). Intuition: variation in x does not change the current utility but does change the future expected value, and thus β is identified:
v_j(x) = u_j(x) + β ∫ w(x′) f(x′|x, j) dx′

Identifying β based on exclusion restrictions
The folk theorem above is usually attributed to Magnac and Thesmar (2002), but the actual statement provided in their paper is more complicated. Define the current value function U_j(x), the expected difference between (i) choosing action j today, action 0 tomorrow, and then behaving optimally afterwards, and (ii) choosing action 0 today and tomorrow and behaving optimally afterwards:
U_j(x) ≡ u_j(x) + β ∫ v_0(x′) p(x′|x, j) dx′ − u_0(x) − β ∫ v_0(x′) p(x′|x, 0) dx′
Also, define
V_j(x) ≡ ∫ (w(x′) − v_0(x′)) p(x′|x, j) dx′

Identifying β based on exclusion restrictions
The proposition proved in Magnac and Thesmar (2002):
Theorem. Suppose there are states x_1 and x_2, x_1 ≠ x_2, such that U_j(x_1) = U_j(x_2) for some action j. Furthermore, suppose that (V_j(x_1) − V_0(x_1)) − (V_j(x_2) − V_0(x_2)) ≠ 0. Then the discount factor β is identified.
Note: The assumptions of the theorem require knowledge of the solution of the decision process and are thus difficult to verify.

Identifying β based on exclusion restrictions
Is the folk theorem true? Although widely credited to Magnac and Thesmar (2002), I believe this attribution is false. However, a recent paper by Fang and Wang (2013), "Estimating Dynamic Discrete Choice Models with Hyperbolic Discounting, with an Application to Mammography Decisions," seems to prove the claim in the folk theorem.

Identification of β from data on static and dynamic decisions
Yao et al. (2012): Observe consumer choice data across two scenarios.
Examples:
Static: the current choice does not affect future payoffs.
Dynamic: cell phone customers are initially on a linear usage plan, and are then switched to a three-part tariff. Under the three-part tariff, current cell phone usage affects the future per-minute rate. Any finite horizon problem.
Yao et al. prove identification of β for continuous controls. No proof is provided for discrete choices, but (my guess) the statement is true more generally.

Identification of β using stated choice data
Dubé, Hitsch, and Jindal (2013): Conjoint design to infer product adoption choices. Present subjects with forecasts of future states (e.g. prices). Collect stated choice data on adoption timing. Identification assumption: subjects take the forecast of future states as given. This allows identification of the discount factor β, or more generally a discount function β(t).
Intuition: Treatments are manipulations of states that change current-period utilities by the same amount in period t = 0 and t > 0. The relative effect on choice probabilities, and hence on choice-specific value differences, at t > 0 versus t = 0 identifies β(t).

Goal of estimation
We would like to recover the structural form of the dynamic discrete choice model, (u_j(x), f(x′|x, a), β, g(ε)). We assume: g(ε) is known; the decision makers have rational expectations, and thus f(x′|x, a) is the true transition density of the data; typically, we also assume that β is known. Assume that the utility functions and transition densities are parametric functions indexed by θ: u_j(x; θ) and f(x′|x, a; θ). Goal: develop an estimator for θ.

Likelihood function
The likelihood contribution of individual i is
l_i(Q_i; θ) = Pr{a_i0 | x_i0; θ} p(x_i0; θ) ∏_{t=1}^{T_i} Pr{a_it | x_it; θ} f(x_it | x_{i,t-1}, a_{i,t-1}; θ)
The likelihood function is
l(Q; θ) = ∏_{i=1}^{N} l_i(Q_i; θ)
Define the maximum likelihood estimator,
θ̂_NFP = arg max_θ {log(l(Q; θ))}

The nested fixed point estimator
θ̂_NFP is a nested fixed point estimator. We employ a maximization algorithm that searches over possible values of θ. Given θ, we first solve for w(x; θ) as the fixed point of the integrated Bellman equation. Given w(x; θ), we calculate the choice-specific value functions and then the CCPs π_j(x; θ). This allows us to assemble l(Q; θ). The solution of the fixed point w(x; θ) is nested in the maximization algorithm. This estimator is computationally intensive, as we need to solve for the expected value function at each θ!
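The inner loop of the NFP estimator can be sketched as follows. This is a stylized toy model of my own (not the lecture's code), with a made-up utility function and known transitions: for each candidate θ, solve the integrated Bellman equation for w(x; θ) by fixed-point iteration (with Type I EV errors it has a log-sum-exp form), form the choice-specific values and logit CCPs, and evaluate the log-likelihood of the observed (state, action) pairs.

```python
# Toy nested fixed point sketch: inner value-function solve per candidate theta.
import math

BETA = 0.95
N_STATES, N_ACTIONS = 3, 2
# F[a][x][x2] = f(x2 | x, a): transition probabilities, assumed known here.
F = [[[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]],
     [[0.3, 0.4, 0.3], [0.3, 0.4, 0.3], [0.3, 0.4, 0.3]]]

def utility(x, a, theta):
    """Toy parametric flow utility u_a(x; theta)."""
    return theta * x - (1.0 if a == 1 else 0.0)

def solve_values(theta, tol=1e-10):
    """Fixed point of w(x) = log sum_a exp(v_a(x)); returns v_a(x) at the fixed point."""
    w = [0.0] * N_STATES
    while True:
        v = [[utility(x, a, theta)
              + BETA * sum(F[a][x][x2] * w[x2] for x2 in range(N_STATES))
              for a in range(N_ACTIONS)] for x in range(N_STATES)]
        w_new = [math.log(sum(math.exp(vxa) for vxa in vx)) for vx in v]
        if max(abs(a - b) for a, b in zip(w, w_new)) < tol:
            return v
        w = w_new

def log_likelihood(data, theta):
    v = solve_values(theta)   # the inner fixed-point solve, repeated at each theta
    ll = 0.0
    for x, a in data:         # logit CCP: pi_a(x) = exp(v_a(x)) / sum_j exp(v_j(x))
        ll += v[x][a] - math.log(sum(math.exp(vxa) for vxa in v[x]))
    return ll

data = [(0, 0), (1, 1), (2, 1)]        # hypothetical observed (x_it, a_it) pairs
ll = log_likelihood(data, theta=0.5)   # an outer optimizer would search over theta
```

The computational burden the slides describe is visible here: every likelihood evaluation triggers a full contraction-mapping solve, which is exactly what the MPEC approach discussed later avoids.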

Estimating θ in two steps
Separate θ = (θ_u, θ_f) into components that affect the utility functions, u_j(x; θ_u), and the transition densities, f(x′|x, a; θ_f). Note the log-likelihood contribution of individual i:
log(l_i(Q_i; θ)) = ∑_{t=1}^{T_i} log(Pr{a_it | x_it; θ_u, θ_f}) + ∑_{t=1}^{T_i} log(f(x_it | x_{i,t-1}, a_{i,t-1}; θ_f)) + ...
This expression suggests that we can estimate θ in two steps:
1. Find a consistent estimator for θ_f (need not be an ML estimator).
2. Conditional on θ̂_f, maximize the first sum above to find θ̂_u (adjust standard errors to account for sampling error from the first step).

Bayesian learning models in marketing
There is a large literature on demand for experience goods, following Erdem and Keane (1996). A product (or service) needs to be consumed or used to fully ascertain its utility. Examples: a product with unknown flavor or texture; a pharmaceutical drug with unknown match value, e.g. effectiveness or side effects. Importance: consumer learning may cause inertia in brand choices. Inertia = state dependence in a purely statistical sense. If true, this has implications for pricing and other marketing actions. Optimal sequential learning about demand: Hitsch (2006).

Bayesian learning: General model structure
Learning about an unknown parameter vector ϑ ∈ R^N. Information acquisition: sampling (choosing) j ∈ A yields a signal ν_j ~ f_j(·|ϑ). Knowledge: the prior Π_t(ϑ) is the knowledge about ϑ before any additional information in period t is sampled. Learning through Bayesian updating:
Π_{t+1}(ϑ) = Π_t(ϑ | ν_jt) ∝ f_j(ν_jt | ϑ) Π_t(ϑ)
The posterior at the end of period t is the prior at the beginning of period t + 1.

General model structure
Notation: Π_t ≡ Π_t(ϑ). State vector x_t = (Π_t, z_t). Ignore for now that Π_t is infinite-dimensional in general. Utility:
u_j(x_t) = E(u_j(z_t, ν_jt, ϑ) | x_t) = ∫∫ u_j(z_t, ν, ϑ) f_j(ν | ϑ) Π_t(ϑ) dν dϑ
This is the expected utility given z_t and the agent's belief Π_t(ϑ) about ϑ.

General model structure
We assume the decision maker is able to anticipate how her knowledge evolves, conditional on the potential information that she may receive in this period. This allows us to define a corresponding Markov transition probability f(Π_{t+1} | Π_t, a_t) ⇒ the learning model is a special case of the dynamic discrete choice framework.

Review: Normal linear regression model with conjugate priors
Sample y ~ N(Xβ, Σ), y ∈ R^m, β ∈ R^k. Suppose Σ is known. (This can be generalized with an inverse-Wishart prior on Σ, but that is rarely (never?) used in the extant consumer learning literature.) Goal: inference about β. Prior: β ~ N(β_0, Σ_0). Then the posterior is also normal: p(β | y, X, Σ) = N(β_n, Σ_n), with
Σ_n = (Σ_0^(-1) + X^T Σ^(-1) X)^(-1)
β_n = (Σ_0^(-1) + X^T Σ^(-1) X)^(-1) (Σ_0^(-1) β_0 + X^T Σ^(-1) y)

The Erdem and Keane (1996) learning model
Consumers make purchase (= consumption) decisions over time, t = 0, 1, .... ϑ_j is the mean attribute level (quality) of product j, which affects utility. Consumers face uncertainty over ϑ_j. The realized utility from consumption in period t is affected by the realized attribute level:
ν_jt = ϑ_j + ε_jt, ε_jt ~ N(0, σ_ν²)
This reflects inherent variability in the attribute level and variability in the consumer's perception of the attribute level. Utility:
u_j(z_t, ν_jt) = ν_jt − r ν_jt² − P_jt
r > 0: risk aversion.

Information sources
Assume for notational simplicity that there is only one product with an unknown attribute level, and hence we can drop the j index. Learning from consumption:
ν_t = ϑ + ε_t, ε_t ~ N(0, σ_ν²)
Let t_1, t_2, ... denote the time periods when a consumer receives a consumption signal. Let H_t = {t_k : t_k < t} be the time periods prior to t when a consumption signal was received, and N_t = |H_t| the corresponding number of signals. The mean consumption signal prior to t is
ν̄_t = (1/N_t) ∑_{τ ∈ H_t} ν_τ

Prior
Prior in the first decision period: Π_0(ϑ) = N(µ_0, σ_0²) = N(ϑ̄, σ_0²). ϑ̄ is the product class mean attribute level. Refer back to the slide on the normal linear regression model with conjugate priors, and define: β = ϑ, X = (1, ..., 1) (a vector of length N_t), Σ = a diagonal matrix with elements σ_ν², β_0 = ϑ̄, Σ_0 = σ_0². The elements in y correspond to the consumption and advertising signals in H_t (order does not matter).

Posterior
The results on the normal linear regression model show that the prior in period t (= the posterior given the signals ν_τ for τ < t) is also normal: Π_t(ϑ) = N(µ_t, σ_t²), with
σ_t² = Σ_n = (1/σ_0² + N_t/σ_ν²)^(-1)
µ_t = β_n = (1/σ_0² + N_t/σ_ν²)^(-1) (µ_0/σ_0² + N_t ν̄_t/σ_ν²)
Remember: the inverse of the covariance matrix is also called the precision matrix. Precisions add up in the normal linear regression model with conjugate priors. This shows that we can equate the prior in period t with the mean and variance of a normal distribution: Π_t = (µ_t, σ_t²).

Transition of the prior
Conditional on the current prior Π_t = (µ_t, σ_t²), and only in periods t when the consumer receives a consumption signal ν_t:
σ_{t+1}² = (1/σ_t² + 1/σ_ν²)^(-1)
µ_{t+1} = (1/σ_t² + 1/σ_ν²)^(-1) (µ_t/σ_t² + ν_t/σ_ν²)
We can correspondingly derive the Markov transition density f(Π_{t+1} | Π_t, a_t). σ_t² evolves deterministically. µ_{t+1} is normally distributed with mean E(µ_{t+1} | Π_t, a_t) = µ_t. Note that var(µ_{t+1} | Π_t, a_t) ≠ σ_{t+1}².
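The recursive updating formulas above can be checked with a few lines of code. A small sketch (with made-up signal values) showing that precisions add, so sequential one-signal updates reproduce the batch formula with N signals:

```python
# Conjugate-normal updating: precisions add up, and the posterior mean is a
# precision-weighted average of the prior mean and the signal.
def update(mu, sigma2, signal, sigma2_nu):
    """One-step update of the prior (mu_t, sigma2_t) after signal nu_t."""
    post_prec = 1.0 / sigma2 + 1.0 / sigma2_nu       # precisions add up
    sigma2_new = 1.0 / post_prec
    mu_new = sigma2_new * (mu / sigma2 + signal / sigma2_nu)
    return mu_new, sigma2_new

mu, s2 = 0.0, 4.0                  # hypothetical prior (mu_0, sigma_0^2)
s2_nu = 1.0                        # signal variance sigma_nu^2
signals = [1.2, 0.8, 1.0]          # made-up consumption signals
for nu in signals:
    mu, s2 = update(mu, s2, nu, s2_nu)
# Batch check: sigma_t^2 = (1/4 + 3/1)^(-1), mu_t = sigma_t^2 * (0 + 3*1.0/1).
```

The agreement between the sequential and batch calculations is exactly the "precisions add up" property the slide highlights.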

Model solution
Reintroduce the j subscript. Priors are independent, and Π_t = (Π_1t, ..., Π_Jt). Recall that ν_jt = ϑ_j + ε_jt. Expected utility:
u_j(x_t) = E(u_j(z_t, ν_jt) | Π_t, z_t) = µ_jt − r µ_jt² − r(σ_jt² + σ_ν²) − P_jt
State transition:
f(x_{t+1} | x_t, a_t) = f(z_{t+1} | z_t, a_t) ∏_{j=1}^{J} f(Π_{j,t+1} | Π_jt, a_t)
⇒ a well-defined dynamic discrete choice model with unique choice-specific value functions characterizing optimal decisions.

Estimation
Conditional on x_t = (z_t, Π_t) we can calculate the CCPs Pr{a_t = j | z_t, Π_t} = π_j(z_t, Π_t). Data: Q_t = a_t, Q^t = (Q_0, ..., Q_t), and Q = (Q_0, ..., Q_T); z^t = (z_0, ..., z_t) and z = (z_0, ..., z_T). Assume f(z_{t+1} | z_t, a_t) is known (estimated from the data in a preliminary estimation step). Difficulty in constructing the likelihood: Π_t is an unobserved state. We need to integrate out the unobserved states from the likelihood function.

Constructing the priors
Define ε_t = (ε_1t, ..., ε_Jt) and ε = (ε_0, ..., ε_{T-1}). Conditional on ϑ_j (an estimated parameter) and ε, we know the signals ν_jt = ϑ_j + ε_jt. By assumption, Π_j0(ϑ_j) = N(µ_j0, σ_j0²) = N(ϑ̄, σ_0²). Given ε we can then infer the sequence of priors Π_0, Π_1, ..., Π_{T-1}:
σ_{j,t+1}² = (1/σ_jt² + 1/σ_ν²)^(-1)
µ_{j,t+1} = (1/σ_jt² + 1/σ_ν²)^(-1) (µ_jt/σ_jt² + ν_jt/σ_ν²)

Likelihood function
θ is a vector of model parameters. We know from the discussion above that Pr{a_t = j | z_t, Π_t; θ} = Pr{a_t = j | Q^{t-1}, z^t, ε; θ}. Define
l(θ | Q, z) ≡ f(Q | z; θ) = ∫ (∏_{t=0}^{T} Pr{Q_t | Q^{t-1}, z^t, ε; θ}) f(ε; θ) dε
This is a high-dimensional integral. Simulation estimator: draw ε^(r) and average over the draws to simulate the likelihood. l(θ | Q, z) = l_i(θ | Q_i, z_i) is the likelihood contribution for one household i. The likelihood components are conditionally independent across households given θ ⇒ the joint likelihood is l(θ | Q, z) = ∏_i l_i(θ | Q_i, z_i).

Recap: Unobserved states and the likelihood function
General approach:
1. Find a vector of random variables ε with density p(ε; θ) that allows us to reconstruct the sequence of unobserved states.
2. Formulate the likelihood conditional on ε.
3. Integrate over ε to calculate the likelihood conditional on the data and θ only.
Note: This requires that numerical integration with respect to ε, or simulation from p(ε; θ), is possible.

Initial conditions
Assumption so far: the first sample period is the first period when the consumers start buying the products. What if the consumers made product choices before the sample period? Would Π_j0(·) = N(ϑ̄, σ_0²) be a valid assumption? Solution in Erdem and Keane (1996): Split the sample periods into two parts, t = 0, ..., T_0 − 1 and t = T_0, ..., T. Make some assumption about the prior in period t = 0. Simulate the evolution of the priors conditional on the observed product choices and ε for periods t ≤ T_0. Conditional on the simulated draw of the prior, formulate the likelihood using data from periods t = T_0, ..., T. Note: The underlying assumption is that T_0 is large enough that the effect of the initial condition in t = 0 vanishes.

Unobserved heterogeneity
The most common methods to incorporate unobserved heterogeneity in dynamic discrete choice models are based on a finite mixture or latent class approach. Assume there is a finite number of M consumer/household types, each characterized by a parameter θ_m. Let ψ_m be the fraction of consumers of type m in the population. If there is no correlation between a consumer's type and the initial state, we can define the likelihood contribution for i as
l_i(Q_i; θ, ψ) = ∑_{m=1}^{M} l_i(Q_i; θ_m) ψ_m
Here, ψ = (ψ_1, ..., ψ_{M-1}). Obviously, the computational overhead increases in M.

Unobserved heterogeneity
Matters are more complicated if individuals are systematically in specific states depending on their type. For example, preference for a brand may be correlated with product experience or stockpiling of that brand. In that case, ψ_m(x_i1) = Pr(i is of type m | x_i1) ≠ ψ_m. An easy solution (Heckman 1981) is to form an auxiliary model for this probability, i.e. make ψ_m(x_i1; λ) a parametrically specified function with λ to be estimated. Example:
ψ_m(x_i1; λ) = exp(x_i1 λ_m) / (1 + ∑_{n=1}^{M-1} exp(x_i1 λ_n))  for m = 1, ..., M − 1
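The two pieces above fit together in a few lines. A toy sketch (the per-type likelihood values and the λ coefficient are made-up numbers, not a real model) of the latent-class likelihood with Heckman-style initial-state-dependent type shares:

```python
# Latent-class likelihood: l_i = sum_m psi_m * l_i(Q_i; theta_m), with the
# type shares psi_m modeled as an auxiliary logit in the initial state x_i1.
import math

def mixture_likelihood(type_likelihoods, type_shares):
    """l_i(Q_i; theta, psi) = sum_m l_i(Q_i; theta_m) * psi_m."""
    return sum(l * p for l, p in zip(type_likelihoods, type_shares))

def type_prob(x1, lambdas):
    """Auxiliary logit psi_m(x_i1; lambda) = exp(x1*lam_m) / (1 + sum_n exp(x1*lam_n)),
    m = 1, ..., M-1, with the last type as the base alternative."""
    expvals = [math.exp(x1 * lam) for lam in lambdas]
    denom = 1.0 + sum(expvals)
    return [e / denom for e in expvals] + [1.0 / denom]

l_by_type = [0.02, 0.10]        # hypothetical l_i(Q_i; theta_m) for M = 2 types
psi = type_prob(1.0, [0.5])     # type shares as a function of the initial state
l_i = mixture_likelihood(l_by_type, psi)
```

Dropping the dependence on x_i1 (constant ψ) recovers the simpler no-correlation case on the first slide.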

Continuous types
We now consider a more general form of consumer heterogeneity, which allows for a continuous distribution of types. Consumer i's parameter vector ϑ_i is a function of a common component θ and some idiosyncratic component ω_i, which is drawn from a distribution with density η(·). η is known, i.e., it does not depend on any parameters that are estimated. Example: θ = (µ, L), and ϑ_i = µ + L ω_i, ω_i ~ N(0, I).

Continuous types
Given (ω_i, θ), we can compute the choice-specific value functions, calculate the CCPs Pr{a_it | x_it; ω_i, θ}, and then obtain the likelihood contribution for individual i:
l_i(Q_i; θ) = ∫ l_i(Q_i; ω, θ) η(ω) dω
If the integral is high-dimensional, we need to calculate it by simulation (draws ω^(r)):
l_i(Q_i; θ) ≈ (1/R) ∑_{r=1}^{R} l_i(Q_i; ω^(r), θ)
The problem with this approach: instead of just re-solving for the value functions once when we change θ, we need to re-solve for the value function R times.

Change of variables and importance sampling
Ackerberg (2009) proposes a method based on a change of variables and importance sampling to overcome this computational challenge (see Hartmann 2006 for an application). Assume ϑ_i = ϑ(ω_i, θ) (we could additionally allow the parameters to be a function of household characteristics). ϑ_i fully summarizes the behavior (choice-specific value functions) of household i. Let p(ϑ | θ) be the density of ϑ_i. The support of p must be the same for each θ. In our example, ϑ_i ~ N(µ, LL^T).

Change of variables and importance sampling
Let g(·) be a density that is positive on the support of p. For example, we could choose g(·) = p(· | θ_0), where θ_0 is some arbitrary parameter. Then
l_i(Q_i; θ) = ∫ l_i(Q_i; ω, θ) η(ω) dω
            = ∫ l_i(Q_i; ϑ) p(ϑ | θ) dϑ
            = ∫ l_i(Q_i; ϑ) [p(ϑ | θ) / g(ϑ)] g(ϑ) dϑ
In the second line we use a change of variables; in the third line we use importance sampling.
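The reweighting idea can be sketched numerically under stated assumptions: the types ϑ^(r) are drawn once from a fixed proposal g = N(0, 2²), the expensive per-type likelihood l_i(Q_i; ϑ) is a placeholder function here (standing in for a full value-function solution), and changing the heterogeneity parameters (µ, σ) only changes the weights p/g.

```python
# Importance-sampling sketch of the Ackerberg (2009) reweighting approach:
# draw types once from a fixed proposal g, cache l_i per type, re-weight.
import math
import random

random.seed(0)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def l_i_given_type(theta):
    # Placeholder for l_i(Q_i; theta): in practice this step requires solving
    # the dynamic program for type theta, which is why doing it only R times matters.
    return 1.0 / (1.0 + theta ** 2)

R = 5000
draws = [random.gauss(0.0, 2.0) for _ in range(R)]   # theta^(r) ~ g, drawn once
l_cache = [l_i_given_type(th) for th in draws]       # value functions solved once

def sml_likelihood(mu, sigma):
    """Simulated l_i(Q_i; mu, sigma): only the weights p/g depend on (mu, sigma)."""
    total = 0.0
    for th, l in zip(draws, l_cache):
        weight = normal_pdf(th, mu, sigma) / normal_pdf(th, 0.0, 2.0)
        total += l * weight
    return total / R

l_hat = sml_likelihood(0.0, 1.0)   # candidate parameter: types ~ N(0, 1)
```

Calling `sml_likelihood` with a different (µ, σ) reuses `l_cache` unchanged, which is the computational saving the slides emphasize.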

Change of variables and importance sampling
We can now simulate the integral as follows:
l_i(Q_i; θ) ≈ (1/R) ∑_{r=1}^{R} l_i(Q_i; ϑ^(r)) p(ϑ^(r) | θ) / g(ϑ^(r))
Note that ϑ^(r) is drawn from g, a distribution that does not depend on the parameter vector θ. Hence, as we change θ, only the weights p(ϑ^(r) | θ)/g(ϑ^(r)) change, but not l_i! We only need to calculate the value functions R times. The intuition behind this approach: as we search for an optimal θ, we change only the weights on the different possible household types, not the households themselves. Based on the simulated likelihood function, we can define an SML (simulated maximum likelihood) estimator for θ.

Understanding sequential, forward-looking learning
Example: A firm launches a product, but is uncertain about its profitability. The firm's profits from the product are given by ϑ ∈ {ϑ_L, ϑ_H}: profits could be either negative, ϑ_L < 0, or positive, ϑ_H > 0. π_t: the prior probability that profits are negative, π_t = Pr{ϑ = ϑ_L}. Firm decisions a_t ∈ {0, 1}: a_t = 0 denotes that the firm scraps the product and receives the payoff 0; a_t = 1 denotes that the firm stays in the market and receives the payoff ϑ. Assumption: if the firm stays in the market, it observes profits and immediately learns the true value of ϑ.

Expected profit:

u_0(ρ_t) = 0
u_1(ρ_t) = E(θ | ρ_t) = ρ_t θ_L + (1 − ρ_t) θ_H

There are no latent payoff terms ε_jt in this model.

Suppose that the firm has information that the product is not profitable, that is,

E(θ | ρ_t) = ρ_t θ_L + (1 − ρ_t) θ_H < 0

What action should the firm take: scrap, or stay in the market?

Myopic vs. forward-looking decision making

If the firm only cares about current profits (β = 0), then it should scrap the product: u_1(ρ_t) < 0 = u_0(ρ_t).

But what if β > 0? Let's approach this by considering the impact of the current decision on future information, and the optimal decision that the firm can take based on that future information.

Choice-specific value functions

If the firm stays in the market and sells the product, it learns the true level of profits. Hence, in the next period, t + 1, either ρ_{t+1} = 0 or ρ_{t+1} = 1: no more uncertainty about the profit level.

For the case of certainty we can easily calculate the value function:

v(1) = 0
v(0) = θ_H / (1 − β)

Choice-specific value functions for an arbitrary ρ_t:

v_0(ρ_t) = 0
v_1(ρ_t) = u_1(ρ_t) + β E(v(ρ_{t+1}) | ρ_t, a_t = 1)
         = (ρ_t θ_L + (1 − ρ_t) θ_H) + β (ρ_t · 0 + (1 − ρ_t) · θ_H / (1 − β))
         = ρ_t θ_L + (1 − ρ_t) θ_H / (1 − β)

Optimal forward-looking decision making under learning

Keep the product in the market if and only if

v_1(ρ_t) = ρ_t θ_L + (1 − ρ_t) θ_H / (1 − β) > 0 = v_0(ρ_t)

This reduces to the myopic decision rule if β = 0.

Example: θ_L = −1 and θ_H = 1.
Static decision making: the product will be scrapped iff ρ_t ≥ 0.5.
Suppose the firm is forward-looking, β = 0.9. Then the product will be scrapped iff ρ_t ≥ 10/11 ≈ 0.909.
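The two cutoffs in the example can be checked numerically. A minimal sketch (the function name is ours): solving ρ θ_L + (1 − ρ) θ_H/(1 − β) = 0 for ρ gives the scrapping threshold h/(h − θ_L) with h = θ_H/(1 − β).

```python
def scrap_threshold(theta_L, theta_H, beta):
    # Cutoff belief: the firm stays iff
    #   rho * theta_L + (1 - rho) * theta_H / (1 - beta) > 0,
    # so it scraps once rho reaches h / (h - theta_L), h = theta_H / (1 - beta)
    h = theta_H / (1.0 - beta)
    return h / (h - theta_L)

print(scrap_threshold(-1.0, 1.0, 0.0))  # myopic firm (beta = 0): 0.5
print(scrap_threshold(-1.0, 1.0, 0.9))  # forward-looking firm: 10/11, about 0.909
```

Forward-looking firms keep an apparently unprofitable product longer, because staying generates information that a scrapped product never produces.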

Overview

The Estimation Problem; Structural vs. Reduced Form of the Model
Identification
Nested Fixed Point (NFP) Estimators
Unobserved State Variables
Example: The Bayesian Learning Model
Model Estimation in the Presence of Unobserved State Variables
Unobserved Heterogeneity
Postscript: Sequential vs. Myopic Learning
The MPEC Approach to ML Estimation
Example: Estimation of the Durable Goods Adoption Problem Using NFP and MPEC Approaches
Other Topics
References

Overview

Su and Judd (2012) propose an alternative approach to obtain the ML estimator for a dynamic decision process. Their method also works, and has additional advantages, for games.

They note that the nested fixed point approach is really only one special way of finding the solution of a more general constrained optimization problem: this is the MPEC (mathematical programming with equilibrium constraints) approach.

There will often be better algorithms to solve an MPEC problem, which allows for faster and/or more robust computation of the ML estimate compared to the NFP approach.

Implementation details

Let's be precise about the computational steps we take to calculate the likelihood function.

We calculate the expected value function, w(x; θ), in order to derive the choice probabilities, which allow us to match model predictions and observations.

On our computer, we will use some interpolation or approximation method to represent w. The representation will depend on a set of parameters, Λ: Λ represents the values of w at specific state points (interpolation), or the coefficients on basis functions (approximation). On our computer, Λ completely defines w, denoted w_Λ.

Implementation details

The expected value function satisfies w = Γ(w; θ). We can alternatively express this relationship as Λ = Γ(Λ; θ).

Value function iteration on our computer proceeds by computing the sequence Λ^(n+1) = Γ(Λ^(n); θ), given some starting value Λ^(0).

For each parameter vector θ, there is a unique Λ that satisfies Λ = Γ(Λ; θ). Denote this relationship by Λ = Λ(θ).
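The iteration Λ^(n+1) = Γ(Λ^(n); θ) can be sketched on a small discretized state space. This is a generic illustration, not the lecture's code: it assumes i.i.d. type-1 extreme value payoff shocks, so that Γ takes the log-sum-exp (social surplus) form, and all names (`gamma_operator`, `solve_w`, the arrays `u` and `P`) are ours.

```python
import numpy as np
from scipy.special import logsumexp

def gamma_operator(w, u, P, beta):
    """One application of the operator Lambda -> Gamma(Lambda; theta).

    u: (S, J) flow payoffs on the state grid
    P: (J, S, S) state transition matrix for each action
    Assumes i.i.d. type-1 extreme value shocks, so the expected value
    function has the log-sum-exp (social surplus) form.
    """
    # Choice-specific values v_j(x) = u_j(x) + beta * E[w(x') | x, j]
    v = u + beta * np.einsum('jsx,x->sj', P, w)
    return logsumexp(v, axis=1)

def solve_w(u, P, beta, tol=1e-10, max_iter=10_000):
    w = np.zeros(u.shape[0])                 # starting value Lambda^(0)
    for _ in range(max_iter):
        w_new = gamma_operator(w, u, P, beta)
        if np.max(np.abs(w_new - w)) < tol:  # fixed point Lambda = Gamma(Lambda; theta)
            return w_new
        w = w_new
    raise RuntimeError("value function iteration did not converge")
```

Since Γ is a contraction with modulus β, the sequence converges to the unique Λ(θ) from any starting value.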

Implementation details

Recall how we calculate the likelihood function: using the expected value function w_Λ, Λ = Λ(θ), calculate the choice-specific values,

v_j(x; θ) = u_j(x; θ) + β ∫ w(x′; θ) f(x′ | x, j; θ) dx′

Then calculate the CCPs.

The equation above presumes that we use w = Γ(w; θ) to calculate the choice-specific values. But we could alternatively use some arbitrary guess for w. Let Λ be the parameter vector that summarizes some arbitrary w, and let v_j(x; θ, Λ) be the corresponding choice-specific value function.

MPEC approach

We can use v_j(x; θ, Λ) to calculate the CCPs and then the augmented likelihood function, l(Q; θ, Λ).

This expression clearly shows what the likelihood function depends on:
The parameter values θ.
The expected value function w_Λ, describing how the decision maker behaves.

Our assumption of rational behavior imposes that the decision maker does not follow some arbitrary decision rule, but rather the rule corresponding to Λ = Γ(Λ; θ), denoted by Λ = Λ(θ).
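Given choice-specific values v_j(x; θ), the step from values to CCPs depends on the assumed shock distribution. A minimal sketch under the type-1 extreme value assumption that is standard in this literature (the function name `ccp` is ours):

```python
import numpy as np

def ccp(v):
    # v: array of choice-specific values, last axis indexing choices j.
    # Under i.i.d. type-1 extreme value shocks the CCPs have the logit form
    #   Pr(j | x) = exp(v_j(x)) / sum_k exp(v_k(x)).
    # Subtract the row maximum first for numerical stability.
    v = v - v.max(axis=-1, keepdims=True)
    e = np.exp(v)
    return e / e.sum(axis=-1, keepdims=True)
```

Evaluating the data's observed choices at these probabilities yields the (augmented) likelihood l(Q; θ, Λ).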

MPEC approach

We can thus express the likelihood estimation problem in its general form:

max_{(θ, Λ)} log(l(Q; θ, Λ))   s.t.   Λ − Γ(Λ; θ) = 0

Solve for the parameter vector (θ, Λ). The constraint is an equilibrium constraint, hence the term MPEC.

The nested fixed point algorithm is a special way of formulating this problem:

max_θ log(l(Q; θ)) ≡ log(l(Q; θ, Λ(θ)))

Note that both problem formulations define the same ML estimator.

Discussion

Benefits of MPEC estimation:
We avoid having to find an exact solution of the value function, Λ = Γ(Λ; θ), for each θ ⇒ speed advantage.
The MPEC formulation allows for more robust convergence to the solution, because derivatives (of the objective and constraint) are easier to compute than in the NFP approach.
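The equivalence of the two formulations can be illustrated with a deliberately tiny toy problem. Everything here is hypothetical: a scalar "model" with Γ(Λ; θ) = θ + βΛ, so that Λ(θ) = θ/(1 − β), and a quadratic stand-in for the log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

beta = 0.9

def loglik(Lam):
    # stand-in for log l(Q; theta, Lambda); peaks at Lambda = 5
    return -(Lam - 5.0) ** 2

# MPEC: search over (theta, Lambda) jointly, subject to the
# equilibrium constraint Lambda - Gamma(Lambda; theta) = 0
res_mpec = minimize(
    lambda z: -loglik(z[1]),
    x0=[0.0, 0.0],
    method="SLSQP",
    constraints=[{"type": "eq",
                  "fun": lambda z: z[1] - (z[0] + beta * z[1])}],
)

# NFP: for each theta, impose the exact fixed point
# Lambda(theta) = theta / (1 - beta) inside the objective,
# and search over theta alone
res_nfp = minimize(lambda t: -loglik(t[0] / (1.0 - beta)), x0=[0.0])
```

Both runs recover the same estimator (θ = 0.5 with Λ = 5 in this toy), illustrating that NFP is just one particular way of solving the MPEC problem.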

Discussion

Isn't a large state space, i.e. a Λ with many elements (say 100,000), an obvious obstacle to using the MPEC approach?

Good solvers, such as SNOPT or KNITRO, can handle such problems. This will require a sparse Jacobian of the constraint, i.e. ∇_Λ Γ(Λ; θ) needs to be sparse, and it will require that we are able to compute the Jacobian, ∇_Λ Γ(Λ; θ), which is an L × L matrix, where L is the number of elements in Λ.

Discussion

How do we calculate the Jacobian of the constraint?

Analytic derivatives: can be cumbersome and error-prone.
Automatic differentiation (AD), available in C, C++, Fortran, and MATLAB: virtually no extra programming effort. Works fine for small- to medium-scale problems, but may be difficult to implement for large-scale problems (computer speed/memory requirements) given currently available AD software.
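Under an i.i.d. type-1 extreme value shock assumption (ours, for illustration; not necessarily the lecture's exact setup), the constraint Jacobian even has a closed form: ∂Γ_s/∂Λ_x = β Σ_j p(j|s) P_j(s, x), where p(j|s) are the implied choice probabilities, so each row of the Jacobian inherits the sparsity of the transition matrices. A hypothetical sketch:

```python
import numpy as np
from scipy.special import logsumexp

def gamma(w, u, P, beta):
    # Log-sum-exp Bellman operator on a state grid (type-1 EV shocks assumed);
    # u: (S, J) flow payoffs, P: (J, S, S) transitions per action
    v = u + beta * np.einsum('jsx,x->sj', P, w)
    return logsumexp(v, axis=1)

def constraint_jacobian(w, u, P, beta):
    """Jacobian of the constraint Lambda - Gamma(Lambda; theta) w.r.t. Lambda.

    For the log-sum-exp operator, d Gamma_s / d w_x = beta * sum_j p(j|s) P[j,s,x],
    with p(j|s) the implied choice probabilities; a row has nonzeros only for
    states x reachable from s, so the matrix is sparse when transitions are local.
    """
    v = u + beta * np.einsum('jsx,x->sj', P, w)
    v = v - v.max(axis=1, keepdims=True)
    p = np.exp(v)
    p /= p.sum(axis=1, keepdims=True)       # CCPs p(j|s)
    S = u.shape[0]
    return np.eye(S) - beta * np.einsum('sj,jsx->sx', p, P)
```

Analytic rows like these (or their AD equivalent) are what a large-scale solver would be given, together with the sparsity pattern.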

Example: Estimation of the Durable Goods Adoption Problem Using NFP and MPEC Approaches

Overview

Data are generated from the durable goods adoption model with two products discussed in Part 2 of the lecture.

We solve for w using bilinear interpolation.

Exercise: Re-write the code using Chebyshev approximation, which is much more efficient in this example.

Code documentation

Main.m:
Defines the model parameters and the price process parameters.
Sets values for the interpolator and initializes the Gauss-Hermite quadrature information.
If create_data = 1, solves the decision process given the parameters and simulates a new data set: calls simulate_price_process to simulate prices and simulate_adoptions to simulate the corresponding adoption decisions.
Plots aggregate adoption data based on the output of calculate_sales.
Use the script display_dp_solution to show choice probabilities and relative choice-specific value functions, v_j(x) − v_0(x).

Code documentation

Main.m contains three estimation approaches:
1. Use the NFP estimator and MATLAB's built-in Nelder-Mead simplex search algorithm.
2. Use the TOMLAB package to find the NFP estimator; numerical gradients are used.
3. Use TOMLAB and the MPEC approach, which allows for the use of automatic differentiation via the TOMLAB/MAD module.

TOMLAB can be obtained at http://tomopt.com/tomlab/ (ask for a trial license).

Code documentation

Solution of the optimal decision rule:
Based on iteration on the integrated value function to solve for the expected value function.
Bellman_equation_rhs updates the current guess of the expected value function.
Bellman_equation_rhs uses interpolate_2d when taking the expectation of the future value.

Code documentation

NFP algorithm:
Calls log_likelihood, which solves for the expected value function to calculate the CCPs.

MPEC algorithm:
Calls log_likelihood_augmented with θ, w_Λ supplied as parameters.
Bellman_equation_constraint implements the equilibrium constraint, Λ − Γ(Λ; θ) = 0.

Other Topics

Omitted topics

This lecture omits two recent estimation approaches:
1. Two-step estimators, which attempt to alleviate the computational burden inherent in the NFP and MPEC estimation approaches (e.g. Pesendorfer and Schmidt-Dengler 2008).
2. Bayesian estimation using a new algorithm that combines MCMC and value function iteration (Imai, Jain, and Ching 2009; Norets 2009).

References

Estimation and Identification

Abbring, J. H. (2010): "Identification of Dynamic Discrete Choice Models," Annual Review of Economics, 2, 367-394
Arcidiacono, P., and P. E. Ellickson (2011): "Practical Methods for Estimation of Dynamic Discrete Choice Models," Annual Review of Economics, 3, 363-394
Ackerberg, D. A. (2009): "A New Use of Importance Sampling to Reduce Computational Burden in Simulation Estimation," Quantitative Marketing and Economics, 7(4), 343-376
Aguirregabiria, V., and P. Mira (2010): "Dynamic Discrete Choice Structural Models: A Survey," Journal of Econometrics, 156, 38-67
Dubé, J.-P., G. J. Hitsch, and P. Jindal (2013): "The Joint Identification of Utility and Discount Functions from Stated Choice Data: An Application to Durable Goods Adoption," manuscript
Fang, H., and Y. Wang (2013): "Estimating Dynamic Discrete Choice Models with Hyperbolic Discounting, with an Application to Mammography Decisions," manuscript
Heckman, J. (1981): "The Incidental Parameters Problem and the Problem of Initial Conditions in Estimating a Discrete Time-Discrete Data Stochastic Process," in C. Manski and D. L. McFadden (eds.): Structural Analysis of Discrete Data with Econometric Applications. MIT Press

Estimation and Identification (cont.)

Hotz, J. V., and R. A. Miller (1993): "Conditional Choice Probabilities and the Estimation of Dynamic Models," Review of Economic Studies, 60, 497-529
Magnac, T., and D. Thesmar (2002): "Identifying Dynamic Discrete Decision Processes," Econometrica, 70(2), 801-816
Reiss, P. C., and F. A. Wolak (2007): "Structural Econometric Modeling: Rationales and Examples from Industrial Organization," in Handbook of Econometrics, Vol. 6A, ed. by J. J. Heckman and E. E. Leamer. Elsevier B. V.
Rust, J. (1987): "Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher," Econometrica, 55(5), 999-1033
Rust, J. (1994): "Structural Estimation of Markov Decision Processes," in Handbook of Econometrics, Vol. IV, ed. by R. F. Engle and D. L. McFadden. Amsterdam and New York: North-Holland
Su, C.-L., and K. L. Judd (2012): "Constrained Optimization Approaches to Estimation of Structural Models," Econometrica, 80(5), 2213-2230
Yao, S., C. F. Mela, J. Chiang, and Y. Chen (2012): "Determining Consumers' Discount Rates with Field Studies," Journal of Marketing Research, 49(6), 822-841

Other Estimation Approaches (Two-Step, Bayesian, ...)

Aguirregabiria, V., and P. Mira (2002): "Swapping the Nested Fixed Point Algorithm: A Class of Estimators for Discrete Markov Decision Models," Econometrica, 70(4), 1519-1543
Aguirregabiria, V., and P. Mira (2007): "Sequential Estimation of Dynamic Discrete Games," Econometrica, 75(1), 1-53
Arcidiacono, P., and R. Miller (2008): "CCP Estimation of Dynamic Discrete Choice Models with Unobserved Heterogeneity," manuscript
Bajari, P., L. Benkard, and J. Levin (2007): "Estimating Dynamic Models of Imperfect Competition," Econometrica, 75(5), 1331-1370
Imai, S., N. Jain, and A. Ching (2009): "Bayesian Estimation of Dynamic Discrete Choice Models," Econometrica, 77(6), 1865-1899
Norets, A. (2009): "Inference in Dynamic Discrete Choice Models with Serially Correlated Unobserved State Variables," Econometrica, 77(5), 1665-1682
Pesendorfer, M., and P. Schmidt-Dengler (2008): "Asymptotic Least Squares Estimators for Dynamic Games," Review of Economic Studies, 75, 901-928

Other Applications

Erdem, T., and M. P. Keane (1996): "Decision-Making Under Uncertainty: Capturing Dynamic Brand Choice Processes in Turbulent Consumer Goods Markets," Marketing Science, 15(1), 1-20
Hartmann, W. R. (2006): "Intertemporal Effects of Consumption and Their Implications for Demand Elasticity Estimates," Quantitative Marketing and Economics, 4, 325-349
Hartmann, W. R., and V. B. Viard (2008): "Do Frequency Reward Programs Create Switching Costs? A Dynamic Structural Analysis of Demand in a Reward Program," Quantitative Marketing and Economics, 6(2), 109-137
Hitsch, G. J. (2006): "An Empirical Model of Optimal Dynamic Product Launch and Exit Under Demand Uncertainty," Marketing Science, 25(1), 25-50
Misra, S., and H. Nair (2011): "A Structural Model of Sales-Force Compensation Dynamics: Estimation and Field Implementation," Quantitative Marketing and Economics, 9, 211-257