Panel Data Models. Chapter 5. Financial Econometrics. Michael Hauser WS15/16


1 / 63 Panel Data Models Chapter 5 Financial Econometrics Michael Hauser WS15/16

2 / 63 Content Data structures: time series, cross-sectional data, panel data, pooled data. Static linear panel data models: fixed effects, random effects, estimation, testing. Dynamic panel data models: estimation.

Data structures 3 / 63

4 / 63 Data structures We distinguish the following data structures. Time series data: {x_t, t = 1, ..., T}. Univariate series, e.g. a price series: its path over time is modeled; the path may also depend on third variables. Multivariate series, e.g. several price series: their individual as well as their common dynamics are modeled; third variables may be included. Cross-sectional data are observed at a single point in time for several individuals, countries, assets, etc.: x_i, i = 1, ..., N. The interest lies in modeling the heterogeneity across individuals.

5 / 63 Data structures: Pooling Pooling data refers to two or more independent data sets of the same type. Pooled time series: We observe e.g. return series of several sectors, which are assumed to be independent of each other, together with explanatory variables. The number of sectors, N, is usually small. Observations are viewed as repeated measures at each point in time, so parameters can be estimated with higher precision due to the increased sample size.

6 / 63 Data structures: Pooling Pooled cross sections: Mostly this type of data arises in surveys, where people are asked e.g. about their attitudes to political parties. The survey is repeated T times, e.g. every week before an election. T is usually small. So we have several cross sections, but the persons asked are chosen randomly each time. Hardly any person of one cross section is a member of another one; the cross sections are independent. Only overall questions can be answered, like the attitudes of men versus women, but no individual (even anonymous) paths can be identified.

7 / 63 Data structures: Panel data A panel data set (also longitudinal data) has both a cross-sectional and a time series dimension, where all cross section units are observed during the whole time period: x_it, i = 1, ..., N, t = 1, ..., T. T is usually small. We can distinguish between balanced and unbalanced panels. Example for a balanced panel: The Mikrozensus in Austria is a household (hh) survey with the same sample size of 22,500 households each quarter. Each hh has to record its consumption expenditures for 5 quarters, so each quarter 4,500 households enter/leave the Mikrozensus. This is a balanced panel.

8 / 63 Data structures: Panel data A special case of a balanced panel is a fixed panel: here we require that all individuals are present in all periods. An unbalanced panel is one where individuals are observed a different number of times, e.g. because of missing values. We are concerned only with balanced panels. In general panel data models are more efficient than pooled cross sections, since observing one individual for several periods reduces the variance compared to repeated random selections of individuals.

9 / 63 Pooling time series: estimation We consider T relatively large and N small. y_it = α + β x_it + u_it In case of heteroscedastic errors, σ_i^2 ≠ σ^2 (= σ_u^2), individuals with large errors will dominate the fit, so a correction is necessary. It is similar to GLS and can be performed in 2 steps. First, estimate under the assumption of constant variance and calculate for each individual i the residual variance s_i^2: s_i^2 = (1/T) Σ_t (y_it − a − b x_it)^2

10 / 63 Pooling time series: estimation Secondly, normalize the data with s_i and estimate (y_it / s_i) = α (1/s_i) + β (x_it / s_i) + ũ_it ũ_it = u_it / s_i has (possibly) the required constant variance, i.e. it is homoscedastic. Remark: V(ũ_it) = V(u_it / s_i) ≈ 1. Dummies may be used for different cross-sectional intercepts.
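A minimal R sketch of this 2-step procedure on simulated pooled time series; the data-generating process and all variable names below are purely illustrative, not from the lecture.
set.seed(1)
N <- 5; T <- 200
d <- expand.grid(i = 1:N, t = 1:T)
sig <- runif(N, 0.5, 3)                          # individual error standard deviations
d$x <- rnorm(N * T)
d$y <- 1 + 0.5 * d$x + rnorm(N * T, sd = sig[d$i])
ols <- lm(y ~ x, data = d)                       # step 1: OLS assuming constant variance
s_i <- tapply(residuals(ols), d$i, function(e) sqrt(mean(e^2)))   # individual residual sds
d$w <- s_i[d$i]
wls <- lm(I(y / w) ~ 0 + I(1 / w) + I(x / w), data = d)           # step 2: normalized regression
summary(wls)                                     # coefficient on I(1/w) is alpha, on I(x/w) is beta
The same estimates can also be obtained with lm(y ~ x, data = d, weights = 1/d$w^2); the explicit normalization above just mirrors the formula on the slide.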

Panel data modeling 11 / 63

12 / 63 Example Say we observe the weekly returns of 1000 stocks in two consecutive weeks. The pooling model is appropriate if the stocks are chosen randomly in each period. The panel model applies if the same stocks are observed in both periods. We could ask what the characteristics of stocks with high/low returns are in general. For panel models we could further analyze whether a stock with a high/low return in the first period also has a high/low return in the second.

13 / 63 Panel data model The standard static model with i = 1, ..., N, t = 1, ..., T is y_it = β_0 + x_it'β + ϵ_it x_it is a K-dimensional vector of explanatory variables, without a constant term. β_0, the intercept, is independent of i and t. β, a (K × 1) vector of slopes, is independent of i and t. ϵ_it, the error, varies over i and t. Individual characteristics which do not vary over time, z_i, may be included: y_it = β_0 + x_it'β_1 + z_i'β_2 + ϵ_it

14 / 63 Two problems: endogeneity and autocorrelation in the errors Consistency/exogeneity: Assuming iid errors and applying OLS we get consistent estimates if E(ϵ_it) = 0 and E(x_it ϵ_it) = 0, i.e. if the x_it are weakly exogenous. Autocorrelation in the errors: Since individual i is repeatedly observed (contrary to pooled data), Corr(ϵ_i,s, ϵ_i,t) ≠ 0 for s ≠ t is very likely. Then standard errors are misleading (similar to autocorrelated residuals) and OLS is inefficient (cp. GLS).

15 / 63 Common solution for individual unobserved heterogeneity Unobserved (constant) individual factors, i.e. if not all z_i variables are available, may be captured by α_i. E.g. we decompose ϵ_it as ϵ_it = α_i + u_it with u_it ~ iid(0, σ_u^2) u_it has mean 0, is homoscedastic and not serially correlated. In this decomposition all individual characteristics - including all observed ones, z_i'β_2, as well as all unobserved ones which do not vary over time - are summarized in the α_i's. We distinguish fixed effects (FE) and random effects (RE) models.

16 / 63 Fixed effects model, FE Fixed effects model, FE: the α_i are individual intercepts (fixed for given N). y_it = α_i + x_it'β + u_it No overall intercept is (usually) included in the model. Under FE, consistency does not require that the individual intercepts (whose coefficients are the α_i's) and x_it are uncorrelated. Only E(x_it u_it) = 0 must hold. There are N − 1 additional parameters for capturing the individual heterogeneity.

17 / 63 Random effects model, RE Random effects model, RE: α_i ~ iid(0, σ_α^2) y_it = β_0 + x_it'β + α_i + u_it, u_it ~ iid(0, σ_u^2) The α_i's are random variables with the same variance. The value α_i is specific for individual i. The α's of different individuals are independent, have a mean of zero, and their distribution is assumed to be not too far away from normality. The overall mean is captured in β_0. α_i is time invariant and homoscedastic across individuals. There is only one additional parameter, σ_α^2. Only α_i contributes to Corr(ϵ_i,s, ϵ_i,t): α_i determines both ϵ_i,s and ϵ_i,t.

18 / 63 RE: some discussion Consistency: As long as E[x_it ϵ_it] = E[x_it (α_i + u_it)] = 0, i.e. the x_it are uncorrelated with α_i and u_it, the explanatory variables are exogenous and the estimates are consistent. There are relevant cases where this exogeneity assumption is likely to be violated: e.g. when modeling investment decisions, the firm specific heterogeneity α_i might correlate with the cost of capital of firm i (an explanatory variable). The resulting inconsistency can be avoided by considering a FE model instead. Estimation: The model can be estimated by (feasible) GLS, which is in general more efficient than OLS.

19 / 63 The static linear model: the fixed effects model 3 estimators: least squares dummy variable estimator, LSDV; within estimator, FE; first difference estimator, FD

20 / 63 [LSDV] Fixed effects model: LSDV estimator We can write the FE model using N dummy variables indicating the individuals: y_it = Σ_{j=1}^{N} α_j d_it^j + x_it'β + u_it, u_it ~ iid(0, σ_u^2) with dummies d^j, where d_it^j = 1 if i = j, and 0 otherwise. The parameters can be estimated by OLS. The implied estimator for β is called the LS dummy variable estimator, LSDV. Instead of exploding computer storage with a large number of dummy variables for large N, the within estimator is used.
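A minimal R sketch, assuming the plm package and its bundled Grunfeld data (the data set used in the exercises); it contrasts the LSDV regression with plm's within estimator.
library(plm)
data("Grunfeld", package = "plm")
lsdv <- lm(inv ~ 0 + factor(firm) + value + capital, data = Grunfeld)   # N firm dummies, no overall intercept
fe   <- plm(inv ~ value + capital, data = Grunfeld,
            index = c("firm", "year"), model = "within")                # within/FE estimator
coef(lsdv)[c("value", "capital")]                                       # identical to coef(fe)
fixef(fe)                                                               # the N individual intercepts alpha_i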

21 / 63 [LSDV] Testing the significance of the group effects Apart from t-tests for single α_i (which are hardly used), we can test by an F-test whether all individuals have the same intercept against the alternative that some have different intercepts. The pooled model (all intercepts are restricted to be the same), H_0, is y_it = β_0 + x_it'β + u_it; the fixed effects model (intercepts may be different, are unrestricted), H_A, is y_it = α_i + x_it'β + u_it, i = 1, ..., N. The F ratio for comparing the pooled with the FE model is F_{N−1, NT−N−K} = [(R^2_LSDV − R^2_Pooled)/(N − 1)] / [(1 − R^2_LSDV)/(NT − N − K)]
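A sketch of this F-test in R via plm's pFtest, again on the Grunfeld data (plm package assumed).
library(plm)
data("Grunfeld", package = "plm")
pooled <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "pooling")
fe     <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")
pFtest(fe, pooled)   # F test of H0: equal intercepts, F(N-1, NT-N-K)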

22 / 63 [FE] Within transformation, within estimator The FE estimator for β is obtained if we use the deviations from the individual means as variables. The model in individual means, with ȳ_i = (1/T) Σ_t y_it, ᾱ_i = α_i and ū_i = (1/T) Σ_t u_it, is ȳ_i = α_i + x̄_i'β + ū_i Subtracting this from y_it = α_i + x_it'β + u_it gives y_it − ȳ_i = (x_it − x̄_i)'β + (u_it − ū_i) where the intercepts vanish. Here the deviation of y_it from ȳ_i is explained (not the difference between different individuals, ȳ_i and ȳ_j). The estimator for β is called the within or FE estimator. Within refers to the variability (over time) among observations of individual i.

23 / 63 [FE] Within/FE estimator β̂_FE The within/FE estimator is β̂_FE = [ Σ_i Σ_t (x_it − x̄_i)(x_it − x̄_i)' ]^{-1} Σ_i Σ_t (x_it − x̄_i)(y_it − ȳ_i) This expression is identical to the well known formula β̂ = (X'X)^{-1} X'y applied to the data demeaned with respect to individual i, with N individuals and T repeated observations each.
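To make the within transformation concrete, a small R sketch (Grunfeld data, plm package assumed) that demeans by hand and compares the result with plm's within estimator.
library(plm)
data("Grunfeld", package = "plm")
G <- Grunfeld
dm <- function(v, g) v - ave(v, g)                         # deviation from individual means
G$inv_w     <- dm(G$inv, G$firm)
G$value_w   <- dm(G$value, G$firm)
G$capital_w <- dm(G$capital, G$firm)
beta_within <- coef(lm(inv_w ~ 0 + value_w + capital_w, data = G))
beta_plm    <- coef(plm(inv ~ value + capital, data = Grunfeld,
                        index = c("firm", "year"), model = "within"))
cbind(beta_within, beta_plm)                               # identical point estimates
# note: the naive lm standard errors on demeaned data ignore the N estimated means;
# use the LSDV or plm output for V(beta_FE), cf. the following slides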

24 / 63 [FE] Finite sample properties of β̂_FE Finite samples: Unbiasedness: if all x_it are independent of all u_js, i.e. strictly exogenous. Normality: if in addition u_it is normal.

25 / 63 [FE] Asymptotic properties of β̂_FE Asymptotically: Consistency wrt N → ∞, T fixed: if E[(x_it − x̄_i) u_it] = 0, i.e. both x_it and x̄_i are uncorrelated with the error u_it. As x̄_i = (1/T) Σ_t x_it, this implies that x_it is strictly exogenous: E[x_is u_it] = 0 for all s, t. So x_it must not depend on current, past or future values of the error term of individual i. This excludes lagged dependent variables and any x_it which depends on the history of y. Even large N does not mitigate these violations. Asymptotic normality: under consistency and weak conditions on u.

26 / 63 [FE] Properties of the N intercepts α̂_i, and V(β̂_FE) Consistency of α̂_i wrt T → ∞: if E[(x_it − x̄_i) u_it] = 0. There is no convergence wrt N, even if N gets large; cp. ȳ_i = (1/T) Σ_t y_it. The estimates for the N intercepts α̂_i are simply α̂_i = ȳ_i − x̄_i'β̂_FE Reliable estimates for V(β̂_FE) are obtained from the LSDV model. A consistent estimate for σ_u^2 is σ̂_u^2 = [1/(NT − N − K)] Σ_i Σ_t û_it^2 with û_it = y_it − α̂_i − x_it'β̂_FE

27 / 63 [FD] The first difference, FD, estimator for the FE model An alternative way to eliminate the individual effects α_i is to take first differences (wrt time) of the FE model: y_it − y_i,t−1 = (x_it − x_i,t−1)'β + (u_it − u_i,t−1) or Δy_it = Δx_it'β + Δu_it Here also, all variables which are only individual specific - they do not change with t - drop out. Estimation with OLS gives the first-difference estimator β̂_FD = [ Σ_i Σ_{t=2}^{T} Δx_it Δx_it' ]^{-1} Σ_i Σ_{t=2}^{T} Δx_it Δy_it
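A short R sketch comparing the FD and within estimators on the Grunfeld data (plm package assumed).
library(plm)
data("Grunfeld", package = "plm")
fd <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "fd")
fe <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")
cbind(FD = coef(fd)[c("value", "capital")], FE = coef(fe))   # similar, but not identical, slopes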

28 / 63 [FD] Properties of the FD estimator Consistency for N → ∞: if E[Δx_it Δu_it] = 0. This is slightly less demanding than for the within estimator, which is based on strict exogeneity. FD allows e.g. correlation between x_it and u_i,t−2. The FD estimator is slightly less efficient than the FE estimator, as Δu_it exhibits serial correlation even if the u_it are uncorrelated. FD loses one time period for each i. FE loses one degree of freedom for each i by using ȳ_i, x̄_i.

The static linear model: estimation of the random effects model 29 / 63

30 / 63 Estimation of the random effects model y_it = β_0 + x_it'β + α_i + u_it, u_it ~ iid(0, σ_u^2), α_i ~ iid(0, σ_α^2) where (α_i + u_it) is an error consisting of 2 components: an individual specific component which does not vary over time, α_i, and a remainder which is uncorrelated wrt i and t, u_it. α_i and u_it are mutually independent, and independent of all x_js. Simple OLS does not take this special error structure into account, so GLS is used.

31 / 63 [RE] GLS estimation In the following we stack the errors of individual i, α_i and u_it, i.e. we put each of them into a column vector. (α_i + u_it) then reads α_i ι_T + u_i where α_i is constant for individual i, ι_T = (1, ..., 1)' is a (T × 1) vector of ones, u_i = (u_i1, ..., u_iT)' is a vector collecting all u_it's for individual i, and I_T is the T-dimensional identity matrix. Since the errors of different individuals are independent, the covariance matrix consists of N identical blocks on the diagonal. Block i is V(α_i ι_T + u_i) = Ω = σ_α^2 ι_T ι_T' + σ_u^2 I_T Remark: The inverse of a block-diagonal matrix is a block-diagonal matrix with the inverse blocks on the diagonal, so it is enough to consider the individual blocks.

32 / 63 [RE] GLS estimation GLS corresponds to premultiplying the vectors (y_i1, ..., y_iT)', etc. by Ω^{-1/2}, where Ω^{-1} = σ_u^{-2} [ I_T − (σ_α^2 / (σ_u^2 + T σ_α^2)) ι_T ι_T' ] and ψ = σ_u^2 / (σ_u^2 + T σ_α^2) Remark: The GLS estimator for the classical regression model y = Xβ + u, where u ~ (0, Ω) is heteroscedastic, is β̂_GLS = (X'Ω^{-1}X)^{-1} X'Ω^{-1} y

33 / 63 [RE] GLS estimator The GLS estimator can be written as β̂_GLS = [ Σ_i Σ_t (x_it − x̄_i)(x_it − x̄_i)' + ψ T Σ_i (x̄_i − x̄)(x̄_i − x̄)' ]^{-1} [ Σ_i Σ_t (x_it − x̄_i)(y_it − ȳ_i) + ψ T Σ_i (x̄_i − x̄)(ȳ_i − ȳ) ] If ψ = 0 (σ_u^2 = 0), the GLS estimator is equal to the FE estimator, as if N individual intercepts were included. If ψ = 1 (σ_α^2 = 0), the GLS estimator becomes the OLS estimator; only 1 overall intercept is included, as in the pooled model. If T → ∞ then ψ → 0, i.e. the FE and RE estimators for β are equivalent for large T (but not for T fixed and N → ∞). Remark: (Σ_i Σ_t + Σ_i T)/(NT) is finite as N → ∞ or T → ∞ or both.
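A minimal R sketch of feasible GLS (random effects) estimation with plm on the Grunfeld data; note that plm's summary reports the estimated variance components and a quasi-demeaning parameter theta derived from them, rather than ψ itself.
library(plm)
data("Grunfeld", package = "plm")
re <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "random")
summary(re)   # variance components sigma_u^2, sigma_alpha^2 and the quasi-demeaning parameter theta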

34 / 63 Interpretation of within and between Within refers to the variation within one individual (over time), between measures the variation between the individuals. The within and between components (y_it − ȳ) = (y_it − ȳ_i) + (ȳ_i − ȳ) are orthogonal, so Σ_i Σ_t (y_it − ȳ)^2 = Σ_i Σ_t (y_it − ȳ_i)^2 + T Σ_i (ȳ_i − ȳ)^2 The variance of y may be decomposed into the sum of the variance within and the variance between, with ȳ_i = (1/T) Σ_t y_it and ȳ = (1/(NT)) Σ_i Σ_t y_it.

35 / 63 [RE] Properties of the GLS estimator GLS is unbiased if the x's are independent of all u_it and α_i. The GLS will in general be more efficient than OLS under the RE assumptions. Consistency for N → ∞ (T fixed, or T → ∞): if E[(x_it − x̄_i) u_it] = 0 and E[x̄_i α_i] = 0 hold. Under weak conditions (errors need not be normal) the feasible GLS is asymptotically normal.

Comparison and testing of FE and RE 36 / 63

37 / 63 Interpretation: FE and RE Fixed effects: The distribution of y_it is seen conditional on x_it and the individual dummies d_i: E[y_it | x_it, d_i] = x_it'β + α_i This is plausible if the individuals are not a random draw, like in samples of large companies, countries, sectors. Random effects: The distribution of y_it is not conditional on single individual characteristics. Arbitrary individual effects have a fixed variance. Conclusions wrt a population are drawn: E[y_it | x_it] = x_it'β + β_0 If the d_i are expected to be correlated with the x's, FE is preferred. RE only increases efficiency if exogeneity is guaranteed.

38 / 63 Interpretation: FE and RE (T fixed) The FE estimator β̂_FE is consistent (N → ∞) and efficient under the FE model assumptions. It is consistent, but not efficient, under the RE model assumptions, as the correlation structure of the errors is not taken into account (it is replaced only by different intercepts). As the error variance is not estimated consistently, the t-values are not correct. The RE estimator is consistent (N → ∞) and efficient under the assumptions of the RE model. It is not consistent under the FE assumptions, as the true explanatory variables (d_i, x_it) are correlated with (α_i + u_it) [with ψ̂ ≠ 0, and biased for fixed T].

39 / 63 Hausman test Hausman tests the H_0 that x_it and α_i are uncorrelated. We therefore compare two estimators: one that is consistent under both hypotheses, and one that is consistent (and efficient) only under the null. A significant difference between both indicates that H_0 is unlikely to hold. H_0 is the RE model y_it = β_0 + x_it'β + α_i + u_it; H_A is the FE model y_it = α_i + x_it'β + u_it. β̂_RE is consistent (and efficient) under H_0, but not under H_A. β̂_FE is consistent under both H_0 and H_A.

40 / 63 Test FE against RE: Hausman test We consider the difference of both estimators, β̂_FE − β̂_RE. If it is large, we reject the null. Since β̂_RE is efficient under H_0, it holds that V(β̂_FE − β̂_RE) = V(β̂_FE) − V(β̂_RE) The Hausman statistic under H_0 is (β̂_FE − β̂_RE)' [V̂(β̂_FE) − V̂(β̂_RE)]^{-1} (β̂_FE − β̂_RE) ~ asy. χ^2(K) If in finite samples [V̂(β̂_FE) − V̂(β̂_RE)] is not positive definite, testing is performed only for a subset of β.
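A sketch of the Hausman test in R with plm's phtest (Grunfeld data and plm package assumed).
library(plm)
data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")
re <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "random")
phtest(fe, re)   # chi^2(K) statistic; a small p-value rejects H0 (RE) in favour of FE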

41 / 63 Comparison via goodness-of-fit, R^2 wrt β̂ We are often interested in a distinction between the within R^2 and the between R^2. The within R^2 for any arbitrary estimator is given by R^2_within(β̂) = Corr^2[ŷ_it − ŷ_i, y_it − ȳ_i] and the between R^2 by R^2_between(β̂) = Corr^2[ŷ_i, ȳ_i] where ŷ_it = x_it'β̂, ŷ_i = x̄_i'β̂ and ŷ_it − ŷ_i = (x_it − x̄_i)'β̂. The overall R^2 is R^2_overall(β̂) = Corr^2[ŷ_it, y_it]
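These three R^2's can be computed directly as squared correlations; a sketch in R for the FE estimates on the Grunfeld data (plm package assumed).
library(plm)
data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")
G  <- Grunfeld
G$yhat <- as.numeric(as.matrix(G[, c("value", "capital")]) %*% coef(fe))   # x_it' beta_hat
r2_within  <- cor(G$yhat - ave(G$yhat, G$firm), G$inv - ave(G$inv, G$firm))^2
r2_between <- cor(tapply(G$yhat, G$firm, mean), tapply(G$inv, G$firm, mean))^2
r2_overall <- cor(G$yhat, G$inv)^2
c(within = r2_within, between = r2_between, overall = r2_overall)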

42 / 63 Goodness-of-fit, R^2 The usual R^2 is valid for the comparison of the pooled model estimated by OLS and the FE model. The comparison of the FE and RE models via R^2 is not valid, as in the FE model the α_i's are considered as explanatory variables, while in the RE model they belong to the unexplained error. Comparison via R^2 should be done only within the same class of models and estimators.

43 / 63 Testing for autocorrelation A generalization of the Durbin-Watson statistic for autocorrelation of order 1 can be formulated: DW_p = Σ_i Σ_{t=2}^{T} (û_it − û_i,t−1)^2 / Σ_i Σ_t û_it^2 The critical values depend on T, N and K. For T in [6, 10] and K in [3, 9] the following approximate lower and upper bounds can be used: N = 100: d_L = 1.86, d_U = 1.89; N = 500: d_L = 1.94, d_U = 1.95; N = 1000: d_L = 1.96, d_U = 1.97.
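In R, a panel Durbin-Watson test is available as plm's pdwtest (cf. exercise 6G); a minimal sketch on the Grunfeld data, with the Breusch-Godfrey/Wooldridge test as an alternative.
library(plm)
data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")
pdwtest(fe)   # panel Durbin-Watson test for AR(1) idiosyncratic errors
pbgtest(fe)   # Breusch-Godfrey/Wooldridge test for serial correlation in panel models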

44 / 63 Testing for heteroscedasticity A generalization of the Breusch-Pagan test is applicable. We test whether σ_u^2 depends on a set of J third variables z: V(u_it) = σ^2 h(z_it'γ) where the function h(.) satisfies h(0) = 1 and h(.) > 0. The null hypothesis is γ = 0. The N(T−1) multiple of the R_u^2 of the auxiliary regression û_it^2 = σ^2 h(z_it'γ) + v_it is under H_0 asymptotically χ^2(J) distributed with J degrees of freedom: N(T−1) R_u^2 ~a χ^2(J)
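A hand-rolled sketch of this auxiliary-regression test in R; the choice of z variables below (the regressors themselves) is purely illustrative, and the code assumes the Grunfeld data are sorted by firm and year so that the residuals align with the rows.
library(plm)
data("Grunfeld", package = "plm")
fe    <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")
uhat2 <- as.numeric(residuals(fe))^2                         # squared FE residuals
aux   <- lm(uhat2 ~ value + capital, data = Grunfeld)        # illustrative z = (value, capital), J = 2
NT1   <- nrow(Grunfeld) - length(unique(Grunfeld$firm))      # N(T-1)
stat  <- NT1 * summary(aux)$r.squared
pchisq(stat, df = 2, lower.tail = FALSE)                     # p-value under H0: gamma = 0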

Dynamic panel data models 45 / 63

46 / 63 Dynamic panel data models The dynamic model with one lagged dependent variable and without exogenous variables, |γ| < 1, is y_it = γ y_i,t−1 + α_i + u_it, u_it ~ iid(0, σ_u^2)

47 / 63 Endogeneity problem Here y_i,t−1 depends positively on α_i. This is simple to see when inspecting the model for period (t−1): y_i,t−1 = γ y_i,t−2 + α_i + u_i,t−1 So there is an endogeneity problem: OLS or GLS will be inconsistent for N → ∞ and T fixed, both for FE and RE. (Easy to see for RE.) The finite sample bias can be substantial for small T. E.g. if γ = 0.5, T = 10 and N → ∞, plim γ̂_FE = 0.33. However, as in the univariate model, if in addition T → ∞, we obtain a consistent estimator. But T is in general small for panel data.

48 / 63 The first difference estimator (FD) Using the first difference estimator (FD), which eliminates the α_i's, Δy_it = γ Δy_i,t−1 + Δu_it, i.e. y_it − y_i,t−1 = γ(y_i,t−1 − y_i,t−2) + (u_it − u_i,t−1), is no help, since y_i,t−1 and u_i,t−1 are correlated, even when T → ∞. We stay with the FD model, as the exogeneity requirements are less restrictive, and use an IV estimator. For that purpose we also look at Δy_i,t−1: y_i,t−1 − y_i,t−2 = γ(y_i,t−2 − y_i,t−3) + (u_i,t−1 − u_i,t−2)

49 / 63 [FD] IV estimation, Anderson-Hsiao Instrumental variable (IV) estimators have been proposed by Anderson and Hsiao, as they are consistent for N → ∞ and finite T. Choice of the instruments for (y_i,t−1 − y_i,t−2): The instrument y_i,t−2 is correlated with (y_i,t−1 − y_i,t−2), but not with u_i,t−1 or u_i,t, and so not with Δu_i,t. The instrument (y_i,t−2 − y_i,t−3) as a proxy for (y_i,t−1 − y_i,t−2) sacrifices one more sample period.

50 / 63 [FD] GMM estimation, Arellano-Bond The Arellano-Bond (also Arellano-Bover) method of moments estimator is consistent. The moment conditions use the property of the instruments y_i,t−j, j ≥ 2, of being uncorrelated with the future errors u_it and u_i,t−1. We obtain an increasing number of moment conditions for t = 3, 4, ..., T: t = 3: E[(u_i,3 − u_i,2) y_i,1] = 0; t = 4: E[(u_i,4 − u_i,3) y_i,2] = 0, E[(u_i,4 − u_i,3) y_i,1] = 0; t = 5: E[(u_i,5 − u_i,4) y_i,3] = 0, ..., E[(u_i,5 − u_i,4) y_i,1] = 0; etc.

51 / 63 [FD] GMM estimation, Arellano-Bond We define the (T 2) 1 vector u i = [(u i,3 u i,2 ),..., (u i,t u i,t 1 )] and a (T 2) (T 2) matrix of instruments y i,1 y i,1... y i,1 Z i 0 y = i,2... y i,2. 0 0... 0... 0 y i,t 2

52 / 63 [FD] GMM estimation, Arellano-Bond, without x's Ignoring exogenous variables, for Δy_it = γ Δy_i,t−1 + Δu_it, E[Z_i' Δu_i] = E[Z_i' (Δy_i − γ Δy_i,−1)] = 0 The number of moment conditions is 1 + 2 + ... + (T−2). This number in general exceeds the number of unknown coefficients, so γ is estimated by minimizing the quadratic expression min_γ [ (1/N) Σ_i (Δy_i − γ Δy_i,−1)' Z_i ] W_N [ (1/N) Σ_i Z_i' (Δy_i − γ Δy_i,−1) ] with a weighting matrix W_N. The optimal matrix W_N yielding an asymptotically efficient estimator is the inverse of the covariance matrix of the sample moments.

53 / 63 [FD] GMM estimation, Arellano-Bond, without x's The W_N matrix can be estimated directly from the data after a first consistent estimation step. Under weak regularity conditions the GMM estimator is asymptotically normal for N → ∞ and fixed T, T > 2, using our instruments. It is also consistent for N → ∞ and T → ∞, though the number of moment conditions → ∞ as T → ∞. It is advisable to limit the number of moment conditions.

54 / 63 [FD] GMM estimation, Arellano-Bond, with x's The dynamic panel data model with exogenous variables is y_it = x_it'β + γ y_i,t−1 + α_i + u_it, u_it ~ iid(0, σ_u^2) As exogenous x's are also included in the model, additional moment conditions can be formulated: For strictly exogenous variables, E[x_is u_it] = 0 for all s, t: E[x_is Δu_it] = 0. For predetermined (not strictly exogenous) variables, E[x_is u_it] = 0 for s ≤ t: E[x_i,t−j Δu_it] = 0 for j = 1, ..., t−1.
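A sketch of Arellano-Bond GMM estimation in R with plm's pgmm, using the EmplUK data mentioned in exercise 5G; the specification mirrors the employment equation from the plm documentation and is not a model prescribed by the lecture.
library(plm)
data("EmplUK", package = "plm")
ab <- pgmm(log(emp) ~ lag(log(emp), 1:2) + lag(log(wage), 0:1)
           + log(capital) + lag(log(output), 0:1) | lag(log(emp), 2:99),
           data = EmplUK, effect = "twoways", model = "twosteps")
summary(ab)   # two-step GMM estimates, Sargan test, AR(1)/AR(2) tests of the differenced errors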

55 / 63 GMM estimation, Arellano-Bond, with x's So there are a lot of possible moment restrictions, both for differences as well as for levels, and thus a variety of GMM estimators. GMM estimation may be combined with both FE and RE. Here also, the RE estimator is identical to the FE estimator for T → ∞.

Further topics 56 / 63

57 / 63 Further topics We augment the FE model with dummy variables indicating both individuals and time periods: y_it = δ_i d_i + δ_t d_t + x_it'β + u_it Common unit roots can be tested and modeled: y_it = α_i + γ_i y_i,t−1 + u_it which is transformed to Δy_it = α_i + π_i y_i,t−1 + u_it and tested wrt H_0: π_i = 0 for all i.
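A minimal sketch of a panel unit root test in R, assuming plm's purtest; here the Im-Pesaran-Shin variant applied to the Grunfeld investment series, purely for illustration.
library(plm)
data("Grunfeld", package = "plm")
purtest(inv ~ 1, data = Grunfeld, index = c("firm", "year"), pmax = 4, test = "ips")
# H0: all individual series have a unit root; rejection indicates at least some series are stationary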

58 / 63 Panel data models: further topics Cointegration tests. Limited dependent variable models, like binary choice, logit, probit models, etc.

Exercises and References 59 / 63

60 / 63 Exercises Choose one out of (1, 2), and 3 out of (3G, 4G, 5G, 6G). 7G is compulsory. 1 The fixed effects model with group and time effects is y_it = α_i d_i + δ_t d_t + x_it'β + u_it where d_i are dummies for individuals and d_t dummies for time periods. Specify the model correctly when an overall intercept is included. 2 Show that Σ_i Σ_t (y_it − ȳ)^2 = Σ_i Σ_t (y_it − ȳ_i)^2 + T Σ_i (ȳ_i − ȳ)^2 holds.

61 / 63 Exercises 3G Choose a subset of firms for the Grunfeld data and compare the pooling and the fixed effects model. Use Ex5_panel_data_R.txt. 4G Choose a subset of firms for the Grunfeld data and compare the fixed effects and the random effects model. Use Ex5_panel_data_R.txt. 5G Choose a subset of firms for the Grunfeld/EmplUK data and estimate a suitable dynamic panel data model. Use Ex5_panel_data_R.txt. 6G Compute the Durbin-Watson statistic for panel data in R. Test for serial correlation in the fixed effects model of a data set of your choice.

62 / 63 Exercises 7G (Compulsory) Prepare a short summary of Illustration 10.5: Explaining capital structure, in Verbeek, p. 383-388, covering the following issues: the empirical problem/question posed; the model(s) under consideration / the relations between the variables suggested by economic theory; the formal model; the data (source, coverage, proxies for the variables in the theoretical model); the estimation method(s), empirical specification of the model, comparison of models; results/conclusions. Delivery of an electronic version of 7G not later than 1 week after the final test.

63 / 63 References Verbeek, M.: A Guide to Modern Econometrics, Chapter 10.