Econometric Methods for Panel Data




Based on the books by Baltagi: Econometric Analysis of Panel Data and by Hsiao: Analysis of Panel Data Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna May 4, 2010

Outline Introduction Fixed effects The LSDV estimator The algebra of the LSDV estimator Properties of the LSDV estimator Pooled regression in the FE model Random effects The GLS estimator for the RE model feasible GLS in the RE model Properties of the RE estimator Two-way panels The two-way fixed-effects model The two-way random-effects model

The definition of panel data The word panel has a Dutch origin and essentially means a board. Data for a variable on a board are two-dimensional: the variable X has two subscripts. One dimension is an individual index (i), and the other dimension is time (t):

X_11 ... X_1T
 ...  X_it ...
X_N1 ... X_NT

Long and broad boards A panel with many time points per individual is a long board; one with many individuals per time point is a broad board. If T >> N, the panel is a time-series panel, as it is often encountered in macroeconomics. If N >> T, it is a cross-section panel, as is common in microeconomics.

Not quite panels Longitudinal data sets look like panels, but the time index may not be common across individuals: for example, growing plants may be measured according to individual time. In pseudo-panels, individuals may change between time points: X_it and X_{i,t+1} may relate to different persons. In unbalanced panels, T differs among individuals and is replaced by T_i: no more matrix or board shape.

The two dimensions have different quality In the time dimension t, the panel behaves like a time series: natural ordering, systematic dependence over time, asymptotics depending on stationarity, ergodicity, etc. In the cross-section dimension i, there is no natural ordering; cross-section dependence may play a role ("second generation"), otherwise asymptotics may also be simple, assuming independence ("first generation"). Sometimes, e.g. in spatial panels, i may have structure and a natural ordering.

Advantages of panel data analysis Panels are more informative than simple time series of aggregates, as they allow tracking individual histories: a 10% unemployment rate is less informative than a panel that distinguishes individuals who are each unemployed 10% of the time from one in which 10% are always unemployed. Panels are more informative than cross-sections, as they reflect dynamics and Granger causality across variables.

The heterogeneity problem N = 3, T = 20. For each i, there is an identical, positive slope in a linear relationship between Y and X. For the whole sample, the relationship is slightly falling and nonlinear. If interest focuses on the former model, naive estimation over the whole sample results in a heterogeneity bias.

Books on panels Baltagi, B. Econometric Analysis of Panel Data, Wiley. (textbook) Hsiao, C. Analysis of Panel Data, Cambridge University Press. (textbook) Arellano, M. Panel Data Econometrics, Oxford University Press. (very formal state of the art) Diggle, P., Heagerty, P., Liang, K.Y., and S. Zeger Analysis of Longitudinal Data, Oxford University Press. (non-economic monograph)

The LSDV estimator The pooled regression model Consider the model y_it = α + β′X_it + u_it, i = 1,...,N, t = 1,...,T. Assume there are K regressors (covariates), such that dim(β) = K. Panel models mainly differ in their assumptions on u. With u independent across i and t, Eu = 0, and var u = σ², this defines the (usually unrealistic) pooled regression model. It is efficiently estimated by least squares (OLS). Sometimes, one may consider digressing from the homogeneity assumption β_i ≡ β; this entails that most advantages of panel modelling are lost.

The LSDV estimator The fixed-effects regression The model y_it = α + β′X_it + u_it, u_it = µ_i + ν_it, i = 1,...,N, t = 1,...,T, with the identifying condition Σ_{i=1}^N µ_i = 0 and the individual effects µ_i assumed to be unobserved constants (parameters), is the fixed-effects (FE) regression model. (ν_it) fulfills the usual conditions on errors: independent, Eν = 0, var ν = σ²_ν.

The LSDV estimator Incidental parameters Most authors call a parameter incidental when its dimension increases with the sample size. The nuisance due to such a parameter is worse than for a typical nuisance parameter, whose dimension may be constant. As N → ∞, the number of fixed effects µ_i increases; thus, the fixed effects are incidental parameters. Incidental parameters cannot be estimated consistently, and they may cause inconsistency in the ML estimation of the remaining parameters.

The LSDV estimator OLS in the FE regression model In the FE model, the regression equation y_it = α + β′X_it + u_it, i = 1,...,N, t = 1,...,T, does not fulfill the conditions of the Gauss-Markov theorem, as Eu_it = µ_i ≠ 0. OLS may be biased and inconsistent, and even if it is unbiased, it is usually inefficient.

The LSDV estimator Least-squares dummy variables Let Z^(j)_µ,it denote a dummy variable that is 0 for all observations it with i ≠ j and 1 for i = j. Then, writing Z_µ,it = (Z^(1)_µ,it,...,Z^(N)_µ,it)′ and µ = (µ_1,...,µ_N)′, the regression model y_it = α + β′X_it + µ′Z_µ,it + ν_it, i = 1,...,N, t = 1,...,T, fulfills all conditions of the Gauss-Markov theorem. OLS for this regression is called LSDV (least-squares dummy variables), the within, or the FE estimator. Assuming X is non-stochastic, LSDV is unbiased, consistent, and linear efficient (BLUE).
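As an illustration (not part of the original slides), the equivalence of the dummy-variable regression and the within transformation can be checked numerically. The simulated design below, including N, T, K and all parameter values, is purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 4, 10, 2

# Simulate a fixed-effects panel: y_it = beta'X_it + mu_i + nu_it
mu = np.array([1.0, -0.5, 0.3, -0.8])      # hypothetical individual effects
beta = np.array([2.0, -1.0])
X = rng.normal(size=(N * T, K))
y = X @ beta + np.repeat(mu, T) + 0.1 * rng.normal(size=N * T)

# LSDV: regress y on X and a full set of N individual dummies
# (no common intercept, which avoids the dummy-variable trap)
Z_mu = np.kron(np.eye(N), np.ones((T, 1)))  # NT x N dummy matrix, i slow / t fast
W = np.hstack([X, Z_mu])
coef, *_ = np.linalg.lstsq(W, y, rcond=None)
beta_lsdv = coef[:K]

# Equivalent "within" route: demean every series by its individual time average
y_w = y - np.repeat(y.reshape(N, T).mean(axis=1), T)
X_w = X - np.repeat(X.reshape(N, T, K).mean(axis=1), T, axis=0)
beta_within, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)

assert np.allclose(beta_lsdv, beta_within)  # the two routes coincide
```

The within route is cheaper because it never forms the NT x N dummy matrix; both yield the same β̂.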

The LSDV estimator The dummy-variable trap in LSDV Note that Σ_{j=1}^N Z^(j)_µ,it = 1, so inhomogeneous LSDV regression would be multicollinear. Two (equivalent) solutions: 1. restricted OLS imposing Σ_{i=1}^N µ_i = 0; 2. homogeneous regression with free coefficients µ*_i. Then α̂ is recovered as N⁻¹ Σ_{i=1}^N µ̂*_i, and µ̂_i = µ̂*_i − α̂. In either case, the parameter dimension is K + N.

The LSDV estimator Within and between By conditioning on individual ("group") dummies, the within or within-groups estimator concentrates exclusively on variation within the individuals. By contrast, the between estimator results from a regression among the N individual time averages: ȳ_i· = α + β′X̄_i· + ū_i·, i = 1,...,N, with ȳ_i· = T⁻¹ Σ_{t=1}^T y_it etc. Because Eū_i· = µ_i ≠ 0, it violates the Gauss-Markov conditions and is mainly of theoretical interest.

The algebra of the LSDV estimator LSDV regression in matrix form Stacking all NT observations yields the compact form y = α ι_NT + Xβ + Z_µµ + ν, where ι_NT, y, and ν are NT-vectors. Generally, ι_m stands for an m-vector of ones. The convention is that i is the slow index and t the fast index, such that the first T observations belong to i = 1. X is an NT × K matrix, β is a K-vector, µ is an N-vector, and Z_µ is an NT × N matrix.

The algebra of the LSDV estimator The matrix Z_µ The NT × N matrix of dummy regressors looks like

Z_µ =
[ ι_T  0   ...  0  ]
[ 0   ι_T  ...  0  ]
[ ...              ]
[ 0    0   ... ι_T ],

a block-diagonal matrix with columns of ones on the diagonal. In Kronecker notation, Z_µ = I_N ⊗ ι_T, where I_N is the N × N identity matrix.

The algebra of the LSDV estimator Review: Kronecker products The Kronecker product A ⊗ B of two matrices A = [a_ij] and B = [b_kl] of dimensions n × m and n₁ × m₁ is defined by

A ⊗ B =
[ a_11 B  a_12 B  ...  a_1m B ]
[ a_21 B  a_22 B  ...  a_2m B ]
[ ...                         ]
[ a_n1 B  a_n2 B  ...  a_nm B ],

which gives a large (n n₁) × (m m₁) matrix. The left factor determines the crude form and the right factor the fine form. (Note: some authors use different definitions, e.g. David Hendry.) I ⊗ B is a block-diagonal matrix.

The algebra of the LSDV estimator Three calculation rules for Kronecker products (A ⊗ B)′ = A′ ⊗ B′; (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹ for non-singular matrices; (A ⊗ B)(C ⊗ D) = AC ⊗ BD for fitting dimensions.
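The three rules are easy to confirm numerically with numpy's kron; the random matrix sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(2, 2))
C, D = rng.normal(size=(3, 3)), rng.normal(size=(2, 2))

# (A ⊗ B)' = A' ⊗ B'
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}   (A, B non-singular)
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
# (A ⊗ B)(C ⊗ D) = AC ⊗ BD        (conformable dimensions)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```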

The algebra of the LSDV estimator The matrix Z_µ is very big Direct regression of y on the full NT × (N + K) regressor matrix is feasible (unless N is too large) but inconvenient: it does not exploit the simple structure of the matrix Z_µ, and it involves inversion of an (N + K) × (N + K) matrix, an obstacle especially for cross-section panels. Therefore, the regression is run in two steps.

The algebra of the LSDV estimator The Frisch-Waugh theorem Theorem. The OLS estimator in the partitioned regression y = Xβ + Zγ + u can be obtained by first regressing y and all X variables on Z, which yields residuals ỹ_Z and X̃_Z, and then regressing ỹ_Z on X̃_Z in ỹ_Z = X̃_Z β + v. The resulting estimate β̂ is identical to the one obtained from the original regression.

The algebra of the LSDV estimator Remarks on Frisch-Waugh Note that the outlined sequence does not yield an OLS estimate of γ; if that is needed, one may run the same sequence with X and Z exchanged. Frisch-Waugh emphasizes that OLS coefficients in any multiple regression are really effects conditional on all remaining regressors. Frisch-Waugh also permits running a multiple regression on two regressors if only simple regression is available. The proof is by direct evaluation of the regressions, using inversion of partitioned matrices.
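The theorem can also be verified by direct computation; the following sketch uses simulated data with hypothetical dimensions and coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 2))
Z = rng.normal(size=(n, 3))
y = X @ np.array([1.0, -2.0]) + Z @ np.array([0.5, 0.5, 0.5]) + rng.normal(size=n)

def ols(A, b):
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Full multiple regression of y on [X, Z]; keep the X coefficients
beta_full = ols(np.hstack([X, Z]), y)[:2]

# Frisch-Waugh: residualize y and X on Z, then regress residuals on residuals
M = np.eye(n) - Z @ np.linalg.inv(Z.T @ Z) @ Z.T   # annihilator matrix of Z
beta_fw = ols(M @ X, M @ y)

assert np.allclose(beta_full, beta_fw)
```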

The algebra of the LSDV estimator Interpreting regression on Z_µ If any variable y is regressed on Z_µ, the residuals are

y − Z_µ(Z_µ′Z_µ)⁻¹Z_µ′y
= y − (I_N ⊗ ι_T){(I_N ⊗ ι_T)′(I_N ⊗ ι_T)}⁻¹(I_N ⊗ ι_T)′y
= y − T⁻¹(I_N ⊗ ι_T ι_T′)y
= y − (ȳ_1·,...,ȳ_1·, ȳ_2·,...,ȳ_N·)′,

such that all observations are adjusted for their individual time averages. This is done for all variables, y and the covariates X.

The algebra of the LSDV estimator Applying Frisch-Waugh to LSDV 1. In the first step, y and all regressors in X are regressed on Z_µ and the residuals are formed, i.e. the observations are corrected for their individual time averages; 2. in the second step, the purged y observations are regressed on the purged covariates, which yields β̂. The procedure fails for time-constant variables. An OLS estimate µ̂ follows from regressing the residuals y − Xβ̂ on the dummies.

The algebra of the LSDV estimator The purging matrix Q The matrix that defines the purging (or sweep-out) operation in the first step is Q = I_NT − T⁻¹(I_N ⊗ J_T), with J_T = ι_T ι_T′ a T × T matrix filled with ones: ỹ = y − (ȳ_1·,...,ȳ_N·)′ ⊗ ι_T = Qy. Using this matrix allows writing the FE estimator compactly: β̂_FE = (X′Q′QX)⁻¹X′Q′Qy = (X′QX)⁻¹X′Qy, as Q′ = Q and Q² = Q.
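A small numerical check (dimensions chosen arbitrarily) that Q is symmetric and idempotent and that Qy indeed subtracts each individual's time average:

```python
import numpy as np

N, T = 3, 4
Q = np.eye(N * T) - np.kron(np.eye(N), np.ones((T, T))) / T  # Q = I_NT - T^{-1}(I_N ⊗ J_T)

# Q is symmetric and idempotent
assert np.allclose(Q, Q.T)
assert np.allclose(Q @ Q, Q)

# Qy subtracts each individual's time average (i slow, t fast)
rng = np.random.default_rng(3)
y = rng.normal(size=N * T)
ybar = y.reshape(N, T).mean(axis=1)
assert np.allclose(Q @ y, y - np.repeat(ybar, T))
```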

Properties of the LSDV estimator The variance matrix of the LSDV estimator The LSDV estimator looks like a GLS estimator, and the algebra for its variance follows the GLS variance: var β̂_FE = σ²_ν (X′QX)⁻¹. Note: this differs from the usual GLS algebra, as Q is singular. There is no GLS model that motivates FE, though the result is analogous.

Properties of the LSDV estimator N and T asymptotics of the LSDV estimator If Eβ̂_FE = β and var β̂_FE → 0, β̂_FE will be consistent. 1. If T → ∞, (X′QX)⁻¹ converges to 0, assuming that all covariates show constant or increasing variation around their individual means, which is plausible (excludes time-constant covariates). 2. If N → ∞, (X′QX)⁻¹ converges to 0, assuming that covariates vary across individuals (excludes covariates indicative of time points). T → ∞ allows retrieving consistent estimates of all effects µ_i; for N → ∞, this is not possible.

Properties of the LSDV estimator t statistics on coefficients 1. Plugging a sensible estimator σ̂²_ν for σ²_ν into Σ_FE = σ²_ν(X′QX)⁻¹ yields Σ̂_FE = [σ̂_FE,jk], j,k = 1,...,K, and allows constructing t values for the vector β: t_β = diag(σ̂_FE,11^{−1/2},...,σ̂_FE,KK^{−1/2}) β̂_FE. A suggestion for σ̂²_ν is (NT − N − K)⁻¹ Σ_{i=1}^N Σ_{t=1}^T ν̂²_it. 2. t statistics on µ_i and α are obtained by evaluating the full model in an analogous way. These t values are, under their null, asymptotically N(0,1); under Gauss-Markov assumptions, they are t-distributed in small samples. Statistics for the effects need T → ∞.

Pooled regression in the FE model The bias of pooled regression in the FE model The pooled OLS estimate in the FE model is β̂_# = (X_#′X_#)⁻¹X_#′y, where X_# denotes the X matrix extended by the intercept column. As in the usual derivation of omitted-variable bias, E(β̂_#) = β_# + (X_#′X_#)⁻¹X_#′Z_µµ, so the bias depends on the correlation of X and Z_µ, on T, and on µ. For T → ∞, the bias should disappear.

Idea of the random-effects model If N is large, one may view the effects as unobserved random variables rather than incidental parameters: random effects (RE). The model becomes more parsimonious, as it has fewer parameters. It should be plausible that all effects are drawn from the same probability distribution; strong heterogeneity across individuals invalidates the RE model. The concept is unattractive for small N and for large T. It also presumes that covariates and effects are approximately independent, an assumption that may often be implausible a priori.

The GLS estimator for the RE model The random-effects model The basic one-way RE model (or variance-components model) reads: y_it = α + β′X_it + u_it, u_it = µ_i + ν_it, µ_i i.i.d. (0, σ²_µ), ν_it i.i.d. (0, σ²_ν), for i = 1,...,N and t = 1,...,T. µ and ν are mutually independent.

The GLS estimator for the RE model GLS estimation for the RE model The RE model corresponds to a usual GLS regression model y = α ι_NT + Xβ + u, var u = σ²_µ(I_N ⊗ J_T) + σ²_ν I_NT. Denoting the error variance matrix by Euu′ = Ω, the GLS estimator is β̂_RE = {X_#′Ω⁻¹X_#}⁻¹X_#′Ω⁻¹y. This estimator is BLUE for the RE model.

The GLS estimator for the RE model The matrix Ω is big GLS estimation involves the inversion of the NT × NT matrix Ω, which may be inconvenient. It is easily inverted, however, via a spectral decomposition into orthogonal parts. Let J̄_T = T⁻¹J_T, a T × T matrix with all elements T⁻¹. Then

Ω = Tσ²_µ(I_N ⊗ J̄_T) + σ²_ν I_NT
= (Tσ²_µ + σ²_ν)(I_N ⊗ J̄_T) + σ²_ν(I_NT − I_N ⊗ J̄_T)
= (Tσ²_µ + σ²_ν)P + σ²_ν Q,

where P and Q have the properties PQ = QP = 0, P² = P, Q² = Q, P + Q = I.

The GLS estimator for the RE model Inversion of Ω Because of the special properties of the spectral decomposition, Ω can be inverted componentwise: Ω⁻¹ = (Tσ²_µ + σ²_ν)⁻¹P + σ_ν⁻²Q. Thus, the GLS estimator has the simpler closed form

β̂_#,RE = [X_#′{(Tσ²_µ + σ²_ν)⁻¹P + σ_ν⁻²Q}X_#]⁻¹ X_#′{(Tσ²_µ + σ²_ν)⁻¹P + σ_ν⁻²Q}y,

which only requires inversion of a (K+1) × (K+1) matrix.
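The decomposition and the componentwise inverse can be verified numerically; the dimensions and variance values below are hypothetical:

```python
import numpy as np

N, T = 3, 5
s2_mu, s2_nu = 0.7, 1.3            # hypothetical variance components
Jbar_T = np.ones((T, T)) / T
P = np.kron(np.eye(N), Jbar_T)     # projection on individual means
Q = np.eye(N * T) - P              # within (demeaning) projection

Omega = s2_mu * np.kron(np.eye(N), np.ones((T, T))) + s2_nu * np.eye(N * T)

# Omega = (T σ²_µ + σ²_ν) P + σ²_ν Q, and its componentwise inverse
assert np.allclose(Omega, (T * s2_mu + s2_nu) * P + s2_nu * Q)
Omega_inv = P / (T * s2_mu + s2_nu) + Q / s2_nu
assert np.allclose(Omega @ Omega_inv, np.eye(N * T))
```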

The GLS estimator for the RE model Splitting Ω Any GLS estimator (X′Ω⁻¹X)⁻¹X′Ω⁻¹y can also be represented in the form {(MX)′MX}⁻¹(MX)′My, i.e. as OLS on data transformed by a square root M of Ω⁻¹. Because of P² = P etc., this form is very simple here: β̂_#,RE = {(P̃X_#)′(P̃X_#)}⁻¹(P̃X_#)′P̃y, with P̃ = (Tσ²_µ + σ²_ν)^{−1/2} P + σ_ν⁻¹ Q.

The GLS estimator for the RE model The weight parameter θ The GLS estimator is invariant to any re-scaling of the transformation matrix (the scale cancels out). For example, β̂_#,RE = {(P̃₁X_#)′(P̃₁X_#)}⁻¹(P̃₁X_#)′P̃₁y with

P̃₁ = σ_ν P̃ = σ_ν/√(Tσ²_µ + σ²_ν) P + Q = (1 − θ)P + Q

and θ = 1 − σ_ν/√(Tσ²_µ + σ²_ν).
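In practice the transformation (1 − θ)P + Q amounts to quasi-demeaning: each variable y_it is replaced by y_it − θȳ_i·. The sketch below checks that OLS on quasi-demeaned data reproduces full GLS; the simulated design and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, K = 5, 6, 2
s2_mu, s2_nu = 0.8, 1.0                         # hypothetical variance components
theta = 1 - np.sqrt(s2_nu) / np.sqrt(T * s2_mu + s2_nu)

X = rng.normal(size=(N * T, K))
y = X @ np.array([1.5, -0.5]) \
    + np.repeat(rng.normal(scale=np.sqrt(s2_mu), size=N), T) \
    + rng.normal(scale=np.sqrt(s2_nu), size=N * T)
X1 = np.hstack([np.ones((N * T, 1)), X])        # add the intercept column

# Full GLS with the NT x NT matrix Omega
Omega = s2_mu * np.kron(np.eye(N), np.ones((T, T))) + s2_nu * np.eye(N * T)
Oi = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X1.T @ Oi @ X1, X1.T @ Oi @ y)

# Quasi-demeaning: subtract theta times the individual time average
def qd(v):
    v = v.reshape(N, T, -1)
    return (v - theta * v.mean(axis=1, keepdims=True)).reshape(N * T, -1)

beta_qd = np.linalg.lstsq(qd(X1), qd(y)[:, 0], rcond=None)[0]
assert np.allclose(beta_gls, beta_qd)
```

The equivalence holds exactly because ((1−θ)P + Q)′((1−θ)P + Q) = σ²_ν Ω⁻¹, and GLS is invariant to that scaling.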

The GLS estimator for the RE model The value of θ determines the RE estimate Note the following cases: 1. θ = 0: the transformation is I, and RE-GLS becomes OLS. This happens if σ²_µ = 0 ("no effects"); 2. θ = 1: the transformation is Q, and RE-GLS becomes FE-LSDV. This happens if T is very large, or σ²_ν = 0, or σ²_µ is large; 3. the transformation can never become just P, which would yield the between estimator.

feasible GLS in the RE model θ must be estimated In practice, RE-GLS requires the unknown weight parameter θ = 1 − σ_ν/√(Tσ²_µ + σ²_ν) to be estimated. Most algorithms use a consistent (inefficient, non-GLS) estimator in a first step, in order to estimate the variances from residuals. Some algorithms iterate this idea; some use joint estimation of all parameters by nonlinear ML-type optimization.

feasible GLS in the RE model OLS and LSDV are consistent in the RE model The RE model is a traditional GLS model. OLS is consistent and (for fixed regressors) unbiased, though not efficient. LSDV is also consistent and unbiased for β; it may be more efficient, as typically θ is closer to one, where LSDV and RE coincide. Thus, LSDV and OLS are appropriate first-step routines for estimating θ.

feasible GLS in the RE model Estimating numerator and denominator A customary method is to estimate σ²_1 = Tσ²_µ + σ²_ν by

σ̂²_1 = (T/N) Σ_{i=1}^N (T⁻¹ Σ_{t=1}^T û_it)²

and σ²_ν by

σ̂²_ν = Σ_{i=1}^N Σ_{t=1}^T (û_it − T⁻¹ Σ_{s=1}^T û_is)² / {N(T−1)}.

û may be OLS or LSDV residuals. A drawback is that the implied estimate σ̂²_µ = T⁻¹(σ̂²_1 − σ̂²_ν) can be negative and inadmissible.
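The two estimators can be sketched as follows, applied here for simplicity to simulated errors rather than actual first-step residuals; all sizes and variance values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 400, 5
s2_mu, s2_nu = 0.6, 1.0                     # hypothetical true variance components

# Stand-in for first-step residuals: u_it = mu_i + nu_it
u = np.repeat(rng.normal(scale=np.sqrt(s2_mu), size=N), T) \
    + rng.normal(scale=np.sqrt(s2_nu), size=N * T)
U = u.reshape(N, T)
ubar = U.mean(axis=1)                       # individual time averages

s1_hat = T * np.mean(ubar ** 2)             # estimates σ²_1 = Tσ²_µ + σ²_ν
s_nu_hat = np.sum((U - ubar[:, None]) ** 2) / (N * (T - 1))
s_mu_hat = (s1_hat - s_nu_hat) / T          # implied σ̂²_µ, can be negative!
```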

feasible GLS in the RE model Estimating σ²_µ and σ²_ν The Nerlove method estimates σ²_µ directly from the effect estimates in a first-step LSDV regression and forms θ̂ accordingly. The appearance of σ²_1, however, is no coincidence: that term arises in the evaluation of the likelihood and is implied by an eigenvalue decomposition of the matrix Ω. Usually, discrepancies across θ estimation methods are minor.

Properties of the RE estimator RE-GLS is linear efficient for the correct model By construction, the RE estimator is linear efficient (BLUE) for the RE model. It has the GLS variance var(β̂_#,RE) = {X_#′Ω⁻¹X_#}⁻¹. For the feasible GLS estimator, this variance is attained asymptotically, for N → ∞ and T → ∞. For T → ∞, the RE estimator becomes the FE estimator; thus, FE and RE have similar properties for time-series panels.

Two-way panels: the idea In a two-way panel, there are individual-specific unobserved constants (individual effects) as well as time-specific constants (time effects): y_it = α + β′X_it + u_it, i = 1,...,N, t = 1,...,T, u_it = µ_i + λ_t + ν_it. The model is important in economic applications, but it is less ubiquitous than the one-way model; some software routines have not implemented the two-way RE model, for example.

The two-way fixed-effects model A compact form for the two-way FE regression Assume all effects are constants (incidental parameters). Again, the model can be written compactly as y = α ι_NT + Xβ + Z_µµ + Z_λλ + ν, with the NT × T matrix Z_λ of time dummies and λ = (λ_1,...,λ_T)′. The model assumes the identifying restrictions Σ_{i=1}^N µ_i = Σ_{t=1}^T λ_t = 0. This may be achieved by using just N−1 individual dummies, T−1 time dummies, and an inhomogeneous intercept. Note that Z_λ = ι_N ⊗ I_T.

The two-way fixed-effects model Two-way LSDV estimation By construction, OLS regression on X and all time and individual dummies yields the BLUE for the model. Direct regression requires the inversion of a (K+N+T−1) × (K+N+T−1) matrix, which is inconveniently large. Thus, application of the Frisch-Waugh theorem may be advisable.

The two-way fixed-effects model The two-way purging matrix Simple algebra shows that forming the residuals of the first-step regression on all dummies is equivalent to applying the sweep-out (or purging) matrix

Q₂ = I_N ⊗ I_T − I_N ⊗ J̄_T − J̄_N ⊗ I_T + J̄_N ⊗ J̄_T.

The third matrix consists of N² blocks of diagonal matrices with elements N⁻¹ on their diagonals; the fourth matrix is filled with the constant value (NT)⁻¹. The second matrix subtracts individual time averages and the third subtracts time-point averages across individuals, which subtracts means twice, so the last matrix adds back a total average.
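A numerical check of Q₂ (dimensions arbitrary): it is symmetric and idempotent, annihilates both sets of dummies and the intercept, and performs exactly the double demeaning y_it − ȳ_i· − ȳ_·t + ȳ_··:

```python
import numpy as np

N, T = 3, 4
I, J = np.eye, np.ones
Q2 = np.kron(I(N), I(T)) - np.kron(I(N), J((T, T)) / T) \
     - np.kron(J((N, N)) / N, I(T)) + J((N * T, N * T)) / (N * T)

# Symmetric and idempotent
assert np.allclose(Q2, Q2.T)
assert np.allclose(Q2 @ Q2, Q2)

# Annihilates individual dummies and time dummies (and hence the intercept)
Z_mu = np.kron(I(N), np.ones((T, 1)))
Z_lam = np.kron(np.ones((N, 1)), I(T))
assert np.allclose(Q2 @ Z_mu, 0) and np.allclose(Q2 @ Z_lam, 0)

# Q2 y = y_it - ybar_i. - ybar_.t + ybar_..
rng = np.random.default_rng(6)
Y = rng.normal(size=(N, T))                 # i slow, t fast when flattened
W = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + Y.mean()
assert np.allclose(Q2 @ Y.ravel(), W.ravel())
```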

The two-way fixed-effects model The GLS-type form of the two-way FE estimator Again, the FE estimator has the form β̂_FE,2way = (X′Q₂X)⁻¹X′Q₂y, with the just-defined two-way Q₂. Its variance is var β̂ = σ²_ν(X′Q₂X)⁻¹, which can be estimated by plugging in a variance estimate σ̂²_ν based on the LSDV residuals.

The two-way fixed-effects model Some properties of the two-way LSDV estimator The estimator β̂_FE,2way is BLUE for non-stochastic or exogenous X; β̂_FE,2way is consistent for T → ∞ and for N → ∞; µ̂ is consistent for T → ∞; λ̂ is consistent for N → ∞.

The two-way random-effects model The two-way random-effects model The basic two-way RE model (or variance-components model) reads: y_it = α + β′X_it + u_it, u_it = µ_i + λ_t + ν_it, µ_i i.i.d. (0, σ²_µ), λ_t i.i.d. (0, σ²_λ), ν_it i.i.d. (0, σ²_ν), for i = 1,...,N and t = 1,...,T. µ, λ, and ν are mutually independent.

The two-way random-effects model Estimation in the two-way random-effects model This is a GLS regression model, with error variance matrix Ω = E(uu′) = σ²_µ(I_N ⊗ J_T) + σ²_λ(J_N ⊗ I_T) + σ²_ν I_NT. Thus, β is estimated consistently though inefficiently by OLS, and (linear) efficiently by GLS.

The two-way random-effects model The GLS estimator for the two-way model Calculation of the estimator β̂_RE,2way = {X_#′Ω⁻¹X_#}⁻¹X_#′Ω⁻¹y requires the inversion of an NT × NT matrix, which is inconvenient. Unfortunately, the spectral matrix decomposition is much more involved than for the one-way model.

The two-way random-effects model The decomposition of the two-way Ω Some matrix algebra yields

Ω⁻¹ = a₁(I_N ⊗ J_T) + a₂(J_N ⊗ I_T) + a₃ I_NT + a₄ J_NT

with the coefficients

a₁ = −σ²_µ / {σ²_ν(σ²_ν + Tσ²_µ)},
a₂ = −σ²_λ / {σ²_ν(σ²_ν + Nσ²_λ)},
a₃ = 1/σ²_ν,
a₄ = σ²_µσ²_λ(2σ²_ν + Tσ²_µ + Nσ²_λ) / {σ²_ν(σ²_ν + Tσ²_µ)(σ²_ν + Nσ²_λ)(σ²_ν + Tσ²_µ + Nσ²_λ)}.
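The coefficients can be checked numerically by confirming that the combination above indeed inverts Ω; the dimensions and variance values below are hypothetical:

```python
import numpy as np

N, T = 3, 4
s2_mu, s2_lam, s2_nu = 0.5, 0.3, 1.0        # hypothetical variance components
I, J = np.eye, np.ones

Omega = s2_mu * np.kron(I(N), J((T, T))) + s2_lam * np.kron(J((N, N)), I(T)) \
        + s2_nu * I(N * T)

a1 = -s2_mu / (s2_nu * (s2_nu + T * s2_mu))
a2 = -s2_lam / (s2_nu * (s2_nu + N * s2_lam))
a3 = 1 / s2_nu
a4 = s2_mu * s2_lam * (2 * s2_nu + T * s2_mu + N * s2_lam) / (
    s2_nu * (s2_nu + T * s2_mu) * (s2_nu + N * s2_lam)
    * (s2_nu + T * s2_mu + N * s2_lam))
Omega_inv = a1 * np.kron(I(N), J((T, T))) + a2 * np.kron(J((N, N)), I(T)) \
            + a3 * I(N * T) + a4 * J((N * T, N * T))

assert np.allclose(Omega @ Omega_inv, I(N * T))
```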

The two-way random-effects model Properties of the two-way GLS estimator The GLS estimator is linear efficient with variance matrix {X_#′Ω⁻¹X_#}⁻¹. For T → ∞, N → ∞, N/T → c ≠ 0, one can show that σ²_ν Ω⁻¹ behaves like Q₂, and FE and RE become equivalent.

The two-way random-effects model Implementation of feasible GLS in the two-way RE model The square roots of the weights can be used to transform all variables y and X in a first step; inhomogeneous OLS regression of transformed y on transformed X then yields the GLS estimate. The variance parameters σ²_ν, σ²_µ, σ²_λ are unknown, and so are the a_j terms and the weights for the transformation. If the variance parameters are estimated from a first-stage LSDV, the resulting estimator can be shown to be asymptotically efficient. First-stage OLS yields similar results but entails some slight asymptotic inefficiency.