Panel Data Analysis Josef Brüderl, University of Mannheim, March 2005


This is an introduction to panel data analysis on an applied level using Stata. The focus will be on showing the "mechanics" of these methods, which you will find in no textbook. Panel analysis textbooks are generally full of formulas; for those who can see the "mechanics" in the formulas this is fine, but most students need a more basic explanation. That many students do not fully understand the "mechanics" of panel data analysis is something I (like Halaby 2004) infer from many published studies: researchers use panel data methods (i.e. RE-models) that are not able to fully exploit the main advantage of panel data, getting rid of unobserved heterogeneity.

For those who are interested in the statistical details I recommend:
Wooldridge, J. (2003) Introductory Econometrics: A Modern Approach. Thomson. Chap. 13, 14. (easy introduction)
Wooldridge, J. (2002) Econometric Analysis of Cross Section and Panel Data. MIT Press. (more advanced, but very thorough)
Nice introductions to the main issues are:
Allison, P.D. (1994) Using Panel Data to Estimate the Effects of Events. Sociological Methods & Research 23.
Halaby, C. (2004) Panel Models in Sociological Research. Annual Rev. of Sociology 30.
My approach is the "modern" econometric approach. The "classic" approach ("dynamic" panel models) is described in:
Finkel, S. (1995) Causal Analysis with Panel Data. Sage.
A nice introduction to the increasingly popular "growth curve model" is:
Singer, J., and J. Willett (2003) Applied Longitudinal Data Analysis. Oxford.

Panel Data

Panel data are repeated measures of one or more variables on one or more persons (repeated cross-sectional time-series). Mostly they come from panel surveys; however, you can also get them from cross-sectional surveys with retrospective questions. Panel data record "snapshots" from the life course (continuous or discrete dependent variable). In this respect they are less informative than event-history data.
Data structure ("long" format, T = 2):

i   t   y      x
1   1   y_11   x_11
1   2   y_12   x_12
2   1   y_21   x_21
2   2   y_22   x_22
...
N   1   y_N1   x_N1
N   2   y_N2   x_N2

Benefits of panel data: They are more informative (more variability, less collinearity, more degrees of freedom), so estimates are more efficient. They allow one to study individual dynamics (e.g. separating age and cohort effects). They give information on the time-ordering of events. They allow one to control for individual unobserved heterogeneity. Since unobserved heterogeneity is the problem of non-experimental research, the latter benefit is especially useful.

Panel Data and Causal Inference

According to the counterfactual approach to causality (Rubin's model), the causal effect of a treatment (T) for individual i at time t0 (C denotes the control condition) is defined as

  δ = Y^T_(i,t0) − Y^C_(i,t0).

This obviously is not estimable. With cross-sectional data we estimate (between estimation)

  δ = Y^T_(i,t0) − Y^C_(j,t0).

This provides the true causal effect only if the assumption of unit homogeneity (no unobserved heterogeneity) holds. With panel data we can improve on this by using (within estimation)

  δ = Y^T_(i,t1) − Y^C_(i,t0).

Unit homogeneity is needed here only in an intrapersonal sense! However, period effects might endanger causal inference. To avoid this, we could use the difference-in-differences estimator

  δ = (Y^T_(i,t1) − Y^C_(i,t0)) − (Y^C_(j,t1) − Y^C_(j,t0)).
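These three estimators can be illustrated with a tiny numerical sketch in Python (the numbers below are hypothetical, not from the text): one treated unit i, one control unit j, a common time trend, and a true causal effect of 2.

```python
# Hypothetical illustration of between, within, and DID estimation.
# Treated unit i: baseline 5, common time trend +1, causal effect +2 at t1.
# Control unit j: baseline 3, common time trend +1, no treatment.
y_i = {"t0": 5.0, "t1": 8.0}
y_j = {"t0": 3.0, "t1": 4.0}

between = y_i["t1"] - y_j["t1"]           # confounded by different baselines
within = y_i["t1"] - y_i["t0"]            # confounded by the time trend
did = within - (y_j["t1"] - y_j["t0"])    # recovers the causal effect

print(between, within, did)  # 4.0 3.0 2.0
```

Only the DID comparison strips out both the baseline difference and the common trend.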

Example: Does marriage increase the wage of men?

Many cross-sectional studies have shown that married men earn more. But is this effect causal? Probably not. Due to self-selection this effect might be spurious: high-ability men select themselves (or are selected) into marriage, and in addition high-ability men earn more. Since researchers usually have no measure of ability, there is potential for an omitted-variable bias (unobserved heterogeneity). I have constructed an artificial dataset where there is both selectivity and a causal effect:

. list id time wage marr, separator(6)
  (output omitted)

We observe four men (N = 4) for six years (T = 6). Wage varies a little around 1000, 2000, 3000, and 4000 respectively. The two high-wage men marry between year 3 and 4 (selectivity). This is indicated by "marr" jumping from 0 to 1. (This specification implies a lasting effect of marriage.) Note that with panel data we have two sources of variation: between and within persons.

twoway (scatter wage time, ylabel(0(1000)5000, grid) ymtick(500(1000)4500, grid) c(l)) (scatter wage time if marr == 1, c(l)), legend(label(1 "before marriage") label(2 "after marriage"))
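Since the listing above was lost, here is a hedged reconstruction of the dataset in Python (not Stata). It is a noise-free stand-in, and the assignment of the marrying men to ids 3 and 4 is an assumption; the author's actual data add small wage fluctuations, which is why his DID estimate later comes out at 483 rather than 500.

```python
import numpy as np

# Noise-free stand-in for the author's artificial data (assumption: the two
# high-wage men are ids 3 and 4): 4 men, 6 years, flat wages 1000..4000,
# marriage between year 3 and 4 with a lasting causal effect of +500.
T = 6
pid = np.repeat([1, 2, 3, 4], T)        # person id
time = np.tile(np.arange(1, T + 1), 4)  # year 1..6
base = np.repeat([1000.0, 2000.0, 3000.0, 4000.0], T)
marr = ((pid >= 3) & (time >= 4)).astype(float)
wage = base + 500 * marr
```

The later numerical sketches rebuild this same stand-in so that each is self-contained.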

(Figure: wage (EURO per month) over time for the four men, before and after marriage)

Identifying the Marriage Effect

Now, what is the effect of marriage in these data? Clearly there is self-selection: the high-wage men marry. In addition, however, there is a causal effect of marriage: after marriage there is a wage increase (whereas there is none for the "control group"). More formally, we use the difference-in-differences (DID) approach. First, compute for every man the mean wage before and after marriage (for unmarried men: before year 3.5 and after). Then, compute for every man the difference between the mean wage before and after marriage (the before-after difference), and average these differences separately for married and unmarried men. Finally, the difference between the before-after difference of married men and that of unmarried men is the causal effect. In this example, marriage causally increases earnings by 500. (At least this is what I wanted it to be; due to a "data construction error" the correct DID-estimate is 483. Do you find the "error"?) The DID-approach mimics what one would do with experimental data. In identifying the marriage effect we rely on a within-person comparison (the before-after difference). To rule out maturation or period effects we compare the within differences of married (treatment) and unmarried (control) men.

The Fundamental Problem of Non-Experimental Research

What would be the result of a cross-sectional regression at T = 4,

  y_i4 = β_0 + β_1 x_i4 + u_i4?

The resulting estimate (this is the difference of the mean wage between married and unmarried men) is highly misleading! It is biased because of unobserved heterogeneity (also called omitted-variable bias): unobserved ability is in the error term, therefore u_i4 and x_i4 are correlated. The central regression assumption, however, is that the X-variable and the error term are uncorrelated (exogeneity). Endogeneity (the X-variable correlates with the error term) results in biased regression estimates. Endogeneity is the fundamental problem of non-experimental research. It can be the consequence of three mechanisms: unobserved heterogeneity (self-selection), simultaneity, and measurement error (for more on this see below). The nature of the problem can be seen in the figure from above: the cross-sectional OLS estimator relies totally on a between-person comparison. This is misleading because persons are self-selected. In an experiment we would assign persons randomly. With cross-sectional data we could improve on this only if we had measured "ability". In the absence of such a measure our conclusions will be wrong. Due to the problem of unobserved heterogeneity the results of many cross-sectional studies are highly disputable.

What to Do?

The best thing would be to conduct an experiment. If this is not possible, collect at least longitudinal data. As we saw above, with panel data it is possible to identify the true effect even in the presence of self-selection! In the following, we will discuss several regression methods for panel data. The focus will be on showing how these methods succeed in identifying the true causal effect. We will continue with the data from our artificial example.

Pooled OLS

We pool the data and estimate an OLS regression (pooled OLS):

  y_it = β_0 + β_1 x_it + u_it.

The resulting estimate is the mean of the red points minus the mean of the green points.
This estimate is still heavily biased because of unobserved heterogeneity (u_it and x_it are correlated). This is due to the fact that pooled OLS also relies on a between comparison. Compared with the cross-sectional OLS the bias is lower, however, because pooled OLS also uses the within variation. Thus, panel data alone do not remedy the problem of unobserved heterogeneity! One has to apply special regression models.
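Both between-type estimates can be computed directly in Python (not Stata) on a noise-free stand-in for the artificial data (an assumption: flat wages 1000..4000, men 3 and 4 marrying between years 3 and 4 with a +500 effect); with a single dummy regressor the OLS slope is simply a difference in group means.

```python
import numpy as np

# Noise-free stand-in data (assumed reconstruction of the author's example):
T = 6
pid = np.repeat([1, 2, 3, 4], T)
time = np.tile(np.arange(1, T + 1), 4)
wage = np.repeat([1000.0, 2000.0, 3000.0, 4000.0], T)
marr = ((pid >= 3) & (time >= 4)).astype(float)
wage += 500 * marr  # true causal effect

# Cross-sectional OLS at T = 4: difference in mean wage, married vs. unmarried
cross = wage[(time == 4) & (marr == 1)].mean() - wage[(time == 4) & (marr == 0)].mean()

# Pooled OLS over all 24 person-years: again a difference in means
pooled = wage[marr == 1].mean() - wage[marr == 0].mean()

print(cross, pooled)  # 2500.0 and about 1833.3 -- both far above the true 500
```

The pooled estimate is less biased than the cross-sectional one because the pre-marriage (within) observations of the married men pull the "married" mean down, but it is still nowhere near 500.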

Error-Components Model

The following regression models are based on this modelling strategy: we decompose the error term into two components, a person-specific error α_i and an idiosyncratic error ε_it,

  u_it = α_i + ε_it.

The model is now (we omit the constant, because it would be collinear with α_i):

  y_it = β_1 x_it + α_i + ε_it.

The person-specific error does not change over time; every person has a fixed value on this latent variable (fixed effects). α_i represents person-specific, time-constant unobserved heterogeneity. In our example α_i could be unobserved ability (which is constant over the six years). The idiosyncratic error varies over individuals and time; it should fulfill the assumptions for standard OLS error terms. The assumption of pooled OLS is that x_it is uncorrelated with both α_i and ε_it.

First-Difference Estimator

With panel data we can "difference out" the person-specific error (T = 2):

  y_i2 = β_1 x_i2 + α_i + ε_i2
  y_i1 = β_1 x_i1 + α_i + ε_i1.

Subtracting the second equation from the first gives

  Δy_i = β_1 Δx_i + Δε_i,

where "Δ" denotes the change from t = 1 to t = 2. This is a simple cross-sectional regression equation in differences (without constant). β_1 can be estimated consistently by OLS if Δε_i is uncorrelated with Δx_i (first-difference (FD) estimator). The big advantage is that the fixed effects have been cancelled out. Therefore, we no longer need the assumption that α_i is uncorrelated with x_it: time-constant unobserved heterogeneity is no longer a problem. Differencing is also straightforward with more than two time periods. For T = 3 one could subtract period 1 from 2, and period 2 from 3. This estimator, however, is not efficient, because one could also subtract period 1 from 3 (this information is not used). In addition, with more than two time periods the problem of serially correlated Δε_it arises. OLS assumes that error terms across observations are uncorrelated (no autocorrelation). This assumption can easily be violated with multi-period panel data.
Then S.E.s will be biased. To remedy this one can use GLS or Huber-White sandwich estimators.

Example

We generate the first-differenced variables:

tsset id time                     /* tsset the data */
generate dwage = wage - L.wage    /* L. is the lag operator */
generate dmarr = marr - L.marr

Then we run an OLS regression (with no constant):

. regress dwage dmarr, noconstant
  (output omitted; Number of obs = 20)

Next with robust S.E.s (Huber-White sandwich estimator):

. regress dwage dmarr, noconstant cluster(id)
  (output omitted; Number of obs = 20, number of clusters (id) = 4)

We see that the FD-estimator identifies the true causal effect (almost). Note that the robust S.E. is much lower. However, the FD-estimator is inefficient because it uses only the wage observations immediately before and after a marriage to estimate the slope (i.e., two observations!). This can be seen in the following plot of the first-differenced data:

twoway (scatter dwage dmarr) (lfit dwage dmarr, estopts(noconstant)), legend(off) ylabel(-200(100)600, grid) xtitle(delta(marr)) ytitle(delta(wage))
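The same first-difference computation can be sketched in Python (not Stata) on a noise-free stand-in for the artificial data (an assumed reconstruction: flat wages 1000..4000, men 3 and 4 gaining +500 at marriage); the no-constant OLS slope in differences is Σ(Δx·Δy)/Σ(Δx²).

```python
import numpy as np

# Noise-free stand-in data (assumed reconstruction of the author's example):
T = 6
pid = np.repeat([1, 2, 3, 4], T)
time = np.tile(np.arange(1, T + 1), 4)
wage = np.repeat([1000.0, 2000.0, 3000.0, 4000.0], T)
marr = ((pid >= 3) & (time >= 4)).astype(float)
wage += 500 * marr

# First differences within each person (drops year 1, leaving 20 obs):
dwage = np.diff(wage.reshape(4, T), axis=1).ravel()
dmarr = np.diff(marr.reshape(4, T), axis=1).ravel()

# OLS slope without constant: beta = sum(dx * dy) / sum(dx^2)
beta_fd = (dmarr * dwage).sum() / (dmarr ** 2).sum()
print(beta_fd)  # 500.0
```

Note that dmarr is nonzero for only the two marriage transitions; all other differenced observations contribute nothing to the slope, which is exactly the inefficiency described above.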

(Figure: the first-differenced data, delta(wage) plotted against delta(marr), with the fitted regression line)

The intuition behind the FD-estimator is that it no longer uses the between-person comparison. It uses only within-person changes: if X changes, how much does Y change (within one person)? Therefore, in our example, unobserved ability differences between persons no longer bias the estimator.

Fixed-Effects Estimation

An alternative to differencing is the within transformation. We start from the error-components model:

  y_it = β_1 x_it + α_i + ε_it.

Average this equation over time for each i (between transformation):

  ȳ_i = β_1 x̄_i + α_i + ε̄_i.

Subtract the second equation from the first for each t (within transformation):

  y_it − ȳ_i = β_1 (x_it − x̄_i) + (ε_it − ε̄_i).

This model can be estimated by pooled OLS (fixed-effects (FE) estimator). The important thing is that again the α_i have disappeared. We no longer need the assumption that α_i is uncorrelated with x_it: time-constant unobserved heterogeneity is no longer a problem. What we do here is to "time-demean" the data. Again, only the within variation is left, because we subtract the between variation. But here all information is used; the within transformation is more efficient than differencing. Therefore, this estimator is also called the within estimator.

Example

We time-demean our data and run OLS:

egen mwage = mean(wage), by(id)

egen mmarr = mean(marr), by(id)
generate wwage = wage - mwage
generate wmarr = marr - mmarr

. regress wwage wmarr
  (output omitted)

The FE-estimator succeeds in identifying the true causal effect! However, OLS uses df = NT − k. This is wrong, since we used up another N degrees of freedom by time-demeaning; correct is df = N(T − 1) − k. xtreg takes care of this (and demeans automatically):

. xtreg wage marr, fe
  (output omitted; Number of obs = 24, number of groups = 4, obs per group: min = 6, avg = 6.0, max = 6)

The S.E. is somewhat larger. R²-within is the explained variance for the demeaned data; this is the one that you would want to report. By construction of our dataset the estimated person-specific error component (sigma_u) is much larger than the idiosyncratic one (sigma_e). (There is a constant in the output because Stata, after the within transformation, adds back the overall wage mean, which is 2500.) An intuitive feeling for what the FE-estimator does is given by this plot:

twoway (scatter wwage wmarr, jitter(2)) (lfit wwage wmarr), legend(off) ylabel(-400(100)400, grid) xtitle(demeaned(marr)) ytitle(demeaned(wage))
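The within computation can likewise be sketched in Python (not Stata) on a noise-free stand-in for the artificial data (an assumed reconstruction: flat wages 1000..4000, men 3 and 4 gaining +500 at marriage): time-demean both variables, then the slope is Σ(x̃·ỹ)/Σ(x̃²).

```python
import numpy as np

# Noise-free stand-in data (assumed reconstruction of the author's example):
T = 6
pid = np.repeat([1, 2, 3, 4], T)
time = np.tile(np.arange(1, T + 1), 4)
wage = np.repeat([1000.0, 2000.0, 3000.0, 4000.0], T)
marr = ((pid >= 3) & (time >= 4)).astype(float)
wage += 500 * marr

def demean(v):
    # within transformation: subtract each person's time mean
    m = v.reshape(4, T)
    return (m - m.mean(axis=1, keepdims=True)).ravel()

wwage, wmarr = demean(wage), demean(marr)
beta_fe = (wmarr * wwage).sum() / (wmarr ** 2).sum()
print(beta_fe)  # 500.0
```

For the two never-married men wmarr is identically zero, so they contribute nothing to either the numerator or the denominator: the estimate rests entirely on the treated men's before-after contrast.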

(Figure: the demeaned data, demeaned(wage) plotted against demeaned(marr), with the fitted regression line)

All wage observations of those who married contribute to the slope of the FE-regression. Basically, it compares the before-after wages of those who married. However, it does not use the observations of the "control group" (those who did not marry): these are at X = 0 and contribute nothing to the slope of the FE-regression.

Dummy Variable Regression (LSDV)

Instead of demeaning the data, one could include a dummy for every i and estimate the first equation from above by pooled OLS (least-squares-dummy-variables (LSDV) estimator). This also provides the FE-estimator (with correct test statistics)! In addition, we get estimates for the α_i, which may be of substantive interest.

. tabulate id, gen(pers)
. regress wage marr pers1-pers4, noconstant
  (output omitted)

The LSDV-estimator, however, is practical only when N is small.

Individual Slope Regressions

Another way to obtain the FE-estimator is to estimate a regression for every

individual (this is a kind of "random-coefficient model"). The mean of the slopes is the FE-estimator. In our example: the slopes for the two high-wage men are 500. The regressions for the two low-wage men are not defined, because X does not vary (again we see that the FE-estimator does not use the "control group"). That means the FE-estimator is 500, the true causal effect. With spagplot (an ado-file you can net search) it is easy to produce a plot with individual regression lines (the red curve is pooled OLS). This is called a "spaghetti plot":

spagplot wage marr if id 2, id(id) ytitle("EURO per month") xlabel(0 1) ylabel(0(1000)5000, grid) ymtick(500(1000)4500, grid) note("")

(Figure: spaghetti plot of wage (EURO per month) against marriage, with one regression line per individual)

Such a plot shows nicely how much pooled OLS is biased, because it also uses the between variation.

Restrictions

1. With FE-regressions we cannot estimate the effects of time-constant covariates: these are all cancelled out by the within transformation. This reflects the fact that panel data do not help to identify the causal effect of a time-constant covariate (estimates are only more efficient)! The "within logic" applies only with time-varying covariates.

2. Further, there must be some variation in X; otherwise we cannot estimate its effect. This is a problem if only a few observations show a change in X. For instance, estimating the effect of education on wages with panel data is difficult. Cross-sectional education effects are likely to be biased (ability bias). Panel methods would be ideal (cancelling out the unobserved fixed ability effect), but most workers show no change in education. The problem will show up in a huge S.E.

Random-Effects Estimation

Again we start from the error-components model (now with a constant):

  y_it = β_0 + β_1 x_it + α_i + ε_it.

Now it is assumed that the α_i are random variables (i.i.d. random effects) and that Cov(x_it, α_i) = 0. Then we obtain consistent estimates by using pooled OLS. However, we now have serially correlated error terms u_it, and S.E.s are biased. Using a pooled GLS estimator provides the random-effects (RE) estimator. It can be shown that the RE-estimator is obtained by applying pooled OLS to the data after the following transformation:

  y_it − θ ȳ_i = β_0 (1 − θ) + β_1 (x_it − θ x̄_i) + (1 − θ) α_i + (ε_it − θ ε̄_i),

where

  θ = 1 − sqrt( σ²_ε / (T σ²_α + σ²_ε) ).

If θ = 1 the RE-estimator is identical with the FE-estimator. If θ = 0 the RE-estimator is identical with the pooled-OLS estimator. Normally θ will be between 0 and 1: the RE-estimator mixes within and between estimators! If Cov(x_it, α_i) = 0 this is ok, it even increases efficiency. But if Cov(x_it, α_i) ≠ 0 the RE-estimator will be biased. The degree of the bias will depend on the magnitude of θ: if σ²_α >> σ²_ε then θ will be close to 1, and the bias of the RE-estimator will be low.

Example

. xtreg wage marr, re theta
  (output omitted; Number of obs = 24, number of groups = 4)

The RE-estimator works quite well; the bias is only 3. This is because θ is close to 1 in these data. The option theta includes the estimate of θ in the output. One could use a Hausman specification test on whether the RE-estimator is biased.
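The quasi-demeaning parameter θ and its two limiting cases can be sketched as a small Python function (a direct transcription of the formula above):

```python
import math

def theta(sigma_a2, sigma_e2, T):
    """RE quasi-demeaning parameter: 1 - sqrt(sigma_e^2 / (T*sigma_a^2 + sigma_e^2))."""
    return 1 - math.sqrt(sigma_e2 / (T * sigma_a2 + sigma_e2))

print(theta(0.0, 1.0, 6))  # 0.0 -> no person-specific variance: RE = pooled OLS
print(theta(1e6, 1.0, 6))  # close to 1 -> RE approaches the FE estimator
```

This makes the mixing property concrete: the larger the person-specific variance σ²_α relative to σ²_ε (and the longer the panel), the more the RE transformation behaves like full demeaning.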

The Hausman test, however, is based on strong assumptions which are usually not met in finite samples. Often it does not even work (as with our data).

FE- or RE-Modelling?

1. For most research problems one would suspect that Cov(x_it, α_i) ≠ 0, so the RE-estimator will be biased. Therefore, one should use the FE-estimator to get unbiased estimates.

2. The RE-estimator, however, provides estimates for time-constant covariates. Many researchers want to report effects of sex, race, etc. Therefore, they choose the RE-estimator over the FE-estimator. In most applications, however, the assumption Cov(x_it, α_i) = 0 will be wrong, and the RE-estimator will be biased (though the magnitude of the bias could be low). This risks throwing away the big advantage of panel data only to be able to write a paper on "The determinants of Y". To take full advantage of panel data the style of data analysis has to change: one should concentrate on the effects of (a few) time-varying covariates and use the FE-estimator consistently!

3. The RE-estimator is a special case of a parametric model for unobserved heterogeneity: we make distributional assumptions on the person-specific error term and conceive an estimation method that cancels the nuisance parameters. Generally, such models do not succeed in solving the problem of unobserved heterogeneity! In fact, they work only if there is "irrelevant" unobserved heterogeneity: Cov(x_it, α_i) = 0.

Further Remarks on Panel Regression

1. Though it is not possible to include time-constant variables in a FE-regression, it is possible to include their interactions with time-varying variables. E.g., one could include interactions of education and period effects. The regression coefficients would show how the return on education changed over the periods (compared to the reference period).

2. Unbalanced panels, where T differs over individuals, are no problem for the FE-estimator.

3.
With panel data there is always reason to suspect that the errors ε_it of a person i are correlated over time (autocorrelation). Stata provides xtregar to fit FE-models with AR(1) disturbances.

4. Attrition (individuals leave the panel in a systematic way) is seen as a big problem of panel data. However, an attrition process that is correlated with α_i does not bias FE-estimates! Only attrition that is correlated with ε_it does. In our example this would mean that attrition correlated with ability would not bias the results.

5. Panel data are a special case of "clustered samples". Other special cases are sibling data, survey data sampled by complex sampling designs (svy-regression), and multi-level data (hierarchical regression). Similar models are used in all these literatures.

6. Especially prominent in the multi-level literature is the "random-coefficient model" (regression slopes are allowed to differ between individuals). The panel analogue, where "time" is the independent variable, is often named the "growth curve model". Growth curve models are especially useful if you want to analyze how trajectories differ between groups. They come in three variants: the hierarchical linear model version, the MANOVA version, and the LISREL version. Essentially all these versions are RE-models. The better alternative is to use two-way FE-models (see below).

7. Our example investigates the "effect of an event". This is for didactical reasons. Nevertheless, panel analysis is especially appropriate for this kind of question (including treatment effects); questions on the effect of an event are prevalent in the social sciences. However, panel models also work if X is metric!

Problems with the FE-Estimator

The FE-estimator still rests on the assumption that Cov(x_it, ε_is) = 0 for all t and s. This is the assumption of (strict) exogeneity. If it is violated, we have an endogeneity problem: the independent variable and the idiosyncratic error term are correlated. Under endogeneity the FE-estimator will be biased: endogeneity in this sense is a problem even with panel data. Endogeneity could be produced by:
- systematic shocks (period effects) after X changed
- omitted variables (unobserved heterogeneity)
- random wage shocks that trigger the change in X (simultaneity)
- errors in reporting X (measurement error)

What can we do? The standard answer to endogeneity is to use IV-estimation (or structural equation modelling).

IV-Estimation

The IV-estimator (more generally: 2SLS, GMM) uses at least one instrument and identifying assumptions to get an unbiased estimator (xtivreg). The identifying assumptions are that the instrument correlates highly with X, but does not correlate with the error term. The latter assumption can never be tested!
Thus, all conclusions from IV-estimators rest on untestable assumptions. The same is true for another alternative which is widely used in this context: structural equation modelling (LISREL-models for panel data). Results from these models also rest heavily on untestable assumptions. Experience has shown that research fields where these methods have been used abundantly are full of contradictory studies. These methods have produced a big mess in social research! Therefore, do not use these methods!

Period Effects

It might be that after period 3 everybody gets a wage increase. Such a "systematic" shock would correlate with "marriage" and therefore introduce endogeneity; the FE-estimator would be biased. But there is an intuitive solution to remedy this. The problem with the FE-estimator is that it disregards the information contained in the control group. By introducing fixed period effects (two-way FE-model), only within variation that is above the period effect (time trend) is taken into account. We modify our data so that all men get +500 at t = 4. Using the DID-estimator we would no longer conclude that marriage has a causal effect.

(Figure: wage (EURO per month) over time, with the period shock at t = 4, before and after marriage)

Nevertheless, the FE-estimator shows 500. This problem can, however, be solved by including dummies for the time periods (waves!). In fact, including period dummies in a FE-model mimics the DID-estimator (see below). Thus, it seems to be a good idea to always include period dummies in a FE-regression!

. tab time, gen(t)
. xtreg wage3 marr t2-t6, fe
  (output omitted; Number of obs = 24, number of groups = 4)

Unobserved Heterogeneity

Remember, time-constant unobserved heterogeneity is no problem for the FE-estimator. But time-varying unobserved heterogeneity is. The hope, however, is that most omitted variables are time-constant (especially when T is not too large).

Simultaneity

We modify our data so that the high-ability men get +500 at T = 3. They marry in reaction to this wage increase. Thus, causality runs the other way around: wage increases trigger a marriage (simultaneity).

(Figure: wage (EURO per month) over time, with the wage increase at T = 3 preceding marriage)

The FE-estimator now gives the wrong result:

. xtreg wage2 marr, fe
  (output omitted; Number of obs = 24, number of groups = 4)

An intuitive idea comes up when looking at the data: the problem is due to the fact that the FE-estimator uses all information before marriage. The FE-estimator would be unbiased if it used only the wage information immediately before marriage. Therefore, the first-difference estimator will give the correct result. A properly modified DID-estimator would also do the job. Thus, by using the appropriate "within observation window" one could remedy the simultaneity problem. This "hand-made" solution will, however, be impractical with most real datasets (see, however, the remarks below).
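The simultaneity scenario can be sketched numerically in Python (not Stata) on a noise-free stand-in for the artificial data (an assumed reconstruction; here the +500 shock hits at T = 3 and marriage has no causal effect at all): the within estimator is pulled away from zero, while the first-difference estimator, which only uses the change at the marriage year, correctly finds nothing.

```python
import numpy as np

# Noise-free stand-in data: wage shock at T = 3 triggers marriage at T = 4;
# there is NO causal marriage effect in wage2.
T = 6
pid = np.repeat([1, 2, 3, 4], T)
time = np.tile(np.arange(1, T + 1), 4)
base = np.repeat([1000.0, 2000.0, 3000.0, 4000.0], T)
marr = ((pid >= 3) & (time >= 4)).astype(float)
wage2 = base + 500 * ((pid >= 3) & (time >= 3))

def demean(v):
    m = v.reshape(4, T)
    return (m - m.mean(axis=1, keepdims=True)).ravel()

# Within (FE) estimator: biased away from the true 0
fe = (demean(marr) * demean(wage2)).sum() / (demean(marr) ** 2).sum()

# First-difference estimator: uses only the change at the marriage year
dwage2 = np.diff(wage2.reshape(4, T), axis=1).ravel()
dmarr = np.diff(marr.reshape(4, T), axis=1).ravel()
fd = (dmarr * dwage2).sum() / (dmarr ** 2).sum()

print(round(fe, 1), fd)  # 333.3 0.0
```

On this stand-in, FE wrongly attributes about a third of the pre-marriage shock to marriage, while FD returns exactly zero because the wage no longer changes at the marriage transition itself.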

Measurement Errors

Measurement errors in X generally produce endogeneity. If the measurement errors are uncorrelated with the true, unobserved values of X, then β_1 is biased towards zero (attenuation bias). We modify our data such that person 1 erroneously reports a marriage at T = 5 and person 4 "forgets" the first year of marriage. Everything else remains as above, i.e. the true marriage effect is 500.

(Figure: wage (EURO per month) over time, with the misreported marriage indicator)

. xtreg wage marr1, fe
  (output omitted; Number of obs = 24, number of groups = 4)

The result is a biased FE-estimator of 300. Fortunately, the bias is downwards (a conservative estimate). Unfortunately, with more X-variables the direction of the bias is unknown. In fact, compared with pooled OLS the bias due to measurement errors is amplified by using FD- or FE-estimators, because taking the difference of two unreliable measures generally produces an even more unreliable measure. On the other side, pooled OLS suffers from bias due to unobserved heterogeneity. Simulation studies show that generally the latter bias dominates. The suggestion is, therefore, to use within estimators: unobserved heterogeneity is a "more important" problem than measurement error.

Difference-in-Differences Estimator

Probably you have asked yourselves: why do we not simply use the DID-estimator? After all, in the beginning we saw that the DID-estimator works. The answer is that with our artificial data there is really no point in using the within estimator; DID would do the job. With real datasets, however, the DID-estimator is not so straightforward: the problem is how to find an appropriate "control group". This is not a trivial problem. First, I will show how one can obtain the DID-estimator by a regression. One has to construct a dummy indicating "treatment" (treat); in our case the men who married are in the treatment group, the others are in the control group. A second dummy indicates the periods after "treatment", i.e. marriage (after; note that after has to be defined also for the control group). In addition, one needs the multiplicative interaction term of these two dummies. Then run a regression with these three variables. The coefficient of the interaction term gives the DID-estimator:

. gen treat = id >= 3
. gen after = time >= 4
. gen aftertr = after*treat
. regress wage after treat aftertr
  (output omitted)

The DID-estimator says that a marriage increases the wage by 483. This is less than the 500 which we thought until now was correct. But it is the "true" effect of a marriage, because due to a "data construction error" there is an increase of 17 in the control group after T = 3 (reflected in the coefficient of after). Thus, were the FE-estimates reported above wrong? Yes. The problem of the FE-estimator, as we saw at several places, is that it does not use the information contained in the control group. But meanwhile we know how to deal with this problem: we include period dummies (this is a non-parametric variant of a growth curve model):

. xtreg wage marr t2-t6, fe
  (output omitted)
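Both estimators can be sketched in Python (not Stata) on a noise-free stand-in for the artificial data (an assumed reconstruction without the author's data-construction error, so both come out at exactly 500): the DID dummy regression, and the two-way FE regression implemented as LSDV with person and period dummies.

```python
import numpy as np

# Noise-free stand-in data (assumed reconstruction of the author's example):
T = 6
pid = np.repeat([1, 2, 3, 4], T)
time = np.tile(np.arange(1, T + 1), 4)
wage = np.repeat([1000.0, 2000.0, 3000.0, 4000.0], T)
marr = ((pid >= 3) & (time >= 4)).astype(float)
wage += 500 * marr

# DID via regression: wage on constant, after, treat, and their interaction
treat = (pid >= 3).astype(float)
after = (time >= 4).astype(float)
X_did = np.column_stack([np.ones(4 * T), after, treat, after * treat])
b_did = np.linalg.lstsq(X_did, wage, rcond=None)[0]

# Two-way FE via LSDV: marr plus person dummies plus period dummies t2..t6
person_d = [(pid == i).astype(float) for i in (1, 2, 3, 4)]
period_d = [(time == k).astype(float) for k in range(2, T + 1)]
X_fe = np.column_stack([marr] + person_d + period_d)
b_fe = np.linalg.lstsq(X_fe, wage, rcond=None)[0]

print(b_did[3], b_fe[0])  # interaction and marr coefficient, both about 500
```

The interaction coefficient and the two-way FE marriage coefficient coincide, which is the equivalence discussed next.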

The (two-way) FE-estimator provides the same answer as the DID-estimator! Thus it is more a matter of taste whether you prefer the DID-estimator or the two-way FE-estimator. Two points in favor of the FE-estimator: With real data the FE-estimator is more straightforward to apply, because you do not need to construct a control group. And, as the example shows, the S.E. of the DID-estimator is much larger; this is because there is still "between variance" unaccounted for, i.e. within the two groups. But the DID-estimator also has an advantage: by appropriately constructing the control group, one can deal with endogeneity problems. Before using the DID-estimator one has to construct a control group: for every person marrying at T = t, find another one that did not marry up to t and afterwards. Ideally this "match" should be a "statistical twin" concerning time-varying characteristics, e.g. the wage career up to t (there is no need to match on time-constant variables, because the within logic is applied). This procedure (DID-matching estimator) would eliminate the simultaneity bias in the example from above! (Basically this is a combination of a matching estimator and a within estimator, the two most powerful methods for estimating causal effects from non-experimental data that are currently available.) Usually matching is done via the propensity score, but in the case of matching "life courses" I would suggest optimal matching (Levenshtein distance).

Dynamic Panel Models

All panel data models are dynamic insofar as they exploit the longitudinal nature of panel data. However, there is a distinction in the literature between "static" and "dynamic" panel data models. Static models are those we have discussed so far. Dynamic models include a lagged dependent variable on the right-hand side of the equation. A widely used modelling approach is:

  y_it = ρ y_i,t−1 + β_1 x_it + α_i + ε_it.
Introducing a lagged dependent variable complicates estimation very much, because y_i,t−1 is correlated with the error term(s). Under random effects this is due to the presence of α_i at all t. Under fixed effects, the within-transformed y_i,t−1 is correlated with the within-transformed error. Therefore, both the FE- and the RE-estimator will be biased. If y_i,t−1 and x_it are correlated (which generally is the case, because x_it and α_i are correlated), then the estimates of both γ and β₁ will be biased. Therefore, IV-estimators have to be used (e.g., xtabond). As mentioned above, this may work in theory, but in practice we do not know. Thus, with dynamic panel models there is no way to use "simple" estimation techniques like a FE-estimator. You always have to use IV-techniques or even LISREL. Therefore, the main advantage of panel data - alleviating the problem of unobserved heterogeneity by "simple" methods - is lost if one uses dynamic panel

models! So why bother with "dynamic" panel models at all?
- These are the "classic" methods for panel data analysis. Already in the 1940s Paul Lazarsfeld analyzed turnover tables, a method that was later generalized to the "cross-lagged panel model". The latter was once believed to be a panacea for the analysis of cause and effect. Meanwhile several authors have concluded that it is a "useless" method for identifying causal effects.
- You may have substantive interest in the estimate of γ, the "stability" effect. This, however, is not the case in "analyzing the effects of events" applications. I suppose there are not many applications where you really have a theoretical interest in the stability effect. (The fact that many variables tend to be very similar from period to period is no justification for dynamic models. This stability is also captured in static models, by α_i.)
- Including y_i,t−1 is another way of controlling for unobserved heterogeneity. This is true, but this way of controlling for unobserved heterogeneity is clearly inferior to within estimation (many more untestable assumptions are needed).
- Often it is argued that static models are biased in the presence of measurement error, regression toward the mean, etc. This is true, as we discussed above (endogeneity). However, dynamic models also have problems with endogeneity. Both with static and dynamic models one has to use IV-estimation under endogeneity. Thus, it is not clear why these arguments should favor dynamic models.

Our example: the pooled-OLS-, RE-, and FE-estimators of the marriage effect in a model with lagged wage are 125, 124, and 495, respectively. All are biased. So I can see no sense in using a "dynamic" panel model.

Conditional Fixed-Effects Models

The FE-methodology also applies to other regression models. With non-linear regression models, however, it is not possible to cancel the α_i by demeaning the data. They enter the likelihood as "nuisance parameters".
However, if there exists a sufficient statistic that allows the fixed effects to be conditioned out of the likelihood, the FE-model is estimable nevertheless (conditional likelihood). This is not the case for all non-linear regression models. It is the case for count data regression models (xtpoisson, xtnbreg) and for logistic regression. For other regression models one could estimate unconditional FE-models by including person dummies. Unconditional FE-estimates are, however, biased. The bias gets smaller as T becomes larger.

Fixed-Effects Logit (Conditional Logit)

This is a conventional logistic regression model with person-specific fixed effects α_i:

    P(y_it = 1) = exp(β₁·x_it + α_i) / (1 + exp(β₁·x_it + α_i)).

Estimation is done by conditional likelihood. The sufficient statistic is Σ_t y_it, the number of 1s. It is intuitively clear that with increasing α_i the number of 1s on the dependent variable should also increase. Again, we profit from the big advantage of the FE-methodology: estimates of β₁ are unbiased even in the presence of unobserved heterogeneity (if it is time-constant). It is not possible to estimate the effects of time-constant covariates. Finally, persons who have only 0s or only 1s on the dependent variable are dropped, because they provide no information for the likelihood. This can dramatically reduce your dataset! Thus, to use a FE-logit you need data with sufficient variance both on X and Y, i.e. generally you will need panel data with many waves!

Remark: If you have multi-episode event histories in discrete time, you can analyze these data with the FE-logit. Thus, this model provides FE-methodology for event-history analysis!

Example

We investigate whether a wage increase raises the probability of further education. These are the (artificial) data:

(Figure: wage in EURO per month plotted over time; person-years with and without further education are marked.)

We use the same wage data as above. High-wage people self-select into further education (because their job is so demanding). At T = 3 the high-wage people get a wage increase. This reduces their probability of participating in further education a little.

Pooled-logit regression says that with increasing wage the probability of further education (feduc) increases. This is due to the fact that pooled logit uses the between variation.

. logit feduc wage

Logit estimates                                   Number of obs   =        24

                                                  LR chi2(1)      =      6.26
                                                  Prob > chi2     =       ...
Log likelihood = ...                              Pseudo R2       =       ...

------------------------------------------------------------------------------
       feduc |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        wage |        ...         ...
       _cons |        ...         ...
------------------------------------------------------------------------------

A FE-logit provides the correct answer: a wage increase reduces the probability of further education.

. xtlogit feduc wage, fe
note: multiple positive outcomes within groups encountered.
note: 1 group (6 obs) dropped due to all positive or all negative outcomes.

Conditional fixed-effects logistic regression     Number of obs      =      18
Group variable (i): id                            Number of groups   =       3

                                                  Obs per group: min =       6
                                                                 avg =     6.0
                                                                 max =       6

                                                  LR chi2(1)         =    1.18
Log likelihood = ...                              Prob > chi2        =     ...

------------------------------------------------------------------------------
       feduc |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        wage |        ...         ...
------------------------------------------------------------------------------

The first note indicates that at least one person has Y = 1 more than once. This is good, because it helps in identifying the fixed effects. The second note says that one person has been dropped, because it had Y = 0 in all waves. Note that for these effects we cannot calculate probability effects, because we would have to plug in values for the α_i. We have to use the sign or the odds interpretation.
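To make the conditioning argument concrete, here is a pure-Python sketch of the conditional likelihood: condition on each person's number of 1s and sum over every response pattern with that total. The persons, wages, and the crude grid maximizer below are my own illustration (not the slides' dataset; real software uses Newton-Raphson):

```python
from itertools import combinations
from math import exp, log

def cond_loglik(beta, persons):
    """Conditional log-likelihood of the FE-logit.
    persons: list of (x, y) pairs; x, y are equal-length sequences."""
    ll = 0.0
    for x, y in persons:
        k = sum(y)
        if k == 0 or k == len(y):
            continue  # all-0 or all-1 persons carry no information (they are dropped)
        observed = sum(xt for xt, yt in zip(x, y) if yt == 1)
        # sum over every way of placing k ones among the T periods
        denom = sum(exp(beta * sum(x[i] for i in idx))
                    for idx in combinations(range(len(y)), k))
        ll += beta * observed - log(denom)
    return ll

def fe_logit_beta(persons, lo=-5.0, hi=5.0, steps=2001):
    # crude maximizer: grid search over beta
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda b: cond_loglik(b, persons))

# made-up data in the spirit of the example: within persons,
# further education (y = 1) tends to fall in the lower-wage periods
persons = [
    ([10, 10, 12, 12], [1, 0, 1, 0]),
    ([11, 11, 14, 14], [1, 0, 0, 0]),
    ([9, 9, 9, 9],     [0, 0, 0, 0]),  # only 0s: dropped, like in the Stata run
]
beta_hat = fe_logit_beta(persons)
print(beta_hat)           # negative here: higher wage, less further education
print(exp(beta_hat))      # odds interpretation: factor change in the odds per unit wage
```

Note that α_i never appears in the likelihood, which is exactly why no probability effects can be computed afterwards; only the sign and the odds factor exp(β̂) are interpretable.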
