Comparing Group Means: The T-test and One-way ANOVA Using STATA, SAS, and SPSS


Comparing Group Means: The T-test and One-way ANOVA Using STATA, SAS, and SPSS
Hun Myoung Park

This document summarizes the methods of comparing group means and illustrates how to conduct the t-test and one-way ANOVA using STATA 9.0, SAS 9.1, and SPSS 13.0.

1. Introduction
2. Univariate Samples
3. Paired (Dependent) Samples
4. Independent Samples with Equal Variances
5. Independent Samples with Unequal Variances
6. One-way ANOVA, GLM, and Regression
7. Conclusion

1. Introduction

The t-test and analysis of variance (ANOVA) compare group means. The mean of a variable to be compared should be substantively interpretable. A t-test may examine gender differences in average salary or racial (white versus black) differences in average annual income. The left-hand side (LHS) variable to be tested should be interval or ratio, whereas the right-hand side (RHS) variable should be binary (categorical).

1.1 T-test and ANOVA

While the t-test is limited to comparing the means of two groups, one-way ANOVA can compare more than two groups. Therefore, the t-test is considered a special case of one-way ANOVA. These analyses do not, however, necessarily imply any causality (i.e., a causal relationship between the left-hand and right-hand side variables). Table 1 compares the t-test and one-way ANOVA.

Table 1. Comparison between the T-test and One-way ANOVA
                      T-test                                 One-way ANOVA
LHS (Dependent)       Interval or ratio variable             Interval or ratio variable
RHS (Independent)     Binary variable with only two groups   Categorical variable
Null Hypothesis       µ₁ = µ₂                                µ₁ = µ₂ = µ₃ = ...
Prob. Distribution*   T distribution                         F distribution
* In the case of one degree of freedom on the numerator, F = t².

The t-test assumes that samples are randomly drawn from normally distributed populations with unknown population means; otherwise, their means are no longer the best measures of central tendency and the t-test will not be valid. The Central Limit Theorem says, however, that

the distributions of ȳ₁ and ȳ₂ are approximately normal when N is large. When n₁ + n₂ ≥ 30, in practice, you do not need to worry too much about the normality assumption. You may numerically test the normality assumption using the Shapiro-Wilk W (N ≤ 2000), Shapiro-Francia W (N ≤ 5000), Kolmogorov-Smirnov D (N > 2000), and Jarque-Bera tests. If N is small and the null hypothesis of normality is rejected, you may try such nonparametric methods as the Kolmogorov-Smirnov test, Kruskal-Wallis test, Wilcoxon rank-sum test, or log-rank test, depending on the circumstances.

1.2 T-test in SAS, STATA, and SPSS

In STATA, the .ttest and .ttesti commands are used to conduct t-tests, whereas the .anova and .oneway commands perform one-way ANOVA. SAS has the TTEST procedure for the t-test, but the UNIVARIATE and MEANS procedures also have options for t-tests. SAS provides various procedures for the analysis of variance, such as the ANOVA, GLM, and MIXED procedures. The ANOVA procedure can handle balanced data only, while GLM and MIXED can analyze either balanced or unbalanced data (having the same or different numbers of observations across groups). However, unbalanced data do not cause any problems in the t-test and one-way ANOVA. In SPSS, the T-TEST, ONEWAY, and UNIANOVA commands are used to perform the t-test and one-way ANOVA. Table 2 summarizes the STATA commands, SAS procedures, and SPSS commands that are associated with the t-test and one-way ANOVA.

Table 2. Related Procedures and Commands in STATA, SAS, and SPSS
                      STATA 9.0 SE                 SAS 9.1         SPSS 13.0
Normality Test        .sktest; .swilk; .sfrancia   UNIVARIATE      EXAMINE
Equal Variance Test   .oneway                      TTEST           T-TEST
Nonparametric Test    .ksmirnov; .kwallis          NPAR1WAY        NPAR TESTS
T-test                .ttest                       TTEST; MEANS    T-TEST
ANOVA                 .anova; .oneway              ANOVA           ONEWAY
GLM                   *                            GLM; MIXED      UNIANOVA
* The STATA .glm command is not used for the t-test, but for the generalized linear model.

1.3 Data Arrangement

There are two types of data arrangement for t-tests (Figure 1). The first data arrangement has a variable to be tested and a grouping variable to classify groups (0 or 1). The second, appropriate especially for paired samples, has two variables to be tested. The two variables in this type are not, however, necessarily paired or balanced. SAS and SPSS prefer the first data arrangement, whereas STATA can handle either type flexibly. Note that the numbers of observations across groups are not necessarily equal.
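As a quick reference, the following is a minimal STATA sketch of these normality checks and nonparametric alternatives. It assumes the smoking data set used throughout this document (variables lung and smoke) is in memory; .sktest, .swilk, .sfrancia, .ksmirnov, .ranksum, and .kwallis are standard STATA commands.

. sktest lung                    /* skewness-kurtosis normality test */
. swilk lung                     /* Shapiro-Wilk W (N <= 2,000) */
. sfrancia lung                  /* Shapiro-Francia W' (N <= 5,000) */
. ksmirnov lung, by(smoke)       /* two-sample Kolmogorov-Smirnov test */
. ranksum lung, by(smoke)        /* Wilcoxon rank-sum (Mann-Whitney) test */
. kwallis lung, by(smoke)        /* Kruskal-Wallis test (two or more groups) */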

Figure 1. Two Types of Data Arrangement

   Type I                       Type II
   Variable   Group             Variable1   Variable2
   x          0                 x           y
   x          1                 x           y
   ...        ...               ...         ...

The data set used here is adopted from J. F. Fraumeni's study on cigarette smoking and cancer (Fraumeni 1968). The data are per capita numbers of cigarettes sold in 43 states and the District of Columbia in 1960, together with death rates per hundred thousand people from various forms of cancer. Two variables were added to categorize states into two groups. See the appendix for the details.

2. Univariate Samples

The univariate-sample or one-sample t-test determines whether an unknown population mean µ differs from a hypothesized value c that is commonly set to zero: H0: µ = c. The t statistic follows Student's t probability distribution with n − 1 degrees of freedom,

   t = (ȳ − c) / s_ȳ ~ t(n − 1),

where ȳ is the mean of the variable to be tested and n is the number of observations.¹ Suppose you want to test whether the population mean of the death rates from lung cancer is 20 per 100,000 people at the .01 significance level. Note that the default significance level used in most software is the .05 level.

2.1 T-test in STATA

The .ttest command conducts t-tests in an easy and flexible manner. For a univariate sample test, the command requires that a hypothesized value be explicitly specified. The level() option indicates the confidence level as a percentage; the 99 percent confidence level is equivalent to the .01 significance level.

. ttest lung=20, level(99)

One-sample t test
Variable    Obs    Mean    Std. Err.    Std. Dev.    [99% Conf. Interval]
lung
mean = mean(lung)                              t =
Ho: mean = 20                                  degrees of freedom = 43
Ha: mean < 20          Ha: mean != 20          Ha: mean > 20
Pr(T < t) =            Pr(|T| > |t|) =         Pr(T > t) =

STATA first lists descriptive statistics of the variable lung for the 44 observations. The t statistic is computed as (ȳ − 20) / s_ȳ, and the degrees of freedom are 43 (= 44 − 1). There are three t-tests at the bottom of the output above: the first and third are one-tailed tests, whereas the second is a two-tailed test. The t statistic and its large p-value do not reject the null hypothesis that the population mean of the death rate from lung cancer is 20 at the .01 level. The mean of the death rate may be 20 per 100,000 people. Note that the hypothesized value 20 falls into the 99 percent confidence interval.²

¹ ȳ = Σy_i / n, s² = Σ(y_i − ȳ)² / (n − 1), and the standard error s_ȳ = s / √n.
² The 99 percent confidence interval of the mean is ȳ ± t_(α/2) s_ȳ = ȳ ± 2.695 × .6374, where 2.695 is the critical value with 43 degrees of freedom at the .01 level in the two-tailed test.
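To make the confidence-interval arithmetic in footnote 2 concrete, the following is a minimal STATA sketch that reproduces the interval and the t statistic by hand. It assumes the smoking data set with the variable lung is in memory and the hypothesized value 20; summarize, r(), scalar, invttail(), and display are standard STATA features.

. quietly summarize lung
. scalar n  = r(N)
. scalar m  = r(mean)
. scalar se = r(sd)/sqrt(n)
. scalar tc = invttail(n-1, .005)      /* critical value for a 99% two-tailed interval */
. display "99% CI: " m - tc*se "  to  " m + tc*se
. display "t statistic for Ho: mean = 20: " (m - 20)/se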

5 , The Trustees of Indiana University Comparing Group Means: 5 If you just have the aggregate data (i.e., the number of observations, mean, and standard deviation of the sample), use the.ttesti command to replicate the t-test above. Note the hypothesized value is specified at the end of the summary statistics.. ttesti , level(99). T-test Using the SAS TTEST Procedure The TTEST procedure conducts various types of t-tests in SAS. The H0 option specifies a hypothesized value, whereas the ALPHA indicates a significance level. If omitted, the default values zero and.05 respectively are assumed. PROC TTEST H0=0 ALPHA=.01 DATA=masil.smoking; VAR lung; RUN; The TTEST Procedure Statistics Lower CL Upper CL Lower CL Upper CL Variable N Mean Mean Mean Std Dev Std Dev Std Dev Std Err lung T-Tests Variable DF t Value Pr > t lung The TTEST procedure reports descriptive statistics followed by a one-tailed t-test. You may have a summary data set containing the values of a variable (lung) and their frequencies (count). The FREQ option of the TTEST procedure provides the solution for this case. PROC TTEST H0=0 ALPHA=.01 DATA=masil.smoking; VAR lung; FREQ count; RUN;.3 T-test Using the SAS UNIVARIATE and MEANS Procedures The SAS UNIVARIATE and MEANS procedures also conduct a t-test for a univariate-sample. The UNIVARIATE procedure is basically designed to produces a variety of descriptive statistics of a variable. Its MU0 option tells the procedure to perform a t-test using the hypothesized value specified. The VARDEF=DF specifies a divisor (degrees of freedom) used in

6 , The Trustees of Indiana University Comparing Group Means: 6 computing the variance (standard deviation). 3 The NORMAL option examines if the variable is normally distributed. PROC UNIVARIATE MU0=0 VARDEF=DF NORMAL ALPHA=.01 DATA=masil.smoking; VAR lung; RUN; The UNIVARIATE Procedure Variable: lung Moments N 44 Sum Weights 44 Mean Sum Observations Std Deviation Variance Skewness Kurtosis Uncorrected SS Corrected SS Coeff Variation Std Error Mean Basic Statistical Measures Location Variability Mean Std Deviation 4.81 Median Variance Mode. Range Interquartile Range Tests for Location: Mu0=0 Test -Statistic p Value Student's t t Pr > t Sign M 1 Pr >= M Signed Rank S Pr >= S Tests for Normality Test --Statistic p Value Shapiro-Wilk W Pr < W Kolmogorov-Smirnov D Pr > D > Cramer-von Mises W-Sq Pr > W-Sq >0.500 Anderson-Darling A-Sq Pr > A-Sq >0.500 Quantiles (Definition 5) Quantile Estimate 100% Max The VARDEF=N uses N as a divisor, while VARDEF=WDF specifies the sum of weights minus one.

7 , The Trustees of Indiana University Comparing Group Means: 7 99% % % % Q % Median % Q Quantiles (Definition 5) Quantile Estimate 10% % % % Min Extreme Observations -----Lowest Highest---- Value Obs Value Obs The third block of the output above reports a t statistic and its p-value. The fourth block contains several statistics of normality test. Since N is less than,000, you should read the Shapiro-Wilk W, which suggests that lung is normally distributed (p<.535) The MEANS procedure also conducts t-tests using the T and PROBT options that request the t statistic and its two-tailed p-value. The CLM option produces the two-tailed confidence interval (or upper and lower limits). The MEAN, STD, and STDERR respectively print the sample mean, standard deviation, and standard error. PROC MEANS MEAN STD STDERR T PROBT CLM VARDEF=DF ALPHA=.01 DATA=masil.smoking; VAR lung; RUN; The MEANS Procedure Analysis Variable : lung Lower 99% Upper 99% Mean Std Dev Std Error t Value Pr > t CL for Mean CL for Mean ƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒ < ƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒ

The MEANS procedure does not, however, have an option to set the hypothesized value to anything other than zero. Thus, the null hypothesis here is that the population mean of the death rate from lung cancer is zero. The t statistic is the sample mean divided by its standard error. The large t statistic and small p-value reject this null hypothesis, a conclusion consistent with that of the UNIVARIATE procedure.

2.4 T-test in SPSS

SPSS has the T-TEST command for t-tests. The /TESTVAL subcommand specifies the value with which the sample mean is compared, whereas /VARIABLES lists the variables to be tested. Like STATA, SPSS specifies a confidence level rather than a significance level, in the /CRITERIA=CI() subcommand.

T-TEST
  /TESTVAL = 20
  /VARIABLES = lung
  /MISSING = ANALYSIS
  /CRITERIA = CI(.99).

3. Paired (Dependent) Samples

When two variables are not independent, but paired, the difference of these two variables, d_i = y_1i − y_2i, is treated as if it were a single sample. This test is appropriate for pre-post treatment responses. The null hypothesis is that the true mean difference of the two variables is D₀: H0: µ_d = D₀.⁴ The difference is typically assumed to be zero unless explicitly specified.

3.1 T-test in STATA

In order to conduct a paired sample t-test, you need to list two variables separated by an equal sign. The interpretation of the t-test remains almost unchanged. The t statistic at 35 degrees of freedom does not reject the null hypothesis that the difference is zero.

. ttest pre=post0, level(95)

Paired t test
Variable    Obs    Mean    Std. Err.    Std. Dev.    [95% Conf. Interval]
pre
post0
diff
mean(diff) = mean(pre - post0)                    t =
Ho: mean(diff) = 0                                degrees of freedom = 35
Ha: mean(diff) < 0     Ha: mean(diff) != 0     Ha: mean(diff) > 0
Pr(T < t) =            Pr(|T| > |t|) =         Pr(T > t) =

Alternatively, you may first compute the difference between the two variables and then conduct a one-sample t-test. Note that the default confidence level, level(95), can be omitted.

. gen d = pre - post0
. ttest d=0

3.2 T-test in SAS

In the TTEST procedure, you have to use the PAIRED statement instead of the VAR statement. For the output of the following procedure, refer to the end of this section.

PROC TTEST DATA=temp.drug;
  PAIRED pre*post0;
RUN;

⁴ t = (d̄ − D₀) / s_d̄ ~ t(n − 1), where d̄ = Σd_i / n, s_d² = Σ(d_i − d̄)² / (n − 1), and s_d̄ = s_d / √n.

The PAIRED statement provides various ways of comparing variables using asterisk (*) and colon (:) operators. The asterisk requests comparisons between each variable on the left with each variable on the right. The colon requests comparisons between the first variable on the left and the first on the right, the second on the left and the second on the right, and so forth. Consider the following examples.

PROC TTEST;
  PAIRED pre:post0;
  PAIRED (a b)*(c d);       /* Equivalent to PAIRED a*c a*d b*c b*d; */
  PAIRED (a b):(c d);       /* Equivalent to PAIRED a*c b*d; */
  PAIRED (a1-a10)*(b1-b10);
RUN;

The first PAIRED statement is the same as PAIRED pre*post0. The second and the third PAIRED statements contrast the differences between the asterisk and colon operators. The hyphen (-) operator in the last statement indicates a1 through a10 and b1 through b10. Let us consider an example of the PAIRED statement.

PROC TTEST DATA=temp.drug;
  PAIRED (pre)*(post0-post1);
RUN;

The TTEST Procedure

Statistics
                          Lower CL           Upper CL   Lower CL            Upper CL
Difference           N    Mean      Mean     Mean       Std Dev   Std Dev   Std Dev   Std Err
pre - post0
pre - post1

T-Tests
Difference           DF    t Value    Pr > |t|
pre - post0
pre - post1

The first t statistic, for pre versus post0, is identical to that of the previous section. The second, for pre versus post1, rejects the null hypothesis of no mean difference at the .01 level. In order to use the UNIVARIATE and MEANS procedures, the difference between two paired variables should be computed in advance.

DATA temp.drug;
  SET temp.drug;
  d1 = pre - post0;
  d2 = pre - post1;
RUN;

PROC UNIVARIATE MU0=0 VARDEF=DF NORMAL;
  VAR d1 d2;
RUN;

PROC MEANS MEAN STD STDERR T PROBT CLM;
  VAR d1 d2;
RUN;

PROC TTEST ALPHA=.05;
  VAR d1 d2;
RUN;

3.3 T-test in SPSS

In SPSS, the PAIRS subcommand indicates a paired sample t-test.

T-TEST PAIRS = pre post0
  /CRITERIA = CI(.95)
  /MISSING = ANALYSIS.
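Since the paired test treats the difference as a single sample, the normality assumption discussed in the introduction applies to that difference rather than to the original variables. The following is a minimal STATA sketch, assuming the drug data with variables pre and post0 are in memory; .swilk and .ttest are standard commands.

. gen d = pre - post0
. swilk d                    /* Shapiro-Wilk normality test on the difference */
. ttest d = 0, level(95)     /* equivalent to the paired test above */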

4. Independent Samples with Equal Variances

You should check three assumptions first when testing the mean difference of two independent samples. First, the samples are drawn from normally distributed populations with unknown parameters. Second, the two samples are independent in the sense that they are drawn from different populations and/or the elements of one sample are not related to those of the other sample. Finally, the population variances of the two groups, σ₁² and σ₂², are equal.⁵ If any one of these assumptions is violated, the t-test is not valid. The example here compares mean death rates from lung cancer between smokers and nonsmokers. Let us begin by discussing the equal variance assumption.

4.1 F Test for Equal Variances

The folded form F test is widely used to examine whether two populations have the same variance. The statistic is

   F = s_L² / s_S² ~ F(n_L − 1, n_S − 1),

where L and S respectively indicate the groups with the larger and smaller sample variances. Unless the null hypothesis of equal variances is rejected, the pooled variance estimate s²_pool is used. The null hypothesis of the independent sample t-test is H0: µ₁ − µ₂ = D₀, and

   t = [(ȳ₁ − ȳ₂) − D₀] / [s_pool √(1/n₁ + 1/n₂)] ~ t(n₁ + n₂ − 2),

where s²_pool = [(n₁ − 1)s₁² + (n₂ − 1)s₂²] / (n₁ + n₂ − 2).

When the equal variance assumption is violated, the t-test requires an approximation of the degrees of freedom. The null hypothesis and other components of the t-test, however, remain unchanged. Satterthwaite's approximation of the degrees of freedom is commonly used. Note that the approximation is a real number, not an integer.

   t′ = (ȳ₁ − ȳ₂ − D₀) / √(s₁²/n₁ + s₂²/n₂) ~ t(df_Satterthwaite),

where

   df_Satterthwaite = (n₁ − 1)(n₂ − 1) / [(n₁ − 1)(1 − c)² + (n₂ − 1)c²]  and  c = (s₁²/n₁) / (s₁²/n₁ + s₂²/n₂).

⁵ E(x̄₁ − x̄₂) = µ₁ − µ₂,  Var(x̄₁ − x̄₂) = σ₁²/n₁ + σ₂²/n₂.
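In STATA, this variance-ratio (folded form F) test can be run directly with the .sdtest command before the t-test. A minimal sketch, assuming the smoking data with variables lung and smoke:

. sdtest lung, by(smoke)     /* variance-ratio F test of equal variances */
. ttest lung, by(smoke)      /* pooled t-test if equal variances are not rejected */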

The SAS TTEST procedure and the SPSS T-TEST command conduct F tests for equal variances. SAS reports the folded form F statistic, whereas SPSS computes Levene's weighted F statistic. In STATA, the .oneway command produces Bartlett's statistic for the equal variance test. The following is an example of Bartlett's test that does not reject the null hypothesis of equal variances.

. oneway lung smoke

                  Analysis of Variance
Source             SS        df       MS         F      Prob > F
Between groups
Within groups
Total
Bartlett's test for equal variances:  chi2(1) =        Prob>chi2 = 0.77

STATA, SAS, and SPSS all compute Satterthwaite's approximation of the degrees of freedom. In addition, the SAS TTEST procedure reports the Cochran-Cox approximation and the STATA .ttest command provides Welch's degrees of freedom.

4.2 T-test in STATA

With the .ttest command, you have to specify a grouping variable (smoke in this example) in the parentheses of the by option.

. ttest lung, by(smoke) level(95)

Two-sample t test with equal variances
Group       Obs    Mean    Std. Err.    Std. Dev.    [95% Conf. Interval]
0
1
combined
diff
diff = mean(0) - mean(1)                          t =
Ho: diff = 0                                      degrees of freedom = 42
Ha: diff < 0           Ha: diff != 0           Ha: diff > 0
Pr(T < t) =            Pr(|T| > |t|) =         Pr(T > t) =

Let us first check the equal variance assumption. The F statistic is 1.17 = s_L²/s_S² ~ F(21, 21); the degrees of freedom of the numerator and denominator are both 21 (= 22 − 1). The p-value of .773, virtually the same as that of Bartlett's test above, does not reject the null hypothesis of equal variances. Thus, the t-test here is valid (t = −5.37 and p = 0.0000).

14 , The Trustees of Indiana University Comparing Group Means: 14 ( ) 0 t = = ~ t( + ), where 1 1 s pool + ( 1) ( 1)3.418 s pool = = If only aggregate data of the two variables are available, use the.ttesti command and list the number of observations, mean, and standard deviation of the two variables.. ttesti , level(95) Suppose a data set is differently arranged (second type in Figure 1) so that one variable smk_lung has data for smokers and the other non_lung for non-smokers. You have to use the unpaired option to indicate that two variables are not paired. A grouping variable here is not necessary. Compare the following output with what is printed above.. ttest smk_lung=non_lung, unpaired Two-sample t test with equal variances Variable Obs Mean Std. Err. Std. Dev. [95% Conf. Interval] smk_lung non_lung combined diff diff = mean(smk_lung) - mean(non_lung) t = Ho: diff = 0 degrees of freedom = 4 Ha: diff < 0 Ha: diff!= 0 Ha: diff > 0 Pr(T < t) = Pr( T > t ) = Pr(T > t) = This unpaired option is very useful since it enables you to conduct a t-test without additional data manipulation. You may run the.ttest command with the unpaired option to compare two variables, say leukemia and kidney, as independent samples in STATA. In SAS and SPSS, however, you have to stack up two variables and generate a grouping variable before t- tests.. ttest leukemia=kidney, unpaired Two-sample t test with equal variances Variable Obs Mean Std. Err. Std. Dev. [95% Conf. Interval] leukemia kidney combined diff

15 , The Trustees of Indiana University Comparing Group Means: 15 diff = mean(leukemia) - mean(kidney) t = Ho: diff = 0 degrees of freedom = 86 Ha: diff < 0 Ha: diff!= 0 Ha: diff > 0 Pr(T < t) = Pr( T > t ) = Pr(T > t) = The F = ( ^)/( ^) and its p-value (=.1797) do not reject the null hypothesis of equal variance. The large t statistic rejects the null hypothesis that death rates from leukemia and kidney cancers have the same mean. 4.3 T-test in SAS The TTEST procedure by default examines the hypothesis of equal variances, and provides T statistics for either case. The procedure by default reports Satterthwaite s approximation for the degrees of freedom. Keep in mind that a variable to be tested is grouped by the variable that is specified in the CLASS statement. PROC TTEST H0=0 ALPHA=.05 DATA=masil.smoking; CLASS smoke; VAR lung; RUN; The TTEST Procedure Statistics Lower CL Upper CL Lower CL Upper CL Variable smoke N Mean Mean Mean Std Dev Std Dev Std Dev lung lung lung Diff (1-) Statistics Variable smoke Std Err Minimum Maximum lung lung lung Diff (1-) T-Tests Variable Method Variances DF t Value Pr > t lung Pooled Equal <.0001 lung Satterthwaite Unequal <.0001 Equality of Variances Variable Method Num DF Den DF F Value Pr > F

16 , The Trustees of Indiana University Comparing Group Means: 16 lung Folded F The F test for equal variance does not reject the null hypothesis of equal variances. Thus, the t- test labeled as Pooled should be referred to in order to get the t and its p-value If the equal variance assumption is violated, the statistics of Satterthwaite and Cochran should be read. If you have a summary data set with the values of variables (lung) and their frequency (count), specify the count variable in the FREQ statement. PROC TTEST DATA=masil.smoking; CLASS smoke; VAR lung; FREQ count; RUN; Now, let us compare the death rates from leukemia and kidney in the second data arrangement type of Figure 1. As mentioned before, you need to rearrange the data set to stack up two variables into one and generate a grouping variable (first type in Figure 1). DATA masil.smoking; SET masil.smoking; death = leukemia; leu_kid ='Leukemia'; OUTPUT; death = kidney; leu_kid ='Kidney'; OUTPUT; KEEP leu_kid death; RUN; PROC TTEST COCHRAN DATA=masil.smoking; CLASS leu_kid; VAR death; RUN; The TTEST Procedure Statistics Lower CL Upper CL Lower CL Upper CL Variable leu_kid N Mean Mean Mean Std Dev Std Dev Std Dev Std Err death Kidney death Leukemia death Diff (1-) T-Tests Variable Method Variances DF t Value Pr > t death Pooled Equal <.0001 death Satterthwaite Unequal <.0001 death Cochran Unequal <.0001 Equality of Variances Variable Method Num DF Den DF F Value Pr > F

17 , The Trustees of Indiana University Comparing Group Means: 17 death Folded F Compare this SAS output with that of STATA in the previous section. 4.4 T-test in SPSS In the T-TEST command, you need to use the /GROUP subcommand in order to specify a grouping variable. SPSS reports Levene's F.0000 that does not reject the null hypothesis of equal variance (p<.995). T-TEST GROUPS = smoke(0 1) /VARIABLES = lung /MISSING = ANALYSIS /CRITERIA = CI(.95).
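STATA's .ttest output does not include Levene's statistic, but a comparable Levene-type test is available through the .robvar command. A minimal sketch, assuming the smoking data with variables lung and smoke:

. robvar lung, by(smoke)     /* reports Levene's W0 plus the robust W50 and W10 variants */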

18 , The Trustees of Indiana University Comparing Group Means: Independent Samples with Unequal Variances If the assumption of equal variances is violated, we have to compute the adjusted t statistic using individual sample standard deviations rather than a pooled standard deviation. It is also necessary to use the Satterthwaite, Cochran-Cox (SAS), or Welch (STATA) approximations of the degrees of freedom. In this chapter, you compare mean death rates from kidney cancer between the west (south) and east (north). 5.1 T-test in STATA As discussed earlier, let us check equality of variances using the.oneway command. The tabulate option produces a table of summary statistics for the groups.. oneway kidney west, tabulate Summary of kidney west Mean Std. Dev. Freq Total Analysis of Variance Source SS df MS F Prob > F Between groups Within groups Total Bartlett's test for equal variances: chi(1) = Prob>chi = Bartlett s chi-squared statistic rejects the null hypothesis of equal variance at the.01 level. It is appropriate to use the unequal option in the.ttest command, which calculates Satterthwaite s approximation for the degrees of freedom. Unlike the SAS TTEST procedure, the.ttest command cannot specify the mean difference D 0 other than zero. Thus, the null hypothesis is that the mean difference is zero.. ttest kidney, by(west) unequal level(95) Two-sample t test with unequal variances Group Obs Mean Std. Err. Std. Dev. [95% Conf. Interval] combined diff

19 , The Trustees of Indiana University Comparing Group Means: 19 diff = mean(0) - mean(1) t =.7817 Ho: diff = 0 Satterthwaite's degrees of freedom = Ha: diff < 0 Ha: diff!= 0 Ha: diff > 0 Pr(T < t) = Pr( T > t ) = Pr(T > t) = See Satterthwaite s approximation of in the middle of the output. If you want to get Welch s approximation, use the welch as well as unequal options; without the unequal option, the welch is ignored.. ttest kidney, by(west) unequal welch Two-sample t test with unequal variances Group Obs Mean Std. Err. Std. Dev. [95% Conf. Interval] combined diff diff = mean(0) - mean(1) t =.7817 Ho: diff = 0 Welch's degrees of freedom = Ha: diff < 0 Ha: diff!= 0 Ha: diff > 0 Pr(T < t) = Pr( T > t ) = Pr(T > t) = Satterthwaite s approximation is slightly smaller than Welch s Again, keep in mind that these approximations are not integers, but real numbers. The t statistic.7817 and its p- value.0086 reject the null hypothesis of equal population means. The north and east have larger death rates from kidney cancer per 100 thousand people than the south and west. For aggregate data, use the.ttesti command with the necessary options.. ttesti , unequal welch As mentioned earlier, the unpaired option of the.ttest command directly compares two variables without data manipulation. The option treats the two variables as independent of each other. The following is an example of the unpaired and unequal options.. ttest bladder=kidney, unpaired unequal welch Two-sample t test with unequal variances Variable Obs Mean Std. Err. Std. Dev. [95% Conf. Interval] bladder kidney combined diff diff = mean(bladder) - mean(kidney) t = Ho: diff = 0 Welch's degrees of freedom =
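The Satterthwaite degrees of freedom reported above can also be reproduced by hand from the group summary statistics, using the formula in section 4.1. The following is a minimal STATA sketch assuming the smoking data with variables kidney and west; summarize, r(), scalar, and display are standard STATA features.

. quietly summarize kidney if west == 0
. scalar a  = r(sd)^2/r(N)
. scalar n1 = r(N)
. quietly summarize kidney if west == 1
. scalar b  = r(sd)^2/r(N)
. scalar n2 = r(N)
. scalar c  = a/(a + b)
. display "Satterthwaite df = " (n1-1)*(n2-1)/((n1-1)*(1-c)^2 + (n2-1)*c^2)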

20 , The Trustees of Indiana University Comparing Group Means: 0 Ha: diff < 0 Ha: diff!= 0 Ha: diff > 0 Pr(T < t) = Pr( T > t ) = Pr(T > t) = The F = ( ^)/( ^) rejects the null hypothesis of equal variance (p<0001). If the welch option is omitted, Satterthwaite's degree of freedom will be produced instead. For aggregate data, again, use the.ttesti command without the unpaired option.. ttesti , unequal welch level(95) 5. T-test in SAS The TTEST procedure reports statistics for cases of both equal and unequal variance. You may add the COCHRAN option to compute Cochran-Cox approximations for the degree of freedom. PROC TTEST COCHRAN DATA=masil.smoking; CLASS west; VAR kidney; RUN; The TTEST Procedure Statistics Lower CL Upper CL Lower CL Upper CL Variable s_west N Mean Mean Mean Std Dev Std Dev Std Dev kidney kidney kidney Diff (1-) Statistics Variable west Std Err Minimum Maximum kidney kidney kidney Diff (1-) T-Tests Variable Method Variances DF t Value Pr > t kidney Pooled Equal kidney Satterthwaite Unequal kidney Cochran Unequal Equality of Variances Variable Method Num DF Den DF F Value Pr > F kidney Folded F

21 , The Trustees of Indiana University Comparing Group Means: 1 F = (.59837^)/( ^) and p <.0034 reject the null hypothesis of equal variances. Thus, individual sample standard deviations need to be used to compute the adjusted t, and either Satterthwaite s or the Cochran-Cox approximation should be used in computing the p-value. See the following computations. t ' = =.78187, c = s df 1 Satterthwa s n = n1 + s n ( n 1)( n 1) ite = ( n 1)(1 c) + ( n 1) c 1 = 1.318, and 4 (0 1)(4 1) = (0 1)(1.318) + (4 1) = The t statistic.78 rejects the null hypothesis of no difference in mean death rates between the two regions (p<.0086). Now, let us compare death rates from bladder and kidney cancers using SAS. DATA masil.smoking3; SET masil.smoking; death = bladder; bla_kid ='Bladder'; OUTPUT; death = kidney; bla_kid ='Kidney'; OUTPUT; KEEP bla_kid death; RUN; PROC TTEST COCHRAN DATA=masil.smoking3; CLASS bla_kid; VAR death; RUN; The TTEST Procedure Statistics Lower CL Upper CL Lower CL Upper CL Variable bla_kid N Mean Mean Mean Std Dev Std Dev Std Dev Std Err death Bladder death Kidney death Diff (1-) T-Tests Variable Method Variances DF t Value Pr > t death Pooled Equal <.0001 death Satterthwaite Unequal <.0001 death Cochran Unequal <.0001 Equality of Variances

Variable      Method       Num DF    Den DF    F Value    Pr > F
death         Folded F                                    <.0001

Fortunately, the t-tests under equal and unequal variances in this case lead to the same conclusion at the .01 level; that is, the means of the two death rates are not the same.

5.3 T-test in SPSS

Like SAS, SPSS also reports t statistics for the cases of both equal and unequal variances. Note that Levene's F rejects the null hypothesis of equal variance at the .05 level (p<.04).

T-TEST
  GROUPS = west(0 1)
  /VARIABLES = kidney
  /MISSING = ANALYSIS
  /CRITERIA = CI(.95).

23 , The Trustees of Indiana University Comparing Group Means: 3 6. One-way ANOVA, GLM, and Regression The t-test is a special case of one-way ANOVA. Thus, one-way ANOVA produces equivalent results to those of the t-test. ANOVA examines mean differences using the F statistic, whereas the t-test reports the t statistic. The one-way ANOVA (t-test), GLM, and linear regression present essentially the same things in different ways. 6.1 One-way ANOVA Consider the following ANOVA procedure. The CLASS statement is used to specify categorical variables. The MODEL statement lists the variable to be compared and a grouping variable, separating them with an equal sign. PROC ANOVA DATA=masil.smoking; CLASS smoke; MODEL lung=smoke; RUN; The ANOVA Procedure Dependent Variable: lung Sum of Source DF Squares Mean Square F Value Pr > F Model <.0001 Error Corrected Total R-Square Coeff Var Root MSE lung Mean Source DF Anova SS Mean Square F Value Pr > F smoke <.0001 STATA.anova and.oneway commands also conduct one-way ANOVA.. anova lung smoke Number of obs = 44 R-squared = Root MSE = Adj R-squared = Source Partial SS df MS F Prob > F Model smoke Residual Total
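Because one-way ANOVA generalizes to more than two groups, STATA's .oneway command can also report pairwise multiple comparisons alongside the F test. A minimal sketch, assuming the smoking data (tabulate, bonferroni, and scheffe are standard .oneway options):

. oneway lung smoke, tabulate bonferroni scheffe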

24 , The Trustees of Indiana University Comparing Group Means: 4 In SPSS, the ONEWAY command is used. ONEWAY lung BY smoke /MISSING ANALYSIS. 6. Generalized Linear Model (GLM) The SAS GLM and MIXED procedures and the SPSS UNIANOVA command also report the F statistic for one-way ANOVA. Note that STATA s.glm command does not perform one-way ANOVA. PROC GLM DATA=masil.smoking; CLASS smoke; MODEL lung=smoke /SS3; RUN; The GLM Procedure Dependent Variable: lung Sum of Source DF Squares Mean Square F Value Pr > F Model <.0001 Error Corrected Total R-Square Coeff Var Root MSE lung Mean Source DF Type III SS Mean Square F Value Pr > F smoke <.0001 The MIXED procedure has the similar usage as the GLM procedure. The output here is skipped. PROC MIXED; CLASS smoke; MODEL lung=smoke; RUN; In SPSS, the UNIANOVA command estimates univariate ANOVA models using the GLM method. UNIANOVA lung BY smoke /METHOD = SSTYPE(3) /INTERCEPT = INCLUDE /CRITERIA = ALPHA(.05) /DESIGN = smoke. 6.3 Regression

The SAS REG procedure, STATA .regress command, and SPSS REGRESSION command estimate linear regression models.

PROC REG DATA=masil.smoking;
  MODEL lung=smoke;
RUN;

The REG Procedure
Model: MODEL1
Dependent Variable: lung
Number of Observations Read    44
Number of Observations Used    44

                     Analysis of Variance
Source             DF    Sum of Squares    Mean Square    F Value    Pr > F
Model                                                                <.0001
Error
Corrected Total

Root MSE                  R-Square
Dependent Mean            Adj R-Sq
Coeff Var

                     Parameter Estimates
Variable      DF    Parameter Estimate    Standard Error    t Value    Pr > |t|
Intercept                                                              <.0001
smoke                                                                  <.0001

Look at the results above. The coefficient of the intercept is the mean of the first group (smoke=0). The coefficient of smoke is, in fact, the mean difference between the two groups with its sign reversed. Finally, the standard error of the coefficient, .99314 = s_pool √(1/n₁ + 1/n₂), is the denominator of the independent sample t-test, where s²_pool is the pooled variance estimate (see section 4.1). Thus, the t statistic of 5.37 is identical in magnitude to the t statistic of the independent sample t-test with equal variances.

The STATA .regress command is quite simple. Note that the dependent variable precedes the list of independent variables.

. regress lung smoke

Source          SS        df        MS             Number of obs =     44
                                                   F(1, 42)      =  28.85

Model                                              Prob > F      =
Residual                                           R-squared     =
                                                   Adj R-squared =
Total                                              Root MSE      =

lung        Coef.     Std. Err.      t     P>|t|     [95% Conf. Interval]
smoke
_cons

The SPSS REGRESSION command looks complicated compared to the SAS REG procedure and the STATA .regress command.

REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT lung
  /METHOD=ENTER smoke.

Note that ANOVA, GLM, and regression all report the same F(1, 42) = 28.85, which is equivalent to t(42) = −5.37. As long as the degrees of freedom of the numerator is 1, F is always t² (28.85 = (−5.37)²).
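The F = t² relationship can be verified directly from the stored regression results. A minimal STATA sketch, assuming the smoking data are in memory (e(F), _b, and _se are standard post-estimation results):

. quietly regress lung smoke
. display "F = " e(F) "    t^2 = " (_b[smoke]/_se[smoke])^2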

7. Conclusion

The t-test is a basic statistical method for examining the mean difference between two groups. One-way ANOVA can compare the means of more than two groups. The number of observations in individual groups does not matter in the t-test or one-way ANOVA; both balanced and unbalanced data are fine. One-way ANOVA, GLM, and linear regression models all use the variance-covariance structure in their analysis, but present the results in different ways.

Researchers must check four issues when performing t-tests. First, a variable to be tested should be interval or ratio so that its mean is substantively meaningful. Do not, for example, run a t-test to compare the mean of skin colors (white=0, yellow=1, black=2) between two countries. If you have a latent variable measured by several Likert-scaled manifest variables, first run a factor analysis to get that latent variable.

Second, examine the normality assumption before conducting a t-test. It is awkward to compare means of variables that are not normally distributed. Figure 2 illustrates a normal probability distribution on top and a Poisson distribution skewed to the right on the bottom. Although the two distributions have the same mean and variance of 1, a comparison of their means is not likely to be substantively interpretable. This is the rationale for conducting normality tests such as the Shapiro-Wilk W, Shapiro-Francia W, and Kolmogorov-Smirnov D statistics. If the normality assumption is violated, try nonparametric methods.

Figure 2. Comparing Normal and Poisson Probability Distributions (σ² = 1 and µ = 1)

Third, check the equal variance assumption. You should be careful when comparing means of normally distributed variables with different variances. You may conduct the folded form F test. If the equal variance assumption is violated, compute the adjusted t and an approximation of the degrees of freedom.

Finally, consider the types of t-tests, the data arrangement, and the functionality available in each statistical package (e.g., STATA, SAS, and SPSS) to determine the best strategy for data analysis (Table 3). The first data arrangement in Figure 1 is commonly used for independent sample t-tests, whereas the second arrangement is appropriate for a paired sample test. Keep in mind that type II data sets in Figure 1 need to be reshaped into type I in SAS and SPSS.

Table 3. Comparison of T-test Functionalities of STATA, SAS, and SPSS
                                  STATA 9.0                SAS 9.1                 SPSS 13.0
Test for equal variance           Bartlett's chi-squared   Folded form F           Levene's weighted F
                                  (.oneway command)        (TTEST procedure)       (T-TEST command)
Approximation of the              Satterthwaite's DF;      Satterthwaite's DF;     Satterthwaite's DF
degrees of freedom (DF)           Welch's DF               Cochran-Cox DF
Second data arrangement           var1=var2                Reshaping the data set  Reshaping the data set
Aggregate data                    .ttesti command          FREQ option             N/A

SAS has several procedures (e.g., TTEST, MEANS, and UNIVARIATE) and useful options for t-tests. The STATA .ttest and .ttesti commands provide very flexible ways of handling different data arrangements and aggregate data. Table 4 summarizes the usage of options in these two commands.

Table 4. Summary of the Usage of the .ttest and .ttesti Command Options
Usage                               Syntax      by(group var)   unequal   welch   unpaired*
Univariate sample                   var=c
Paired (dependent) sample           var1=var2
Equal variance (1 variable)         var         O
Equal variance (2 variables)**      var1=var2                                     O
Unequal variance (1 variable)       var         O               O         O
Unequal variance (2 variables)      var1=var2                   O         O       O
* The .ttesti command does not allow the unpaired option.
** var1=var2 assumes the second type of data arrangement in Figure 1.

Appendix: Data Set

Literature: Fraumeni, J. F. 1968. "Cigarette Smoking and Cancers of the Urinary Tract: Geographic Variations in the United States," Journal of the National Cancer Institute, 41(5).

Data Source: The data are per capita numbers of cigarettes smoked (sold) in 43 states and the District of Columbia in 1960, together with death rates per 100 thousand people from various forms of cancer. The variables used in this document are:

cigar    = number of cigarettes smoked (hds per capita)
bladder  = deaths per 100k people from bladder cancer
lung     = deaths per 100k people from lung cancer
kidney   = deaths per 100k people from kidney cancer
leukemia = deaths per 100k people from leukemia
smoke    = 1 for those whose cigarette consumption is larger than the median and 0 otherwise
west     = 1 for states in the South or West and 0 for those in the North, East, or Midwest

The following are summary statistics and normality tests of these variables.

. sum cigar-leukemia

Variable     Obs    Mean    Std. Dev.    Min    Max
cigar
bladder
lung
kidney
leukemia

. sfrancia cigar-leukemia

Shapiro-Francia W' test for normal data
Variable     Obs    W'    V'    z    Prob>z
cigar
bladder
lung
kidney
leukemia

. tab west smoke

                    smoke
west            0          1        Total
0
1
Total                                   44

References

Fraumeni, J. F. 1968. "Cigarette Smoking and Cancers of the Urinary Tract: Geographic Variations in the United States," Journal of the National Cancer Institute, 41(5).
Ott, R. Lyman. An Introduction to Statistical Methods and Data Analysis. Belmont, CA: Duxbury Press.
SAS Institute. SAS/STAT User's Guide, Version 9.1. Cary, NC: SAS Institute.
SPSS. SPSS 11.0 Syntax Reference Guide. Chicago, IL: SPSS Inc.
STATA Press. STATA Reference Manual Release 9. College Station, TX: STATA Press.
Walker, Glenn A. 2002. Common Statistical Methods for Clinical Research with SAS Examples. Cary, NC: SAS Institute.

Acknowledgements

I am grateful to Jeremy Albright, Takuya Noguchi, and Kevin Wilhite at the UITS Center for Statistical and Mathematical Computing, Indiana University, who provided valuable comments and suggestions.

Revision History

2003. First draft
2004. Second draft
2005. Third draft (added data arrangements and conclusion)

31 , The Trustees of Indiana University Regression Models for Event Count Data: 1 Regression Models for Event Count Data Using SAS, STATA, and LIMDEP Hun Myoung Park This document summarizes regression models for event count data and illustrates how to estimate individual models using SAS, STATA, and LIMDEP. Example models were tested in SAS 9.1, STATA 9.0, and LIMDEP Introduction. The Poisson Regression Model (PRM) 3. The Negative Binomial Regression Model (NBRM) 4. The Zero-Inflated Poisson Regression Model (ZIP) 5. The Zero-Inflated Negative Binomial Regression Model (ZINB) 6. Conclusion 7. Appendix 1. Introduction An event count is the realization of a nonnegative integer-valued random variable (Cameron and Trivedi 1998). Examples are the number of car accidents per month, thunder storms per year, and wild fires per year. The ordinary least squares (OLS) method for event count data results in biased, inefficient, and inconsistent estimates (Long 1997). Thus, researchers have developed various nonlinear models that are based on the Poisson distribution and negative binomial distribution. 1.1 Count Data Regression Models The left-hand side (LHS) of the equation has event count data. Independent variables are, as in the OLS, located at the right-hand side (RHS). These RHS variables may be interval, ratio, or binary (dummy). Table 1 below summarizes the categorical dependent variable regression models (CDVMs) according to the level of measurement of the dependent variable. Table 1. Ordinary Least Squares and CDVMs Model Dependent (LHS) Method Independent (RHS) Ordinary least Moment based OLS Interval or ratio squares method A linear function of interval/ratio or binary variables β + β X + β Binary response Binary (0 or 1) Maximum CDVMs Ordinal response Ordinal (1 st, nd, 3 rd ) likelihood Nominal response Nominal (A, B, C ) method X Event count data Count (0, 1,, 3 ) The Poisson regression model (PRM) and negative binomial regression model (NBRM) are basic models for count data analysis. Either the zero-inflated Poisson (ZIP) or the zero-inflated

negative binomial regression model (ZINB) is used when there are many zero counts. Other count models have been developed to handle censored, truncated, or sample-selected count data. This document, however, focuses on the PRM, NBRM, ZIP, and ZINB.

1.2 Poisson Models versus Negative Binomial Models

The Poisson probability distribution, P(y|µ) = e^(−µ) µ^y / y!, has the same mean and variance (equidispersion), Var(y) = E(y) = µ. As the mean of a Poisson distribution increases, the probability of zeros decreases and the distribution approximates a normal distribution (Figure 1). The Poisson distribution also has the strong assumption that events are independent. Thus, this distribution does not fit well if µ differs across observations (heterogeneity) (Long 1997).

The Poisson regression model (PRM) incorporates observed heterogeneity into the Poisson distribution function, Var(y_i|x_i) = E(y_i|x_i) = µ_i = exp(x_i β). As µ increases, the conditional variance of y increases, the proportion of predicted zeros decreases, and the distribution around the expected value becomes approximately normal (Long 1997). The conditional mean of the errors is zero, but the variance of the errors is a function of the independent variables, Var(ε|x) = exp(xβ); the errors are heteroscedastic. Thus, the PRM rarely fits in practice due to overdispersion (Long 1997; Maddala 1983).

Figure 1. Poisson Probability Distribution with Means of .5, 1, 2, and 5

The negative binomial probability distribution is

   P(y_i|x_i) = [Γ(y_i + v_i) / (y_i! Γ(v_i))] [v_i / (v_i + µ_i)]^(v_i) [µ_i / (v_i + µ_i)]^(y_i),

where 1/v = α determines the degree of dispersion and Γ is the gamma probability distribution. As the dispersion parameter α increases, the variance of the negative binomial distribution also increases, Var(y_i|x_i) = µ_i (1 + µ_i / v_i). The negative binomial regression model (NBRM) incorporates observed and unobserved heterogeneity into the conditional mean,

µ_i = exp(x_i β + ε_i) (Long 1997). Thus, the conditional variance of y becomes larger than its conditional mean, E(y_i|x_i) = µ_i, which remains unchanged. Figure 2 illustrates how the probabilities of small and large counts increase in the negative binomial distribution as the conditional variance of y increases, given µ = 3.

Figure 2. Negative Binomial Probability Distribution with Alpha of .01, .5, 1, and 5

The PRM and NBRM, however, have the same mean structure. If α = 0, the NBRM reduces to the PRM (Cameron and Trivedi 1998; Long 1997).

1.3 Overdispersion

When Var(y_i|x_i) > E(y_i|x_i), we are said to have overdispersion. Estimates of a PRM for overdispersed data are unbiased, but inefficient with standard errors biased downward (Cameron and Trivedi 1998; Long 1997). A likelihood ratio test has been developed to examine the null hypothesis of no overdispersion, H0: α = 0. The likelihood ratio follows the chi-squared distribution with one degree of freedom, LR = 2(ln L_NB − ln L_Poisson) ~ χ²(1). If the null hypothesis is rejected, the NBRM is preferred to the PRM.

Zero-inflated models handle overdispersion by changing the mean structure to explicitly model the production of zero counts (Long 1997). These models assume two latent groups: one is the always-zero group and the other is the not-always-zero (or sometimes-zero) group. Thus, zero counts come from the former group and from some of the latter group with a certain probability. The likelihood ratio LR = 2(ln L_ZINB − ln L_ZIP) ~ χ²(1) tests H0: α = 0 to compare the ZIP and ZINB. The PRM and ZIP, as well as the NBRM and ZINB, cannot, however, be compared by this likelihood ratio, since they are not nested. Vuong's statistic compares these non-nested models: if V is greater than 1.96, the ZIP or ZINB is favored; if V is less than −1.96, the PRM or NBRM is preferred (Long 1997).
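The overdispersion likelihood ratio can be computed directly from the stored log likelihoods of the two models. A minimal STATA sketch, assuming the accident data used later in this document (variables accident, emps, and strict) are in memory; e(ll) and chi2tail() are standard STATA features.

. quietly poisson accident emps strict
. scalar ll_p = e(ll)
. quietly nbreg accident emps strict
. scalar ll_nb = e(ll)
. scalar lr = 2*(ll_nb - ll_p)
. display "LR for overdispersion = " lr "   p-value = " chi2tail(1, lr)

Note that the .nbreg command also reports this test of alpha = 0 automatically in its output.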

34 , The Trustees of Indiana University Regression Models for Event Count Data: Estimation in SAS, STATA, and LIMDEP The SAS GENMOD procedure estimates Poisson and negative binomial regression models. STATA has individual commands (e.g.,.poisson and.nbreg) for the corresponding count data models. LIMDEP has Poisson$ and Negbin$ commands to estimate various count data models including zero-inflated and zero-truncated models. Table summarizes the procedures and commands for count data regression models. Table. Comparison of the Procedures and Commands for Count Data Models Model SAS 9.1 STATA 9.0 LIMDEP 8.0 Poisson Regression (PRM) GENMOD.poisson Poisson$ Negative Binomial Regression (NBRM) GENMOD.nbreg Negbin$ Zero-Inflated Poisson (ZIP) -.zip Poisson; Zip; Rh$ Zero-inflated Negative Binomial (ZINB) -.zinb Negbin; Zip; Rh$ Zero-truncated Poisson (ZTP) -.ztp Poisson; Truncation$ Zero-truncated Negative Binomial (ZTNB) -.ztnb Negbin; Truncation$ The example here examines how waste quotas (emps) and the strictness of policy implementation (strict) affect the frequency of waste spill accidents of plants (accident) Long and Freese s SPost Module STATA users may take advantages of user-written modules such as SPost written by J. Scott Long and Jeremy Freese. The module allows researchers to conduct follow-up analyses of various CDVMs including event count data models. See.3 for examples of major SPost commands. In order to install SPost, execute the following commands consecutively. For more details, visit J. Scott Long s Web site at net from net install spost9_ado, replace. net get spost9_do, replace

35 , The Trustees of Indiana University Regression Models for Event Count Data: 5. The Poisson Regression Model The SAS GENMOD procedure, STATA.poisson command, and LIMDEP Poisson$ command estimate the Poisson regression model (PRM)..1 PRM in SAS SAS has the GENMOD procedure for the PRM. The /DIST=POISSON option tells SAS to use the Poisson distribution. PROC GENMOD DATA = masil.accident; MODEL accident=emps strict /DIST=POISSON LINK=LOG; RUN; The GENMOD Procedure Model Information Data Set COUNT.WASTE Distribution Poisson Link Function Log Dependent Variable Accident Observations Used 778 Criteria For Assessing Goodness Of Fit Criterion DF Value Value/DF Deviance Scaled Deviance Pearson Chi-Square Scaled Pearson X Log Likelihood Algorithm converged. Analysis Of Parameter Estimates Standard Wald 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept <.0001 Emps <.0001 Strict <.0001 Scale NOTE: The scale parameter was held fixed. You will need to run a restricted model without regressors in order to conduct the likelihood ratio test for goodness-of-fit, LR = *(ln L ln L Re ) ~ χ ( J ), where J is the difference in Unrestricted stricted

36 , The Trustees of Indiana University Regression Models for Event Count Data: 6 the number of regressors between the unrestricted and restricted models. The chi-squared statistic is = * [ ( )] (p<.0000). PROC GENMOD DATA = masil.accident; MODEL accident= /DIST=POISSON LINK=LOG; RUN; The GENMOD Procedure Model Information Data Set Distribution Link Function Dependent Variable MASIL.ACCIDENT Poisson Log accident Number of Observations Read 778 Number of Observations Used 778 Criteria For Assessing Goodness Of Fit Criterion DF Value Value/DF Deviance Scaled Deviance Pearson Chi-Square Scaled Pearson X Log Likelihood Algorithm converged. Analysis Of Parameter Estimates Standard Wald 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept <.0001 Scale NOTE: The scale parameter was held fixed.. PRM in STATA STATA has the.poisson command for the PRM. This command provides likelihood ratio and Pseudo R statistics.. poisson accident emps strict Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood =

37 , The Trustees of Indiana University Regression Models for Event Count Data: 7 Poisson regression Number of obs = 778 LR chi() = 14.8 Prob > chi = Log likelihood = Pseudo R = accident Coef. Std. Err. z P> z [95% Conf. Interval] emps strict _cons Let us run a restricted model and then run the.display command in order to double check that the likelihood ratio for goodness-of-fit is poisson accident Iteration 0: log likelihood = Iteration 1: log likelihood = Poisson regression Number of obs = 778 LR chi(0) = 0.00 Prob > chi =. Log likelihood = Pseudo R = accident Coef. Std. Err. z P> z [95% Conf. Interval] _cons display * ( ( )) Using the SPost Module in STATA The SPost module provides useful commands for follow-up analyses of various categorical dependent variable models. The.fitstat command calculates various goodness-of-fit statistics such as log likelihood, McFadden s R (or Pseudo R ), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC).. quietly poisson accident emps strict. fitstat Measures of Fit for poisson of accident Log-Lik Intercept Only: Log-Lik Full Model: D(775): LR(): 14.8 Prob > LR: McFadden's R: McFadden's Adj R: 0.03 Maximum Likelihood R: Cragg & Uhler's R: AIC: AIC*n: BIC: BIC':

38 , The Trustees of Indiana University Regression Models for Event Count Data: 8 The.listcoef command lists unstandardized coefficients (parameter estimates), factor and percent changes, and standardized coefficients to help interpret regression results.. listcoef, help poisson (N=778): Factor Change in Expected Count Observed SD: accident b z P> z e^b e^bstdx SDofX emps strict b = raw coefficient z = z-score for test of b=0 P> z = p-value for z-test e^b = exp(b) = factor change in expected count for unit increase in X e^bstdx = exp(b*sd of X) = change in expected count for SD increase in X SDofX = standard deviation of X The.prtab command constructs a table of predicted values (events) for all combinations of categorical variables listed. The following example shows that the predicted number of accidents under the strict policy is.917 at the mean waste quota (emps=4.019).. prtab strict poisson: Predicted rates for accident strict Prediction emps strict x= The.prvalue lists predicted values for a given set of values for the independent variables. For example, the predicted probability of a zero count is.3996 at the mean waste quota under the strict policy (strict=1). Note that the predicted rate of.917 is equivalent to.917 in the.prtab above.. prvalue, x(strict=1) maxcnt(5) poisson: Predictions for accident Predicted rate: % CI [.87, 1.0] Predicted probabilities: Pr(y=0 x): Pr(y=1 x): Pr(y= x): Pr(y=3 x): Pr(y=4 x): Pr(y=5 x): 0.00 emps strict x=

39 , The Trustees of Indiana University Regression Models for Event Count Data: 9 The most useful command is the.prchange that calculates marginal effects (changes) and discrete changes. For instance, a standard deviation increase in waste quota form its mean will increase accidents by.3841 under the lenient policy (strict=0).. prchange, x(strict=0) poisson: Changes in Predicted Rate for accident min->max 0->1 -+1/ -+sd/ MargEfct emps strict exp(xb): emps strict x= sd(x)= SPost also includes the.prgen command, which computes a series of predictions by holding all variables but one constant and allowing that variable to vary (Long and Freese 003). These SPost commands work with most categorical and count data models such as.logit,.probit,.poisson,.nbreg,.zip, and.zinb..4 PRM in LIMDEP The LIMDEP Poisson$ command estimates the PRM. LIMDEP reports log likelihoods of both the unrestricted and restricted models. Keep in mind that you must include the ONE for the intercept. POISSON; Lhs=ACCIDENT; Rhs=ONE,EMPS,STRICT$ Poisson Regression Maximum Likelihood Estimates Model estimated: Aug 4, 005 at 04:56:45PM. Dependent variable ACCIDENT Weighting variable None Number of observations 778 Iterations completed 8 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom Prob[ChiSqd > value] = Chi- squared = RsqP= G - squared = RsqD=.043 Overdispersion tests: g=mu(i) : 4.70 Overdispersion tests: g=mu(i)^: Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X

40 , The Trustees of Indiana University Regression Models for Event Count Data: 10 Constant E EMPS E E STRICT E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) SAS, STATA, and LIMDEP produce almost the same parameter estimates and standard errors (Table 3). The log likelihood in SAS is different from that of STATA and LIMDEP ( versus ). This difference seems to come from the generalized linear model that the GENMOD procedure uses. These log likelihoods are, however, equivalent in the sense that they result in the same likelihood ratio. Table 3. Summary of the Poisson Regression Model in SAS, STATA, and LIMDEP Model SAS 9.1 STATA 9.0 LIMDEP 8.0 Intercept (.0467) (.0467) (.0467) EMPS (.0007) (.0007) (.0007) STRICT (.0668) (.0668) (.0668) Log Likelihood (unrestricted) Log Likelihood (restricted) Likelihood Ratio for Goodness-of-fit

41 , The Trustees of Indiana University Regression Models for Event Count Data: The Negative Binomial Regression Model The SAS GENMODE procedure, STATA.nbreg command, and LIMDEP Negbin$ command estimate the negative binomial regression model (NBRM). 3.1 NBRM in SAS The GENMOD procedure estimates the NBRM using the /DIST=NEGBIN option. Note that the dispersion parameter is equivalent to the alpha in STATA and LIMDEP. PROC GENMOD DATA = masil.accident; MODEL accident=emps strict /DIST=NEGBIN LINK=LOG; RUN; The GENMOD Procedure Model Information Data Set COUNT.WASTE Distribution Negative Binomial Link Function Log Dependent Variable Accident Observations Used 778 Criteria For Assessing Goodness Of Fit Criterion DF Value Value/DF Deviance Scaled Deviance Pearson Chi-Square Scaled Pearson X Log Likelihood Algorithm converged. Analysis Of Parameter Estimates Standard Wald 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept Emps Strict <.0001 Dispersion NOTE: The negative binomial dispersion parameter was estimated by maximum likelihood. The restricted model produces a log likelihood of Thus, the likelihood ratio for goodness-of-fit is = * ( ) (p<.00017). PROC GENMOD DATA = masil.accident; MODEL accident= /DIST=NEGBIN LINK=LOG; RUN;

42 , The Trustees of Indiana University Regression Models for Event Count Data: 1 The likelihood ratio for overdispersion is = * ( ( )). 3. NBRM in STATA STATA has the.nbreg command for the NBRM. The command reports three log likelihood statistics: for the PRM, restricted NBRM (constant-only model), and unrestricted NBRM (full model), which make it easy to conduct likelihood ratio tests.. nbreg accident emps strict Fitting comparison Poisson model: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Fitting constant-only model: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Fitting full model: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Negative binomial regression Number of obs = 778 LR chi() = Prob > chi = Log likelihood = Pseudo R = accident Coef. Std. Err. z P> z [95% Conf. Interval] emps strict _cons /lnalpha alpha Likelihood ratio test of alpha=0: chibar(01) = Prob>=chibar = The restricted model or constant-only model gives us a log likelihood Thus, the likelihood ratio for goodness-of-fit is = * [ ( )] (p<.00017). The p-value is computed as follows (Note the.disp or.di is an abbreviation of the.display).. disp chitail(, )

43 , The Trustees of Indiana University Regression Models for Event Count Data: 13 The likelihood ratio test for overdispersion results in a chi-squared of (p<.0000) and rejects the null hypothesis of alpha=0. The statistically significant evidence of overdispersion indicates that the NBRM is preferred to the PRM.. di * ( ( )) The p-value of the likelihood ratio for overdispersion is computed as,. di chitail(1, ) 1.74e-308 Now, let us calculate marginal effects (or changes) at the means of independent variables. You should the read the discrete change labeled 0->1 of a binary variable strict, since its marginal change at the mean (.5077) is meaningless.. prchange nbreg: Changes in Predicted Rate for accident min->max 0->1 -+1/ -+sd/ MargEfct emps strict exp(xb): emps strict x= sd(x)= NBRM in LIMDEP LIMDEP has the Negbin$ command for the NBRM that reports the PRM as well. Note that the standard errors of parameter estimates are slightly different from those of SAS and STATA. The Marginal Effects$ and the Means$ subcommands compute marginal effects at the mean of independent variables. You may not omit the Means$ subcommand. NEGBIN; Lhs=ACCIDENT; Rhs=ONE,EMPS,STRICT; Marginal Effects; Means$ Poisson Regression Maximum Likelihood Estimates Model estimated: Sep 08, 005 at 09:35:36AM. Dependent variable ACCIDENT Weighting variable None Number of observations 778 Iterations completed 8 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom

44 , The Trustees of Indiana University Regression Models for Event Count Data: 14 Prob[ChiSqd > value] = Chi- squared = RsqP= G - squared = RsqD=.043 Overdispersion tests: g=mu(i) : 4.70 Overdispersion tests: g=mu(i)^: Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Constant E EMPS E E STRICT E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Normal exit from iterations. Exit status= Negative Binomial Regression Maximum Likelihood Estimates Model estimated: Sep 08, 005 at 09:35:36AM. Dependent variable ACCIDENT Weighting variable None Number of observations 778 Iterations completed 8 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom 1 Prob[ChiSqd > value] = Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Constant EMPS E E STRICT Dispersion parameter for count data model Alpha (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Partial derivatives of expected val. with respect to the vector of characteristics. They are computed at the means of the Xs. Observations used for means are All Obs. Conditional Mean at Sample Point Scale Factor for Marginal Effects Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Constant EMPS E E STRICT (Note: E+nn or E-nn means multiply by 10 to + or -nn power.)

Read the coefficients (.0068 and -.871) to confirm that they are identical to the corresponding marginal effects calculated in STATA.

SAS, STATA, and LIMDEP produce almost the same parameter estimates and goodness-of-fit statistics (Table 4). Note that SAS reports different log likelihoods but the same likelihood ratio.

Table 4. Summary of the Negative Binomial Regression Model in SAS, STATA, and LIMDEP
  Model                                   SAS 9.1     STATA 9.0    LIMDEP 8.0
  Intercept                               (.178)      (.178)       (.186)
  EMPS                                    (.003)      (.003)       (.003)
  STRICT                                  (.1671)     (.1671)      (.1673)
  Dispersion Parameter (Alpha)            (.3501)     (.3501)      (.3568)
  Log Likelihood (unrestricted)
  Log Likelihood (restricted) *
  Likelihood Ratio for Goodness-of-fit
  Likelihood Ratio for Overdispersion
  * LIMDEP mistakenly reports the log likelihood of the unrestricted Poisson regression model.

The following plot compares the PRM and NBRM. Look at the predictions for zero counts of the two models. As the likelihood ratio test indicates, the NBRM seems to fit these data better than the PRM.

Figure 3. Comparison of the Poisson and Negative Binomial Regression Models
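A plot like Figure 3 can be drawn in STATA with the SPost .prcounts command, which saves the observed and average predicted probabilities for counts 0 through a chosen maximum after each model. The sketch below assumes the SPost package of Long and Freese (2003) is installed; the generated variable names (prmval, prmobeq, prmpreq, nbrmpreq) follow SPost's prefix convention and should be confirmed with .describe.

. quietly poisson accident emps strict
. prcounts prm, plot max(9)              // observed vs. predicted probabilities for counts 0-9
. quietly nbreg accident emps strict
. prcounts nbrm, plot max(9)
. graph twoway (connected prmobeq prmval) (connected prmpreq prmval) (connected nbrmpreq prmval)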

46 , The Trustees of Indiana University Regression Models for Event Count Data: The Zero-Inflated Poisson Regression Model STATA and LIMDEP have commands for the zero-inflated Poisson regression model (ZIP). 4.1 ZIP in STATA (.zip) STATA has the.zip command to estimate the ZIP. The inflate() option specifies a list of variables that determines whether the observed count is zero. The vuong option computes the Vuong statistic to compare the ZIP and PRM.. zip accident emps strict, inflate(emps strict) vuong Fitting constant-only model: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Iteration 5: log likelihood = Fitting full model: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Zero-inflated Poisson regression Number of obs = 778 Nonzero obs = 80 Zero obs = 498 Inflation model = logit LR chi() =.46 Log likelihood = Prob > chi = Coef. Std. Err. z P> z [95% Conf. Interval] accident emps strict _cons inflate emps strict _cons Vuong test of zip vs. standard Poisson: z = 8.40 Pr>z = The restricted model is estimated with the intercept only.. zip accident, inflate(emps strict) The Vuong statistic at the bottom compares the ZIP and PRM. Since the V 8.40 is greater than 1.96, we conclude that the ZIP is preferred to the PRM.
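The goodness-of-fit likelihood ratio for the ZIP can likewise be computed from stored estimates rather than by hand; the result should equal the LR chi2(2) reported in the header of the full model. A minimal sketch:

. zip accident emps strict, inflate(emps strict) vuong
. estimates store zip_full
. quietly zip accident, inflate(emps strict)      // intercept-only count equation
. estimates store zip_null
. lrtest zip_full zip_null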

47 , The Trustees of Indiana University Regression Models for Event Count Data: ZIP in LIMDEP The LIMDEP Poisson$ command needs to have the Zip and Rh subcommands. The Rh is equivalent to the inflate() option in STATA. The Alg=Newton$ subcommand is needed to use the Newton-Raphson algorithm because the default Broyden algorithm failed to converge. 1 POISSON; Lhs=ACCIDENT; Rhs=ONE,EMPS,STRICT; ZIP; Rh=ONE,EMPS,STRICT; Alg=Newton$ Poisson Regression Maximum Likelihood Estimates Model estimated: Sep 06, 005 at 00:5:07PM. Dependent variable ACCIDENT Weighting variable None Number of observations 778 Iterations completed 8 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom Prob[ChiSqd > value] = Chi- squared = RsqP= G - squared = RsqD=.043 Overdispersion tests: g=mu(i) : 4.70 Overdispersion tests: g=mu(i)^: Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Constant E EMPS E E STRICT E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Normal exit from iterations. Exit status= Zero Altered Poisson Regression Model Logistic distribution used for splitting model. ZAP term in probability is F[tau x Z(i) ] Comparison of estimated models Pr[0 means] Number of zeros Log-likelihood Poisson.739 Act.= 498 Prd.= If you get a warning message of Error: 806: Line search does not improve fn. Exit iterations. Status=3 or Error: 805: Initial iterations cannot improve function. Status=3, you may change the optimization algorithm or increase the maximum number of iterations (e.g., Maxit=1000$).

48 , The Trustees of Indiana University Regression Models for Event Count Data: 18 Z.I.Poisson.6464 Act.= 498 Prd.= Note, the ZIP log-likelihood is not directly comparable. ZIP model with nonzero Q does not encompass the others. Vuong statistic for testing ZIP vs. unaltered model is Distributed as standard normal. A value greater than favors the zero altered Z.I.Poisson model. A value less than rejects the ZIP model Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Poisson/NB/Gamma regression model Constant E EMPS E E STRICT E E Zero inflation model Constant EMPS E E STRICT (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) In order to estimate the restricted model, run the following command with the ONE only in the Lhs$ subcommand. The Rh$ subcommand remains unchanged. POISSON; Lhs=ACCIDENT; Rhs=ONE; ZIP; Alg=Newton; Rh=ONE,EMPS,STRICT$ Table 5 summarizes parameter estimates and goodness-of-fit statistics for the zero-inflated Poisson model. STATA and LIMDEP report the same parameter estimates, but they produce different standard errors and log likelihoods. In particular, LIMDEP returned a suspicious log likelihood for the restricted model, and thus ended up with the unlikely likelihood ratio of In addition, the Vuong statistics in STATA and LIMDEP are different. Table 5. Summary of the Zero-Inflated Poisson Regression Model in STATA, and LIMDEP Model SAS 9.1 STATA 9.0 LIMDEP 8.0 Intercept (.0493) (.039) EMPS (.0009) (.0004) STRICT (.079) (.0333) Intercept (Zero-inflated) (.111) (.11) EMPS (Zero-inflated) (.003) (.00) STRICT (Zero-inflated) (.1768) (.177) Log Likelihood (unrestricted) Log Likelihood (restricted) Likelihood Ratio for Goodness-of-fit Vuong Statistic (ZINB versus NBRM)

49 , The Trustees of Indiana University Regression Models for Event Count Data: The Zero-Inflated NB Regression Model STATA and LIMDEP can estimate the zero-inflated negative binomial regression model (ZINB). 5.1 ZINB in STATA (.zinb) The STATA.zinb command estimates the ZINB. The vuong option computes the Vuong statistic to compare the ZINB and NBRM.. zinb accident emps strict, inflate(emps strict) vuong Fitting constant-only model: Iteration 0: log likelihood = (not concave) Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Iteration 5: log likelihood = Iteration 6: log likelihood = Iteration 7: log likelihood = Iteration 8: log likelihood = Iteration 9: log likelihood = Iteration 10: log likelihood = Fitting full model: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Zero-inflated negative binomial regression Number of obs = 778 Nonzero obs = 80 Zero obs = 498 Inflation model = logit LR chi() = 4.43 Log likelihood = Prob > chi = Coef. Std. Err. z P> z [95% Conf. Interval] accident emps strict _cons inflate emps strict _cons /lnalpha alpha Vuong test of zinb vs. standard negative binomial: z = 4.13 Pr>z =

50 , The Trustees of Indiana University Regression Models for Event Count Data: 0 The likelihood ratio, = *( ( )), rejects the null hypothesis of no overdispersion, indicating that the ZINB can improve goodness-of-fit over the ZIP (p<.0000). The Vuong test, 4.13 > 1.96, suggests that the ZINB is preferred to the NBRM. 5. ZINB in LIMDEP The LIMDEP Negbin$ command needs to have the Zip and Rh subcommands for the ZINB. The following command produces the Poisson regression model, negative binomial model, and zero-inflated negative binomial model. You may omit the Alg=Newton$ subcommand. NEGBIN; Lhs=ACCIDENT; Rhs=ONE,EMPS,STRICT; Rh=ONE,EMPS,STRICT; ZIP; Alg=Newton$ Poisson Regression Maximum Likelihood Estimates Model estimated: Sep 10, 005 at 00:0:00AM. Dependent variable ACCIDENT Weighting variable None Number of observations 778 Iterations completed 8 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom Prob[ChiSqd > value] = Chi- squared = RsqP= G - squared = RsqD=.043 Overdispersion tests: g=mu(i) : 4.70 Overdispersion tests: g=mu(i)^: Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Constant E EMPS E E STRICT E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Normal exit from iterations. Exit status= Negative Binomial Regression Maximum Likelihood Estimates Model estimated: Sep 10, 005 at 00:0:00AM. Dependent variable ACCIDENT Weighting variable None Number of observations 778 Iterations completed 1 Log likelihood function Restricted log likelihood Chi squared

51 , The Trustees of Indiana University Regression Models for Event Count Data: 1 Degrees of freedom 1 Prob[ChiSqd > value] = Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Constant EMPS E E STRICT Dispersion parameter for count data model Alpha (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Normal exit from iterations. Exit status= Zero Altered Neg.Binomial Regression Model Logistic distribution used for splitting model. ZAP term in probability is F[tau x Z(i) ] Comparison of estimated models Pr[0 means] Number of zeros Log-likelihood Poisson.739 Act.= 498 Prd.= Neg. Bin Act.= 498 Prd.= Z.I.Neg_Bin.6918 Act.= 498 Prd.= Note, the ZIP log-likelihood is not directly comparable. ZIP model with nonzero Q does not encompass the others. Vuong statistic for testing ZIP vs. unaltered model is Distributed as standard normal. A value greater than favors the zero altered Z.I.Neg_Bin model. A value less than rejects the ZIP model Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Poisson/NB/Gamma regression model Constant EMPS E E STRICT Dispersion parameter Alpha Zero inflation model Constant EMPS E STRICT (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) In order to estimate the restricted model, run the following command. You have to use the Alg=Newton$ subcommand to get the restricted model to converge. Negbin; Lhs=ACCIDENT; Rhs=ONE; Rh=ONE,EMPS,STRICT; ZIP; Alg=Newton$

52 , The Trustees of Indiana University Regression Models for Event Count Data: Table 6 summarizes parameter estimates and goodness-of-fit statistics for the zero-inflated negative binomial regression model. STATA and LIMDEP reports the same results except standard errors and likelihood ratio for overdispersion. Table 6. Summary of the Zero-Inflated NBRM in STATA, and LIMDEP Model SAS 9.1 STATA 9.0 LIMDEP 8.0 Intercept (.1508) (.1518) EMPS (.001) (.000) STRICT (.1659) (.1618) Intercept (Zero-inflated) (.3800) (.3741) EMPS (Zero-inflated) (.0955) (.0688) STRICT (Zero-inflated) (3.0558) (.16) Dispersion Parameter (Alpha) (.3409) (.99) Log Likelihood (unrestricted) Log Likelihood (restricted) Likelihood Ratio for Goodness-of-fit Likelihood Ratio for Overdispersion * Vuong Statistic (ZINB versus NBRM) * The likelihood ratio for overdispersion is =*( ( )) Figure 4. Comparison of the Zero-Inflated PRM and the Zero-Inflated NBRM

6. Conclusion

As with other econometric models, researchers must first examine the data generation process of the dependent variable in order to understand its behavior. Sophisticated researchers pay special attention to excess zeros, censored and/or truncated counts, sample selection, and other particular patterns of data generation, and then decide which model best describes that process. The Poisson regression model and the negative binomial regression model have the same mean structure, but they describe the behavior of the dependent variable in different ways. Zero-inflated regression models integrate two different data generation processes to deal with overdispersion. Truncated or censored regression models are appropriate when data are (left and/or right) truncated or censored.

Researchers need to spend more time and effort interpreting the results substantively. Like other categorical dependent variable models, count data models produce estimates that are difficult to interpret intuitively. Reporting parameter estimates and goodness-of-fit statistics is not sufficient. J. Scott Long (1997) and Long and Freese (2003) provide good examples of meaningful interpretation of count data models.

Regarding statistical software, I would recommend STATA for general count data models and LIMDEP for special types of models. Although able to handle various models, LIMDEP does not seem stable and reliable. The SAS GENMOD procedure estimates the Poisson regression model and the negative binomial model, but it does not have easy ways of estimating other models. We encourage SAS Institute to develop an individual procedure, say the CLIM (Count and Limited Dependent Variable Model) procedure, to handle a variety of count data models.

Appendix: Data Set

The data set used here is a part of the data provided for David H. Good's class at the School of Public and Environmental Affairs, Indiana University. Note that these data have been manipulated for the sake of data security. The variables in the data set include:
1. emps: the size of the waste quota
2. strict: strictness of policy implementation (1=strict)
3. accident: the frequency of waste spill accidents of a plant

The following summarizes descriptive statistics of these variables. Note that there are many zero counts, which suggest an overdispersion problem.

. summarize accident emps strict
Variable Obs Mean Std. Dev. Min Max accident emps strict

. tab accident strict
strict accident 0 1 Total Total

References

Allison, Paul D. 1999. Logistic Regression Using the SAS System: Theory and Application. Cary, NC: SAS Institute.
Cameron, A. Colin, and Pravin K. Trivedi. 1998. Regression Analysis of Count Data. New York: Cambridge University Press.
Greene, William H. 2003. Econometric Analysis, 5th ed. Upper Saddle River, NJ: Prentice Hall.
Greene, William H. 2002. LIMDEP Version 8.0 Econometric Modeling Guide. Plainview, New York: Econometric Software.
Long, J. Scott, and Jeremy Freese. 2003. Regression Models for Categorical Dependent Variables Using STATA, 2nd ed. College Station, TX: STATA Press.
Long, J. Scott. 1997. Regression Models for Categorical and Limited Dependent Variables. Advanced Quantitative Techniques in the Social Sciences. Sage Publications.
Maddala, G. S. 1983. Limited Dependent and Qualitative Variables in Econometrics. New York: Cambridge University Press.
SAS Institute. 2004. SAS/STAT 9.1 User's Guide. Cary, NC: SAS Institute.
STATA Press. 2005. STATA Base Reference Manual, Release 9. College Station, TX: STATA Press.

Acknowledgements

I am grateful to Jeremy Albright and Kevin Wilhite at the UITS Center for Statistical and Mathematical Computing, Indiana University, who provided valuable comments and suggestions.

Revision History
2003. First draft
2004. Second draft
2005. Third draft (Added LIMDEP examples)

Linear Regression Models for Panel Data Using SAS, STATA, LIMDEP, and SPSS

Hun Myoung Park

This document summarizes linear regression models for panel data and illustrates how to estimate each model using SAS 9.1, STATA 9.0, LIMDEP 8.0, and SPSS 13.0. This document does not address nonlinear models (e.g., logit and probit models), but focuses on linear regression models.

1. Introduction
2. Least Squares Dummy Variable Regression
3. Panel Data Models
4. The Fixed Group Effect Model
5. The Fixed Time Effect Model
6. The Fixed Group and Time Effect Model
7. Random Effect Models
8. The Poolability Test
9. Conclusion

1. Introduction

Panel data are cross-sectional and longitudinal (time series). Some examples are the cumulative General Social Survey (GSS) and Current Population Survey (CPS) data. Panel data may have group effects, time effects, or both. These effects are analyzed by fixed effect and random effect models.

1.1 Data Arrangement

A panel data set contains observations on n individuals (e.g., firms and states), each measured at T points in time. In other words, each individual (subjects 1 through n) has T observations (time periods 1 through T). Thus, the total number of observations is nT. Figure 1 illustrates the data arrangement of a panel data set.

Figure 1. Data Arrangement of Panel Data (columns Group, Time, Variable1, Variable2, ...; each group 1 through n is stacked over its T time periods)

1.2 Fixed Effect versus Random Effect Models

Panel data models estimate fixed and/or random effects models using dummy variables. The core difference between fixed and random effect models lies in the role of the dummies. If the dummies are considered as a part of the intercept, it is a fixed effect model. In a random effect model, the dummies act as an error term (see Table 1).

The fixed effect model examines group differences in intercepts, assuming the same slopes and constant variance across groups. Fixed effect models use least squares dummy variable (LSDV), within effect, and between effect estimation methods. Thus, ordinary least squares (OLS) regressions with dummies, in fact, are fixed effect models.

Table 1. Fixed Effect and Random Effect Models
                   Fixed Effect Model                       Random Effect Model
Functional form*   y_it = (α + μ_i) + X_it'β + v_it         y_it = α + X_it'β + (μ_i + v_it)
Intercepts         Varying across group and/or time         Constant
Error variances    Constant                                 Varying across group and/or time
Slopes             Constant                                 Constant
Estimation         LSDV, within effect, between effect      GLS, FGLS
Hypothesis test    Incremental F test                       Breusch-Pagan LM test
* v_it ~ IID(0, σ²_v)

The random effect model, by contrast, estimates variance components for groups and error, assuming the same intercept and slopes. The difference among groups (or time periods) lies in the variance of the error term. This model is estimated by generalized least squares (GLS) when the Ω matrix, a variance structure among groups, is known. The feasible generalized least squares (FGLS) method is used to estimate the variance structure when Ω is not known. A typical example is the groupwise heteroscedastic regression model (Greene 2003). There are various estimation methods for FGLS, including maximum likelihood methods and simulations (Baltagi and Cheng 1994).

Fixed effects are tested by the (incremental) F test, while random effects are examined by the Lagrange multiplier (LM) test (Breusch and Pagan 1980). If the null hypothesis is not rejected, the pooled OLS regression is favored. The Hausman specification test (Hausman 1978) compares fixed effect and random effect models. Table 1 compares the fixed effect and random effect models.

Group effect models create dummies using grouping variables (e.g., country, firm, and race). If one grouping variable is considered, it is called a one-way fixed or random group effects model. Two-way group effect models have two sets of dummy variables, one for a grouping variable and the other for a time variable.

1.3 Estimation and Software Issues

LSDV regression, the within effect model, the between effect model (group or time mean model), GLS, and FGLS are fundamentally based on OLS in terms of estimation. Thus, any procedure and command for OLS is good for these panel data models. The REG procedure of SAS/STAT, the STATA .regress (and .cnsreg) command, the LIMDEP Regress$ command, and the SPSS Regression command all fit LSDV1, which drops one dummy, and have options to suppress the intercept (LSDV2). SAS, STATA, and LIMDEP can estimate OLS with restrictions (LSDV3), but SPSS cannot. Note that the STATA .cnsreg command requires the .constraint command that defines a restriction (Table 2).

Table 2. Procedures and Commands in SAS, STATA, LIMDEP, and SPSS
                        SAS 9.1                          STATA 9.0     LIMDEP 8.0                       SPSS 13.0
Regression (OLS)        PROC REG                         .regress      Regress$                         Regression
LSDV1                   w/o a dummy                      w/o a dummy   w/o a dummy                      w/o a dummy
LSDV2                   /NOINT                           noconstant    w/o ONE in Rhs                   /Origin
LSDV3                   RESTRICT                         .cnsreg       Cls:                             N/A
Fixed effect (within)   TSCSREG /FIXONE, PANEL /FIXONE   .xtreg w/ fe  Regress;Panel;Str=;Pds=;Fixed$   N/A
Two-way fixed (within)  TSCSREG /FIXTWO, PANEL /FIXTWO   N/A           Regress;Panel;Str=;Pds=;Fixed$   N/A
Between effect          PANEL /BTWNG, PANEL /BTWNT       .xtreg w/ be  Regress;Panel;Str=;Pds=;Means$   N/A
Random effect           TSCSREG /RANONE, PANEL /RANONE   .xtreg w/ re  Regress;Panel;Str=;Pds=;Random$  N/A
Two-way random          TSCSREG /RANTWO, PANEL /RANTWO   N/A           Problematic                      N/A

SAS, STATA, and LIMDEP also provide procedures (commands) that are designed to estimate panel data models conveniently. SAS/ETS has the TSCSREG and PANEL procedures to estimate one-way and two-way fixed and random effect models.1 For the fixed effect model, these procedures estimate LSDV1, which drops one of the dummy variables. For the random effect model, they by default use the Fuller-Battese method (1974) to estimate variance components for group, time, and error. These procedures also support other estimation methods such as the Parks (1967) autoregressive model and the Da Silva moving average method. The TSCSREG procedure can handle balanced data only, whereas the PANEL procedure is able to deal with balanced and unbalanced data. The former provides one-way and two-way fixed and random effect models, while the latter supports the between effect model and pooled OLS regression as well. Despite the advanced features of PANEL, output from the two procedures looks alike.

The STATA .xtreg command estimates within effect (fixed effect) models with the fe option, between effect models with the be option, and random effect models with the re option. This command, however, does not fit the two-way fixed and random effect models.

1 SAS recently announced PROC PANEL, an experimental procedure, for panel data models.

The LIMDEP regress$ command with the panel; subcommand estimates panel data models, but this command is not sufficiently stable. SPSS has limited ability to analyze panel data.

1.4 Data Sets

This document uses two data sets. The cross-sectional data set contains research and development (R&D) expenditure data of the top 50 information technology firms presented in the OECD Information Technology Outlook 2004. The panel data set has cost data for U.S. airlines ( ) from Econometric Analysis (Greene 2003). See the Appendix for the details.
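The panel data models discussed from Chapter 3 onward can be previewed with the airline data just described (the cost, output, fuel, and load variables used in Chapter 4). A minimal STATA sketch of the .xtreg options listed in Table 2; in STATA 9 the i() option names the grouping variable, while newer releases declare the panel once with .xtset airline year instead:

. xtreg cost output fuel load, fe i(airline)     // fixed (within) effect model
. xtreg cost output fuel load, be i(airline)     // between effect model
. xtreg cost output fuel load, re i(airline)     // random effect (FGLS) model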

60 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 5. Least Squares Dummy Variable Regression A dummy variable is a binary variable that is coded either 1 or zero. It is commonly used to examine group and time effects in regression. Consider a simple model of regressing R&D expenditure in 00 on 000 net income and firm type. The dummy variable d1 is set to 1 for equipment and software firms and zero for telecommunication and electronics. The variable d is coded in the opposite way. Take a look at the data structure (Figure ). Figure. Dummy Variable Coding for Firm Type firm rnd income type d1 d Samsung,500 4,768 Electronics 0 1 AT&T 54 4,669 Telecom 0 1 IBM 4,750 8,093 IT Equipment 1 0 Siemens 5,490 6,58 Electronics 0 1 Verizon. 11,797 Telecom 0 1 Microsoft 3,77 9,41 Service & S/W Model 1 without a Dummy Variable The ordinary least squares (OLS) regression without dummy variables, a pooled regression model, assumes a constant intercept and slope regardless of firm types. In the following regression equation, β 0 is the intercept; β 1 is the slope of net income in 000; and ε i is the error term. Model 1: R & D i = β 0 + β1income i + ε i The pooled model has the intercept of 1, and slope of.3. For a $ one million increase in net income, a firm is likely to increase R&D expenditure in 00 by $.3 million.. regress rnd income Source SS df MS Number of obs = F( 1, 37) = 7.07 Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = rnd Coef. Std. Err. t P> t [95% Conf. Interval] income _cons Pooled model: R&D = 1, *income Despite moderate goodness of fit statistics such as F and t, this is a naïve model. R&D investment tends to vary across industries.

2.2 Model 2 with a Dummy Variable

You may assume that equipment and software firms have more R&D expenditure than other types of companies. Let us take this group difference into account. We have to drop one of the two dummy variables in order to avoid perfect multicollinearity. That is, OLS does not work with both dummies in a model. The δ1 in Model 2 is the coefficient that is valid in equipment and software companies only.

Model 2: R&D_i = β0 + β1*income_i + δ1*d1_i + ε_i

Unlike Model 1, this model results in two different regression equations for the two groups. The difference lies in the intercepts, but the slope remains unchanged.

. regress rnd income d1
Source SS df MS Number of obs = F(2, 36) = 6.06 Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =
rnd Coef. Std. Err. t P> t [95% Conf. Interval] income d1 _cons

d1=1: R&D =, *income = 1, ,006.66*1 +.18*income
d1=0: R&D = 1, *income = 1, ,006.66*0 +.18*income

The slope .18 indicates a positive impact of two-year-lagged net income on a firm's R&D expenditure. Equipment and software firms on average spend $1,007 million more for R&D than telecommunication and electronics companies.

2.3 Visualization of Models 1 and 2

There is only a tiny difference in the slope (.3 versus .18) between Model 1 and Model 2. The intercept 1,483 of Model 1, however, is quite different from 2,140 for equipment and software companies and 1,134 for telecommunications and electronics in Model 2. This result appears to support Model 2. Figure 3 highlights the differences between Model 1 and Model 2 more clearly. The black line (pooled) in the middle is the regression line of Model 1; the red line at the top is the one for equipment and software companies (d1=1) in Model 2; finally, the blue line at the bottom is for telecommunication and electronics firms (d2=1 or d1=0).

2 The dummy variable (firm types) and regressors (net income) may or may not be correlated.

Figure 3. Regression Lines of Model 1 and Model 2

This plot shows that Model 1 ignores the group difference and thus reports a misleading intercept. The difference in the intercept between the two groups of firms looks substantial. Moreover, the two models have similar slopes. Consequently, Model 2, which considers fixed group effects, seems better than the simple Model 1. Compare the goodness-of-fit statistics (e.g., F, t, R², and SSE) of the two models. See Sections 3.2.2 and 4.7 for formal hypothesis testing.

2.4 Alternatives to LSDV1

The least squares dummy variable (LSDV) regression is ordinary least squares (OLS) with dummy variables. The critical issue in LSDV is how to avoid perfect multicollinearity, the so-called dummy variable trap. LSDV has three approaches to avoid getting caught in the trap. They produce different parameter estimates of the dummies, but their results are equivalent. The first approach, LSDV1, drops a dummy variable, as in Model 2 above. The second approach includes all dummies and, in turn, suppresses the intercept (LSDV2). Finally, LSDV3 includes the intercept and all dummies and then imposes a restriction that the sum of the parameters of all dummies is zero. Take a look at the following functional forms to compare these three LSDVs.

LSDV1: R&D_i = β0 + β1*income_i + δ1*d1_i + ε_i, or R&D_i = β0 + β1*income_i + δ2*d2_i + ε_i
LSDV2: R&D_i = β1*income_i + δ1*d1_i + δ2*d2_i + ε_i
LSDV3: R&D_i = β0 + β1*income_i + δ1*d1_i + δ2*d2_i + ε_i, subject to δ1 + δ2 = 0

The main differences among these approaches lie in the meaning of the dummy variable parameters. Each approach defines the coefficients of the dummy variables in a different way (Table 3). The parameter estimates in LSDV2 are the actual intercepts of the groups, making them easy to interpret substantively. LSDV1 reports differences from the reference point (the dropped dummy variable). LSDV3 computes how far parameter estimates are away from the average group effect. Accordingly, the null hypotheses of the t-tests in the three approaches are different. Keep in mind that the R² of LSDV2 is not correct. Table 3 contrasts the three LSDVs.

Table 3. Three Approaches of Least Squares Dummy Variable Models
                        LSDV1: Drop one dummy           LSDV2: Suppress the intercept   LSDV3: Impose a restriction*
Dummies included        α, d1^a ... d(d-1)^a            d1* ... dd*                     α, d1^c ... dd^c
Intercept?              Yes                             No                              Yes
All dummies?            No (d-1)                        Yes (d)                         Yes (d)
Restriction?            No                              No                              Σ d_i^c = 0
Meaning of coefficient  How far away from the           Fixed group effect              How far away from the
                        reference point (dropped)?      (actual group intercept)        average group effect?
Coefficients            d_i* = α + d_i^a,               d_i*                            d_i* = α + d_i^c, where α is
                        d*_dropped = α                                                  the average of the d_i*
H0 of t-test            d_i^a = 0                       d_i* = 0                        d_i^c = 0
Source: David Good's lecture (2004)
* This restriction reduces the number of parameters to be estimated, making the model identified.

2.5 Estimating Three LSDVs

The SAS REG procedure, STATA .regress command, LIMDEP Regress$ command, and SPSS Regression command all fit OLS and the LSDVs. Let us estimate the three LSDVs using SAS and STATA.

2.5.1 LSDV1 without a Dummy

LSDV1 drops a dummy variable. The intercept is the actual parameter estimate of the dropped dummy variable. The coefficient of an included dummy means how far its parameter estimate is away from the reference point or baseline (i.e., the intercept). Here we include d2 instead of d1 to see how a different reference point changes the result. Check the sign of the included dummy coefficient and the intercept. Dropping other dummies does not make any significant difference.

PROC REG DATA=masil.rnd00;
   MODEL rnd = income d2;

64 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 9 The REG Procedure Model: MODEL1 Dependent Variable: rnd Number of Observations Read 50 Number of Observations Used 39 Number of Observations with Missing Values 11 Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F Model Error Corrected Total Root MSE R-Square 0.50 Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 income d d=0: R&D =, *income =, ,006.66*0 +.18*income d=1: R&D = 1, *income =, ,006.66*1 +.18*income.5. LSDV without the Intercept LSDV includes all dummy variables and suppresses the intercept. The STATA.regress command has the noconstant option to fit LSDV. The coefficients of dummies are actual parameter estimates; thus, you do not need to compute intercepts of groups. This LSDV, however, reports wrong R ( ).. regress rnd income d1 d, noconstant Source SS df MS Number of obs = F( 3, 36) = 9.88 Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = rnd Coef. Std. Err. t P> t [95% Conf. Interval] income d d

65 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 10 d1=1: R&D =, *income d=1: R&D = 1, *income.5.3 LSDV 3 with a Restriction LSDV 3 includes the intercept and all dummies and then imposes a restriction on the model. The restriction is that the sum of all dummy parameters is zero. The STATA.constraint command defines a constraint, while the.cnsreg command fits a constrained OLS using the constraint()option. The number in the parenthesis indicates the constraint number defined in the.constraint command.. constraint 1 d1 + d = 0. cnsreg rnd income d1 d, constraint(1) Constrained linear regression Number of obs = 39 F(, 36) = 6.06 Prob > F = Root MSE = ( 1) d1 + d = 0 rnd Coef. Std. Err. t P> t [95% Conf. Interval] income d d _cons d1=1: R&D =, *income = 1, *1 + (-503)*0 +.18*income d=1: R&D = 1, *income = 1, *0 + (-503)*1 +.18*income The intercept is the average of actual parameter estimates: 1,636 = (,140+1,133)/. In the SAS output below, the coefficient of RESTRICT is virtually zero and, in theory, should be zero. PROC REG DATA=masil.rnd00; MODEL rnd = income d1 d; RESTRICT d1 + d = 0; RUN; The REG Procedure Model: MODEL1 Dependent Variable: rnd NOTE: Restrictions have been applied to parameter estimates. Number of Observations Read 50 Number of Observations Used 39 Number of Observations with Missing Values 11 Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F Model

66 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 11 Error Corrected Total Root MSE R-Square 0.50 Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 income d d RESTRICT E * Probability computed using beta distribution. Table 4 compares how SAS, STATA, LIMDEP, and SPSS conducts LSDVs. SPSS is not able to fit the LSDV3. In LIMDEP, the b() of the Cls: indicates the parameter estimate of the second independent variable. In SPSS, pay attention to the /ORIGIN option for LSDV. Table 4. Estimating Three LSDVs Using SAS, STATA, LIMDEP, and SPSS LSDV 1 LSDV LSDV 3 SAS PROC REG; MODEL rnd = income d; RUN; PROC REG; MODEL rnd = income d1 d /NOINT; RUN; PROC REG; MODEL rnd = income d1 d; RESTRICT d1 + d = 0; RUN; STATA. regress ind income d. regress rnd income d1 d, noconstant. constraint 1 d1+ d = 0. cnsreg rnd income d1 d const(1) LIMDEP REGRESS; REGRESS; REGRESS; Lhs=rnd; Lhs=rnd; Lhs=rnd; Rhs=ONE,income, d$ Rhs=income, d1, d$ Rhs=ONE,income, d1, d; Cls: b()+b(3)=0$ SPSS REGRESSION /MISSING LISTWISE /STATISTICS COEFF R ANOVA /CRITERIA=PIN(.05) POUT(.10) /NOORIGIN /DEPENDENT rnd /METHOD=ENTER income d. REGRESSION /MISSING LISTWISE /STATISTICS COEFF R ANOVA /CRITERIA=PIN(.05) POUT(.10) /ORIGIN /DEPENDENT rnd /METHOD=ENTER income d1 d. N/A

3. Panel Data Models

Panel data may have group effects, time effects, or both. These effects are either fixed or random. A fixed effect model assumes differences in intercepts across groups or time periods, whereas a random effect model explores differences in error variances. A one-way model includes only one set of dummy variables (e.g., firm), while a two-way model considers two sets of dummy variables (e.g., firm and year). Model 2 in Chapter 2, in fact, is a one-way fixed group effect panel data model.

3.1 Functional Forms and Notation

The functional forms of the one-way panel data models are as follows.

Fixed group effect model: y_it = (α + μ_i) + X_it'β + v_it, where v_it ~ IID(0, σ²_v)
Random group effect model: y_it = α + X_it'β + (μ_i + v_it), where v_it ~ IID(0, σ²_v)

The dummy variable is a part of the intercept in the fixed effect model and a part of the error term in the random effect model. v_it ~ IID(0, σ²_v) indicates that the errors are independent and identically distributed. The notations used in this document are:

ȳ_i: mean of the dependent variable (DV) for group i
x̄_t: means of the independent variables (IVs) at time t
ȳ and x̄: overall means of the DV and IVs, respectively
n: the number of groups or firms
T: the number of time periods
N = nT: total number of observations
k: the number of regressors excluding dummy variables
K = k + 1 (including the intercept)

3.2 Fixed Effect Models

There are several strategies for estimating fixed effect models. The least squares dummy variable model (LSDV) uses dummy variables, whereas the within effect model does not. These strategies produce identical slopes for the non-dummy independent variables. The between effect model also does not use dummies, but produces different parameter estimates. There are pros and cons to these strategies (Table 5).

3.2.1 Estimation: LSDV, Within Effect, and Between Effect Models

As discussed in Chapter 2, LSDV is widely used because it is relatively easy to estimate and interpret substantively. This LSDV, however, becomes problematic when there are many

groups or subjects in the panel data. If T is fixed and N grows large, only the coefficients of the regressors are consistent. The coefficients of the dummy variables, α + μ_i, are not consistent, since the number of these parameters increases as N increases (Baltagi 2001). This is the so-called incidental parameter problem. Under this circumstance, LSDV is useless, calling for another strategy, the within effect model.

The within effect model does not use dummy variables but uses deviations from group means. Thus, this model is the OLS of (y_it − ȳ_i) = β'(x_it − x̄_i) + (ε_it − ε̄_i), without an intercept.3 You do not need to worry about the incidental parameter problem any more. The parameter estimates of the regressors are identical to those of LSDV. The within effect model in turn has several disadvantages.

Table 5. Three Strategies for Fixed Effect Models
                        LSDV1                         Within Effect                                   Between Effect
Functional form         y_it = α_i + X_it'β + ε_it    y_it − ȳ_i = (x_it − x̄_i)'β + (ε_it − ε̄_i)       ȳ_i = α + x̄_i'β + ε̄_i
Dummy                   Yes                           No                                              No
Dummy coefficient       Presented                     Need to be computed                             N/A
Transformation          No                            Deviation from the group means                  Group means
Intercept (estimation)  Yes                           No                                              No
R²                      Correct                       Incorrect
SSE                     Correct                       Correct
MSE                     Correct                       Smaller
Standard error of β     Correct                       Incorrect (smaller)
DF error                nT−n−k                        nT−k (larger)                                   n−k
Observations            nT                            nT                                              n

Since the within effect model does not report dummy coefficients, you need to compute them using the formula d_g* = ȳ_g − β'x̄_g. Since no dummy is used, the within effect model has a larger degree of freedom for error, resulting in a smaller MSE (mean square error) and incorrect (smaller) standard errors of the parameter estimates. Thus, you have to adjust the standard errors using the formula

se_k* = se_k^within * sqrt(df_error^within / df_error^LSDV) = se_k^within * sqrt((nT−k)/(nT−n−k)).

Finally, the R² of the within effect model is not correct because the intercept is suppressed.

The between group effect model, the so-called group mean regression, uses the group means of the dependent and independent variables: run OLS of ȳ_i = α + x̄_i'β + ε̄_i. The number of observations decreases to n. This model uses aggregated data to test effects between groups (or individuals), assuming no group and time effects. Table 5 contrasts LSDV, the within effect model, and the between group effect model. In the two-way fixed effect model, LSDV and the between effect model are not valid.

3 You need to follow three steps: 1) compute the group means of the dependent and independent variables; 2) transform the variables to get deviations of the individual values from the group means; 3) run OLS with the transformed variables without the intercept.
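The equivalence of the LSDV and within effect slopes is easy to verify once the airline data of Chapter 4 are loaded. A minimal sketch (the dummy variables g1-g6 for the six airlines are those used in Chapter 4; .xtreg with the fe option performs the within transformation and the degrees-of-freedom correction automatically):

. regress cost g1-g6 output fuel load, noconstant     // LSDV2
. xtreg cost output fuel load, fe i(airline)          // within effect model

The coefficients of output, fuel, and load should agree across the two runs; only the handling of the group intercepts differs.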

3.2.2 Testing Group Effects

The null hypothesis is that all dummy parameters except one are zero: H0: μ_1 = ... = μ_{n−1} = 0. This hypothesis is tested by the F test, which is based on the loss of goodness of fit. The robust model in the following formula is the LSDV and the efficient model is the pooled regression.4

F = [(e'e_efficient − e'e_robust) / (n−1)] / [e'e_robust / (nT−n−k)]
  = [(R²_robust − R²_efficient) / (n−1)] / [(1 − R²_robust) / (nT−n−k)] ~ F(n−1, nT−n−k)

If the null hypothesis is rejected, you may conclude that the fixed group effect model is better than the pooled OLS model.

3.2.3 Fixed Time Effect and Two-way Fixed Effect Models

For the fixed time effect model, you need to switch n and T, and i and t, in the formulas.

Model: y_it = α + τ_t + β'X_it + ε_it
Within effect model: (y_it − ȳ_t) = β'(x_it − x̄_t) + (ε_it − ε̄_t)
Dummy coefficients: d_t* = ȳ_t − β'x̄_t
Correct standard errors: se_k* = se_k^within * sqrt(df_error^within / df_error^LSDV) = se_k^within * sqrt((Tn−k)/(Tn−T−k))
Between effect model: ȳ_t = α + x̄_t'β + ε̄_t
H0: τ_1 = ... = τ_{T−1} = 0
F-test: [(e'e_efficient − e'e_robust) / (T−1)] / [e'e_robust / (Tn−T−k)] ~ F(T−1, Tn−T−k)

The fixed group and time effect model uses slightly different formulas. The within effect model of this two-way fixed model has four approaches for LSDV (see 6.1 for details).

Model: y_it = α + μ_i + τ_t + β'X_it + ε_it
Within effect model: y_it* = y_it − ȳ_i − ȳ_t + ȳ and x_it* = x_it − x̄_i − x̄_t + x̄
Dummy coefficients: d_g* = (ȳ_g − ȳ) − b'(x̄_g − x̄) and d_t* = (ȳ_t − ȳ) − b'(x̄_t − x̄)
Correct standard errors: se_k* = se_k^within * sqrt(df_error^within / df_error^LSDV) = se_k^within * sqrt((nT−k)/(nT−n−T−k+1))
H0: μ_1 = ... = μ_{n−1} = 0 and τ_1 = ... = τ_{T−1} = 0
F-test: [(e'e_efficient − e'e_robust) / (n+T−2)] / [e'e_robust / (nT−n−T−k+1)] ~ F(n+T−2, nT−n−T−k+1)

4 When comparing fixed effect and random effect models, the fixed effect estimates are considered the robust estimates and the random effect estimates the efficient estimates.
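The incremental F test of section 3.2.2 does not have to be computed by hand. After fitting LSDV1, a joint test that all included dummy parameters are zero gives the same F statistic, since Wald and SSE-based F tests coincide in OLS. A minimal sketch with the airline data and dummies of Chapter 4:

. quietly regress cost g1-g5 output fuel load     // LSDV1 (g6 is the reference group)
. testparm g1-g5                                  // H0: all group dummy parameters are zero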

3.3 Random Effect Models

The one-way random group effect model is formulated as

y_it = α + β'X_it + μ_i + v_it, with w_it = μ_i + v_it,

where μ_i ~ IID(0, σ²_μ) and v_it ~ IID(0, σ²_v). The μ_i are assumed independent of v_it and X_it, which are also independent of each other for all i and t. Remember that this assumption is not necessary in the fixed effect model. The components of Cov(w_it, w_js) = E(w_it w_js) are σ²_μ + σ²_v if i=j and t=s, and σ²_μ if i=j and t≠s.5

A random effect model is estimated by generalized least squares (GLS) when the variance structure is known, and by feasible generalized least squares (FGLS) when the variance is unknown. Compared to fixed effect models, random effect models are relatively difficult to estimate. This document assumes that the panel data are balanced.

3.3.1 Generalized Least Squares (GLS)

When Ω is known (given), GLS based on the true variance components is BLUE, and all the feasible GLS estimators considered are asymptotically efficient as either n or T approaches infinity (Baltagi 2001). The T x T matrix Ω looks like

Ω = | σ²_μ + σ²_v   σ²_μ          ...   σ²_μ          |
    | σ²_μ          σ²_μ + σ²_v   ...   σ²_μ          |
    | ...           ...           ...   ...           |
    | σ²_μ          σ²_μ          ...   σ²_μ + σ²_v   |

In GLS, you just need to compute θ using the Ω matrix: θ = 1 − σ_v / sqrt(T σ²_μ + σ²_v).6 Then transform the variables as follows:

y_it* = y_it − θ ȳ_i
x_it* = x_it − θ x̄_i for all X_k
α* = 1 − θ

Finally, run OLS with the transformed variables: y_it* = α* + x_it*'β + ε_it*. Since Ω is often unknown, FGLS is used more frequently than GLS.

3.3.2 Feasible Generalized Least Squares (FGLS)

5 This implies that Corr(w_it, w_js) = 1 if i=j and t=s, and σ²_μ / (σ²_μ + σ²_v) if i=j and t≠s.
6 If θ = 0, run the pooled OLS. If θ = 1 (σ_v = 0), run the within effect model.

If Ω is unknown, you first have to estimate θ using σ̂²_μ and σ̂²_v:

θ̂ = 1 − σ̂_v / sqrt(T σ̂²_μ + σ̂²_v) = 1 − sqrt(σ̂²_v / (T σ̂²_between))

The σ̂²_v is derived from the SSE (sum of squares due to error) of the within effect model, or from the deviations of the residuals from the group means of the residuals:

σ̂²_v = SSE_within / (nT−n−k) = e'e_within / (nT−n−k) = ΣΣ(v_it − v̄_i)² / (nT−n−k),

where the v_it are the residuals of LSDV1. The σ̂²_μ comes from the between effect model (group mean regression):

σ̂²_μ = σ̂²_between − σ̂²_v / T, where σ̂²_between = SSE_between / (n−K).

Next, transform the variables using θ̂ and then run OLS:

y_it* = y_it − θ̂ ȳ_i
x_it* = x_it − θ̂ x̄_i for all X_k
α* = 1 − θ̂
y_it* = α* + x_it*'β + ε_it*

The estimation of the two-way random effect model is skipped here, since it is complicated.

3.3.3 Testing Random Effects (LM test)

The null hypothesis is that the cross-sectional variance components are zero: H0: σ²_μ = 0. Breusch and Pagan (1980) developed the Lagrange multiplier (LM) test (Greene 2003; Judge et al. 1988). In the following formula, ē is the n x 1 vector of the group-specific means of the pooled regression residuals, and e'e is the SSE of the pooled OLS regression. The LM statistic is distributed as chi-squared with one degree of freedom.

LM_μ = nT / [2(T−1)] * [ e'DD'e / e'e − 1 ]² = nT / [2(T−1)] * [ T² ē'ē / e'e − 1 ]² ~ χ²(1)

Baltagi (2001) presents the same LM test in a different way:

LM_μ = nT / [2(T−1)] * [ Σ_i(Σ_t e_it)² / Σ_i Σ_t e_it² − 1 ]² ~ χ²(1)

The two-way random effect model has the null hypothesis H0: σ²_μ = 0 and σ²_v = 0. The LM test combines the two one-way random effect tests for group and time: LM_μv = LM_μ + LM_v ~ χ²(2).
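The FGLS steps above can be sketched in STATA with the airline data of Chapter 4. The sketch assumes T = 15 time periods per airline (90 observations, 6 groups) and takes the variance components from the within model and a group mean regression; the names of stored results (e.g., e(sigma_e), e(rmse)) can be confirmed with .ereturn list.

. quietly xtreg cost output fuel load, fe i(airline)
. scalar s2_v = e(sigma_e)^2                      // sigma-squared_v from the within effect model
. egen gm_cost = mean(cost), by(airline)          // group means
. egen gm_output = mean(output), by(airline)
. egen gm_fuel = mean(fuel), by(airline)
. egen gm_load = mean(load), by(airline)
. egen pick = tag(airline)                        // keep one observation per airline
. quietly regress gm_cost gm_output gm_fuel gm_load if pick
. scalar s2_b = e(rmse)^2                         // sigma-squared_between = SSE_between/(n-K)
. scalar theta = 1 - sqrt(s2_v/(15*s2_b))         // T = 15
. gen re_cost = cost - theta*gm_cost              // partial demeaning; repeat for the regressors
. gen re_one = 1 - theta                          // transformed intercept term

Running OLS of re_cost on re_one and the transformed regressors with the noconstant option then gives the FGLS estimates. In practice, .xtreg with the re option does all of this in one step, and the Breusch-Pagan LM test of section 3.3.3 is available as .xttest0 immediately after .xtreg, re.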

3.4 Hausman Test: Fixed Effects versus Random Effects

The Hausman specification test compares the fixed and random effect models under the null hypothesis that the individual effects are uncorrelated with the other regressors in the model (Hausman 1978). If they are correlated (H0 is rejected), a random effect model produces biased estimators, violating one of the Gauss-Markov assumptions, so a fixed effect model is preferred. Hausman's essential result is that the covariance of an efficient estimator with its difference from an inefficient estimator is zero (Greene 2003).

m = (b_robust − b_efficient)' Σ̂⁻¹ (b_robust − b_efficient) ~ χ²(k),

where Σ̂ = Var[b_robust − b_efficient] = Var(b_robust) − Var(b_efficient) is the difference between the estimated covariance matrix of the parameter estimates in the LSDV model (robust) and that of the random effect model (efficient). It is notable that the intercept and dummy variables SHOULD be excluded from the computation.

3.5 Poolability Test

What is poolability? It asks whether the slopes are the same across groups or over time. Thus, the null hypothesis of the poolability test is H0: β_ik = β_k. Remember that slopes remain constant in fixed and random effect models; only the intercepts and error variances matter. The poolability test is undertaken under the assumption μ ~ N(0, s²I_nT). This test uses the F statistic

F_obs = [(e'e − Σ_i e_i'e_i) / ((n−1)K)] / [Σ_i e_i'e_i / (n(T−K))] ~ F[(n−1)K, n(T−K)],

where e'e is the SSE of the pooled OLS and e_i'e_i is the SSE of the OLS regression for group i. If the null hypothesis is rejected, the panel data are not poolable. Under this circumstance, you may go to the random coefficient model or a hierarchical regression model.

Similarly, the null hypothesis of the poolability test over time is H0: β_tk = β_k. The F test is

F_obs = [(e'e − Σ_t e_t'e_t) / ((T−1)K)] / [Σ_t e_t'e_t / (T(n−K))] ~ F[(T−1)K, T(n−K)],

where e_t'e_t is the SSE of the OLS regression at time t.
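The Hausman comparison of section 3.4 can be run in STATA before turning to the estimation chapters. A minimal sketch with the airline data of Chapter 4 (.hausman expects the consistent, fixed effect estimates first):

. quietly xtreg cost output fuel load, fe i(airline)
. estimates store fixed
. quietly xtreg cost output fuel load, re i(airline)
. estimates store random
. hausman fixed random

A large, significant chi-squared rejects the null hypothesis that the individual effects are uncorrelated with the regressors, favoring the fixed effect model.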

73 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: The Fixed Group Effect Model The one-way fixed group model examines group differences in the intercepts. The LSDV for this fixed model needs to create as many dummy variables as the number of groups or subjects. When many dummies are needed, the within effect model is useful since it transforms variables using group means to avoid dummies. The between effect model uses group means of variables. 4.1 The Pooled OLS Regression Model Let us first consider the pooled model without dummy variables.. regress cost output fuel load // pooled model Source SS df MS Number of obs = F( 3, 86) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.1461 cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons cost = *output +.454*fuel -1.68*load. This model fits the data well (p<.0000 and R =.9883). We may, however, suspect fixed group effects that produce different intercepts across groups. As discussed in Chapter, there are three equivalent approaches of LSDV. They report the identical parameter estimates of regresors excluding dummies. Let us begin with LSDV1. 4. LSDV1 without a Dummy LSDV1 drops a dummy variable to identify the model. LSDV1 produces correct ANOVA information, goodness of fit, parameter estimates, and standard errors. As a consequence, this approach is commonly used in practice. LSDV produces six regression equations for six groups (airlines). Group1: cost = *output +.417*fuel *load Group: cost = *output +.417*fuel *load Group3: cost = *output +.417*fuel *load Group4: cost = *output +.417*fuel *load Group5: cost = *output +.417*fuel *load Group6: cost = *output +.417*fuel *load In SAS, the REG procedure fits the OLS regression model. Let us drop the last dummy g6, the reference point. PROC REG DATA=masil.airline; MODEL cost = g1-g5 output fuel load;

74 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 19 RUN; The REG Procedure Model: MODEL1 Dependent Variable: cost Number of Observations Read 90 Number of Observations Used 90 Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F Model <.0001 Error Corrected Total Root MSE R-Square Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 g g g <.0001 g g output <.0001 fuel <.0001 load <.0001 Note that the parameter estimate of g6 is presented in the intercept (9.793). Other dummy parameter estimates are computed with the reference point. The actual intercept of the group 1, for example, is computed as = (-.087)*1 + (-.183)*0 + (-.960)*0 + (.0975)*0 + (-.0630)*0, where is the reference point. STATA has the.regress command for OLS regression (LSDV).. regress cost g1-g5 output fuel load Source SS df MS Number of obs = F( 8, 81) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = cost Coef. Std. Err. t P> t [95% Conf. Interval]

75 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: g g g g g output fuel load _cons Now, run the LIMDEP Regress$ command to fit the LSDV1. Do not forget to include ONE for the intercept in the Rhs;. --> REGRESS;Lhs=COST;Rhs=ONE,G1,G,G3,G4,G5,OUTPUT,FUEL,LOAD$ Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 9, Deg.Fr.= 81 Residuals: Sum of squares= , Std.Dev.= Fit: R-squared= , Adjusted R-squared = Model test: F[ 8, 81] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -5.58, Akaike Info. Crt.= Autocorrel: Durbin-Watson Statistic = , Rho = Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X Constant G E E G E G E G E E G E E OUTPUT E FUEL E LOAD (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) What if you drop a different dummy variable, say g1, instead of g6? Since the different reference point is applied, you will get different dummy coefficients. The other statistics such as goodness-of-fits, however, remain unchanged.. regress cost g-g6 output fuel load // LSDV1 dropping g1 Source SS df MS Number of obs = F( 8, 81) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = cost Coef. Std. Err. t P> t [95% Conf. Interval] g g g

76 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 1 g g output fuel load _cons When you have not created dummy variables, take advantage of the.xi prefix command. 7 Note that STATA by default drops the first dummy variable while the SAS TSCSREG and PANEL procedures in 4.5. drops the last dummy.. xi: regress cost i.airline output fuel load i.airline _Iairline_1-6 (naturally coded; _Iairline_1 omitted) Source SS df MS Number of obs = F( 8, 81) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = cost Coef. Std. Err. t P> t [95% Conf. Interval] _Iairline_ _Iairline_ _Iairline_ _Iairline_ _Iairline_ output fuel load _cons LSDV without the Intercept LSDV reports actual parameter estimates of the dummies. Because LSDV suppresses the intercept, you will get incorrect F and R statistics. In the SAS REG procedure, you need to use the /NOINT option to suppress the intercept. Note that the F value of 497,985 and R of 1 are not likely. PROC REG DATA=masil.airline; MODEL cost = g1-g6 output fuel load /NOINT; RUN; The REG Procedure Model: MODEL1 Dependent Variable: cost Number of Observations Read 90 Number of Observations Used 90 7 The STATA.xi is used either as an ordinary command or a prefix command like.bysort. This command creates dummies from a categorical variable specified in the term i. and then run the command following the colon.

77 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: NOTE: No intercept in model. R-Square is redefined. Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F Model <.0001 Error Uncorrected Total Root MSE R-Square Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard Variable DF Estimate Error t Value Pr > t g <.0001 g <.0001 g <.0001 g <.0001 g <.0001 g <.0001 output <.0001 fuel <.0001 load <.0001 STATA uses the noconstant option to suppress the intercept. Note that noc is its abbreviation.. regress cost g1-g6 output fuel load, noc Source SS df MS Number of obs = F( 9, 81) =. Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = cost Coef. Std. Err. t P> t [95% Conf. Interval] g g g g g g output fuel load In LIMDEP, you need to drop ONE out of the Rhs; to suppress the intercept. Unlike SAS and STATA, LIMDEP reports correct R and F even in LSDV.

78 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 3 --> REGRESS;Lhs=COST;Rhs=G1,G,G3,G4,G5,G6,OUTPUT,FUEL,LOAD$ Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 9, Deg.Fr.= 81 Residuals: Sum of squares= , Std.Dev.= Fit: R-squared= , Adjusted R-squared = Model test: F[ 8, 81] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -5.58, Akaike Info. Crt.= Model does not contain ONE. R-squared and F can be negative! Autocorrel: Durbin-Watson Statistic = , Rho = Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X G G G G G G OUTPUT E FUEL E LOAD (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) 4.4 LSDV3 with Restrictions LSDV3 imposes a restriction that the sum of the dummy parameters is zero. The SAS REG procedure uses the RESTRICT statement to impose restrictions. PROC REG DATA=masil.airline; MODEL cost = g1-g6 output fuel load; RESTRICT g1 + g + g3 + g4 + g5 + g6 = 0; RUN; The REG Procedure Model: MODEL1 Dependent Variable: cost NOTE: Restrictions have been applied to parameter estimates. Number of Observations Read 90 Number of Observations Used 90 Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F

79 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 4 Model <.0001 Error Corrected Total Root MSE R-Square Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 g g g <.0001 g <.0001 g g output <.0001 fuel <.0001 load <.0001 RESTRICT E E * * Probability computed using beta distribution. The dummy coefficients mean deviations from the averaged group effect (9.714). The actual intercept of group, for example, is = (-.049). Note that the E-15 of RESTRICT below is virtually zero. In STATA, you have to use the.cnsreg command rather than.regress. The command, however, does not provide an ANOVA table and goodness-of-fit statistics.. constraint define 1 g1 + g + g3 + g4 + g5 + g6 = 0. cnsreg cost g1-g6 output fuel load, constraint(1) Constrained linear regression Number of obs = 90 F( 8, 81) = Prob > F = Root MSE = ( 1) g1 + g + g3 + g4 + g5 + g6 = 0 cost Coef. Std. Err. t P> t [95% Conf. Interval] g g g g g g output fuel load _cons
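If you prefer to let STATA do this arithmetic, .lincom after .cnsreg returns the actual intercept of a group together with its standard error; a minimal sketch for group 2:

. lincom _cons + g2   // actual intercept of group 2 under LSDV3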

80 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 5 LIMDEP has the Cls$ subcommand to impose restrictions. Again, do not forget to include ONE in the Rhs;. --> REGRESS;Lhs=COST;Rhs=ONE,G1,G,G3,G4,G5,G6,OUTPUT,FUEL,LOAD; Cls:b(1)+b()+b(3)+b(4)+b(5)+b(6)=0$ Linearly restricted regression Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 9, Deg.Fr.= 81 Residuals: Sum of squares= , Std.Dev.= Fit: R-squared= , Adjusted R-squared = (Note: Not using OLS. R-squared is not bounded in [0,1] Model test: F[ 8, 81] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -5.58, Akaike Info. Crt.= Note, when restrictions are imposed, R-squared can be less than zero. F[ 1, 80] for the restrictions =.0000, Prob = Autocorrel: Durbin-Watson Statistic = , Rho = Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X Constant G E G E G E G E G E G E OUTPUT E FUEL E LOAD LSDV3 in LIMDEP reports different dummy coefficients. But you may draw actual intercepts of groups in a manner similar to what you would do in SAS and STATA. The actual intercept of group 3, for example, is = (-.65). 4.5 Within Group Effect Model The within effect model does not use the dummies and thus has larger degrees of freedom, smaller MSE, and smaller standard errors of parameters than those of LSDV. As a consequence, you need to adjust standard errors. This model does not report individual dummy coefficients either. The SAS TSCSREG procedure and LIMDEP Regress$ command report the adjusted (correct) MSE, SEE (Root MSE), R, and standard errors Estimating the Within Effect Model

81 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 6 First, let us manually estimate the within group effect model in STATA. You need to compute group means and transform dependent and independent variables using group means (log is skipped here).. egen gm_cost=mean(cost), by(airline) // compute group means. egen gm_output=mean(output), by(airline). egen gm_fuel=mean(fuel), by(airline). egen gm_load=mean(load), by(airline) You will get the following group means of variables airline gm_cost gm_output gm_fuel gm_load gen gw_cost = cost - gm_cost // compute deviations from the group means. gen gw_output = output - gm_output. gen gw_fuel = fuel - gm_fuel. gen gw_load = load - gm_load Now, we are ready to run the within effect model. Keep in mind that you have to suppress the intercept. Carefully check MSE, SEE, R, and standard errors.. regress gw_cost gw_output gw_fuel gw_load, noc // within effect Source SS df MS Number of obs = F( 3, 87) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.058 gw_cost Coef. Std. Err. t P> t [95% Conf. Interval] gw_output gw_fuel gw_load * You may compute group intercepts using d = g y g β ' xg. For example, the intercept of airline 5 is computed as = {.919*(-.86) +.417* (-1.073)*.566 }. In order to get the correct standard errors, you need to adjust them using the ratio of degrees of freedom of the within effect model and the LSDV. For example, the standard error of the logged output is computed as.099=.088*sqrt(87/81) Using the SAS TSCSREG and PANEL Procedures The TSCSREG and PANEL procedures of SAS/ETS allows users to fit the within effect model conveniently. The procedures, in fact, report LSDV1, but you do not need to create dummy

82 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 7 variables and compute deviations from the group means. This procedures reports correct MSE, SEE, R, and standard errors, and conducts the F test for the fixed group effect as well. PROC SORT DATA=masil.airline; BY airline year; PROC TSCSREG DATA=masil.airline; ID airline year; MODEL cost = output fuel load /FIXONE; RUN; The TSCSREG Procedure Dependent Variable: cost Model Description Estimation Method FixOne Number of Cross Sections 6 Time Series Length 15 Fit Statistics SSE 0.96 DFE 81 MSE Root MSE R-Square F Test for No Fixed Effects Num DF Den DF F Value Pr > F <.0001 Parameter Estimates Standard Variable DF Estimate Error t Value Pr > t Label CS Cross Sectional Effect 1 CS Cross Sectional Effect CS <.0001 Cross Sectional Effect 3 CS Cross Sectional Effect 4 CS Cross Sectional Effect 5 Intercept <.0001 Intercept output <.0001 fuel <.0001 load <.0001

83 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 8 Note that a data set needs to be sorted in advance by variables to appear in the ID statement of the TSCSREG and PANEL procedures. The following PANEL procedure returns the same output. PROC PANEL DATA=masil.airline; ID airline year; MODEL cost = output fuel load /FIXONE; RUN; Using STATA The STATA.xtreg command fits the within group effect model without creating dummy variables. The command reports correct standard errors and the F test for fixed group effects. This command, however, does not provide an analysis of variance (ANOVA) table and correct R and F statistics. The.xtreg command should follow the.tsset command that specifies grouping and time variables.. tsset airline year panel variable: airline, 1 to 6 time variable: year, 1 to 15 The fe of.xtreg indicates the within effect model and i(airline) specifies airline as the independent unit. Note that this command reports adjusted (correct) standard errors.. xtreg cost output fuel load, fe i(airline) // within group effect Fixed-effects (within) regression Number of obs = 90 Group variable (i): airline Number of groups = 6 R-sq: within = Obs per group: min = 15 between = avg = 15.0 overall = max = 15 F(3,81) = corr(u_i, Xb) = Prob > F = cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons sigma_u sigma_e rho (fraction of variance due to u_i) F test that all u_i=0: F(5, 81) = Prob > F = The last line of the output tests the null hypothesis that all dummy parameters in LSDV1 are zero (e.g., g1=0, g=0, g3=0, g4=0, and g5=0). Not the intercept of is that of LSDV Using LIMDEP

84 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 9 In LIMDEP, you have to specify the panel data model and stratification or time variables. The Panel$ and Fixed$ subcommands mean a fixed effect panel data model. The Str$ subcommand specifies a stratification variable. --> REGRESS;Lhs=COST;Rhs=ONE,OUTPUT,FUEL,LOAD;Panel;Str=AIRLINE;Fixed$ OLS Without Group Dummy Variables Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 4, Deg.Fr.= 86 Residuals: Sum of squares= , Std.Dev.=.1461 Fit: R-squared=.98890, Adjusted R-squared = Model test: F[ 3, 86] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -4.1, Akaike Info. Crt.= Panel Data Analysis of COST [ONE way] Unconditional ANOVA (No regressors) Source Variation Deg. Free. Mean Square Between Residual Total Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X OUTPUT E FUEL E LOAD Constant (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Least Squares with Group Dummy Variables Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 9, Deg.Fr.= 81 Residuals: Sum of squares= , Std.Dev.= Fit: R-squared= , Adjusted R-squared = Model test: F[ 8, 81] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -5.58, Akaike Info. Crt.= Estd. Autocorrelation of e(i,t) Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X OUTPUT E FUEL E LOAD (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) LIMDEP reports both the pooled OLS regression and the within effect model. Like the SAS TSCSREG procedure, LIMDEP provides correct MSE, SEE, R, and standard errors.
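As a quick cross-check of the adjustment described in the manual estimation above, compute the correction factor once and multiply each standard error from the manual within regression by it; the rescaled values should agree with the .xtreg, fe output. With nT = 90, n = 6, and k = 3:

. display sqrt((90-3)/(90-6-3))   // = sqrt(87/81), the adjustment factor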

85 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: Between Group Effect Model: Group Mean Regression The between effect model uses aggregate information, group means of variables. In other words, the unit of analysis is not an individual observation, but groups or subjects. The number of observations jumps down to n from nt. This group mean regression produces different goodness-of-fits and parameter estimates from those of LSDV and the within effect model. Let us compute group means and run the OLS regression with them. The.collapse command computes aggregate information and saves into a new data set. Note that /// links two command lines.. collapse (mean) gm_cost=cost (mean) gm_output=output (mean) gm_fuel=fuel (mean) /// gm_load=load, by(airline). regress gm_cost gm_output gm_fuel gm_load Source SS df MS Number of obs = F( 3, ) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.1585 gm_cost Coef. Std. Err. t P> t [95% Conf. Interval] gm_output gm_fuel gm_load _cons The SAS PANEL procedure has the /BTWNG and /BTWNT option to estimate the between effect model. The TSCSREG procedure does not have this option. PROC PANEL DATA=masil.airline; ID airline year; MODEL cost = output fuel load /BTWNG; RUN; The PANEL Procedure Between Groups Estimates Dependent Variable: cost Model Description Estimation Method BtwGrps Number of Cross Sections 6 Time Series Length 15 Fit Statistics SSE DFE MSE Root MSE 0.158

86 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 31 R-Square Parameter Estimates Standard Variable DF Estimate Error t Value Pr > t Label Intercept Intercept output fuel load The STATA.xtreg command has the be option to fit the between effect model. This command, however, does not report the ANOVA table.. xtreg cost output fuel load, be i(airline) Between regression (regression on group means) Number of obs = 90 Group variable (i): airline Number of groups = 6 R-sq: within = Obs per group: min = 15 between = avg = 15.0 overall = max = 15 F(3,) = sd(u_i + avg(e_i.))= Prob > F = cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons LIMDEP has the Mean; subcommand to fit the between effect model. --> REGRESS;Lhs=COST;Rhs=ONE,OUTPUT,FUEL,LOAD;Panel;Str=AIRLINE;Means$ Group Means Regression Ordinary least squares regression Weighting variable = none Dep. var. = YBAR(i.) Mean= , S.D.= Model size: Observations = 6, Parameters = 4, Deg.Fr.= Residuals: Sum of squares= e-01, Std.Dev.=.1584 Fit: R-squared= , Adjusted R-squared = Model test: F[ 3, ] = , Prob value = Diagnostic: Log-L = 7.185, Restricted(b=0) Log-L = LogAmemiyaPrCrt.= , Akaike Info. Crt.= Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X OUTPUT E-11 FUEL LOAD Constant
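One practical note on the .collapse approach above: .collapse replaces the data in memory with the aggregated data set, so wrap it in preserve and restore if you want to keep working with the original panel afterward. A minimal sketch:

. preserve
. collapse (mean) gm_cost=cost (mean) gm_output=output (mean) gm_fuel=fuel ///
     (mean) gm_load=load, by(airline)
. regress gm_cost gm_output gm_fuel gm_load
. restore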

Testing Fixed Group Effects (F-test)

How do we know whether there are fixed group effects? The null hypothesis is that all dummy parameters except one are zero: H0: μ1 = ... = μn-1 = 0. In order to conduct the F-test, take the SSE (e'e) of the pooled OLS regression and that of the LSDVs (LSDV1 through LSDV3) or the within effect model. Alternatively, you may draw the R2 of .9974 from LSDV1 or LSDV3 and .9883 from the pooled OLS. Do not, however, use LSDV2 or the within effect model for R2. The F statistic is computed as

F = [(e'e_pooled - e'e_LSDV) / (6-1)] / [e'e_LSDV / (90-6-3)]
  = [(R2_LSDV - R2_pooled) / (6-1)] / [(1 - R2_LSDV) / (90-6-3)]
  = [(.9974 - .9883) / (6-1)] / [(1 - .9974) / (90-6-3)] ~ F[5, 81].

The large F statistic rejects the null hypothesis in favor of the fixed group effect model (p<.0000). The SAS TSCSREG and PANEL procedures and the STATA .xtreg command conduct this F test by default. Alternatively, you may conduct the same test with LSDV1. In SAS, add the TEST statement to the REG procedure and run the procedure again (other outputs are skipped).

PROC REG DATA=masil.airline;
   MODEL cost = g1-g5 output fuel load;
   TEST g1 = g2 = g3 = g4 = g5 = 0;
RUN;

The REG Procedure
Model: MODEL1

   Test 1 Results for Dependent Variable cost
                        Mean
Source          DF    Square    F Value    Pr > F
Numerator                                  <.0001
Denominator

In STATA, run the .test command, a follow-up command for the Wald test, right after estimating the model.

. quietly regress cost g1-g5 output fuel load   // LSDV1
. test g1 g2 g3 g4 g5
 ( 1)  g1 = 0
 ( 2)  g2 = 0
 ( 3)  g3 = 0
 ( 4)  g4 = 0
 ( 5)  g5 = 0
       F(  5,    81) =
            Prob > F =
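A slightly more compact way to request the same joint test is .testparm, which accepts a variable range:

. testparm g1-g5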

88 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: Summary Table 6 summarizes the estimation of panel data models in SAS, STATA, and LIMDEP. The SAS REG and TSCSREG procedures are generally preferred to STATA and LIMDEP commands. Table 6 Comparison of the Fixed Effect Model in SAS, STATA, LIMDEP * SAS 9.1 STATA 9.0 LIMDEP 8.0 OLS estimation PROC REG;. regress (cnsreg) Regress$ LSDV1 Correct Correct Correct (slightly different F) LSDV Incorrect F, (adjusted) R Incorrect F, (adjusted) R Correct (slightly different F) Correct R LSDV3 Correct. cnsreg command No R, ANOVA table but F Correct (slightly different F) Different dummy coefficients Panel Estimation PROC TSCSREG;. xtreg Regress; Panel$ PROC PANEL; Estimation type LSDV1 Within and between effect Within effect SSE (e e) Correct No Correct MSE or SEE Correct (adjusted) No Correct (adjusted) SEE Model test (F) No Incorrect Slightly different F (adjusted) R Correct Incorrect Correct Intercept Correct LSDV3 intercept No Coefficients Correct Correct Correct Standard errors Correct (adjusted) Correct (adjusted) Correct (adjusted) Effect test (F) Yes Yes No Between effect Yes (PROC PANEL;) Yes (the be option) N/A * Yes/No means whether the software reports the statistics. Correct/incorrect indicates whether the statistics are different from those of the least squares dummy variable (LSDV) 1 without a dummy variable.

89 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: The Fixed Time Effect Model The fixed time effect model investigates how time affects the intercept using time dummy variables. The logic and method are the same as those of the fixed group effect model. 5.1 Least Squares Dummy Variable Models The least squares dummy variable (LSDV) model produces fifteen regression equations. This section does not present all outputs, but one or two for each LSDV approach. Time01: cost = *output -.484*fuel *load Time0: cost = *output -.484*fuel *load Time03: cost = *output -.484*fuel *load Time04: cost = *output -.484*fuel *load Time05: cost = *output -.484*fuel *load Time06: cost = *output -.484*fuel *load Time07: cost = *output -.484*fuel *load Time08: cost = *output -.484*fuel *load Time09: cost = *output -.484*fuel *load Time10: cost = *output -.484*fuel *load Time11: cost = *output -.484*fuel *load Time1: cost = *output -.484*fuel *load Time13: cost = *output -.484*fuel *load Time14: cost = *output -.484*fuel *load Time15: cost = *output -.484*fuel *load LSDV1 without a Dummy Let us begin with the SAS REG procedure. The test statement examines fixed time effects. PROC REG DATA=masil.airline; MODEL cost = t1-t14 output fuel load; RUN; The REG Procedure Model: MODEL1 Dependent Variable: cost Number of Observations Read 90 Number of Observations Used 90 Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F Model <.0001 Error Corrected Total Root MSE R-Square

90 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 35 Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 t t t t t t t t t t t t t t output <.0001 fuel load <.0001 The following are the corresponding STATA and LIMDEP commands for LSDV1 (outputs are skipped).. regress cost t1-t14 output fuel load REGRESS;Lhs=COST;Rhs=ONE,T1,T,T3,T4,T5,T6,T7,T8,T9,T10,T11,T1,T13,T14,OUTPUT,FUEL,LOAD$ 5.1. LSDV without the Intercept Let us use LIMDEP to fit LSDV because it reports correct (although slightly different) F and R statistics. --> REGRESS;Lhs=COST;Rhs=T1,T,T3,T4,T5,T6,T7,T8,T9,T10,T11,T1,T13,T14,T15,OUTPUT,FUEL,LOAD$ Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 18, Deg.Fr.= 7 Residuals: Sum of squares= , Std.Dev.=.194 Fit: R-squared= , Adjusted R-squared =.9880 Model test: F[ 17, 7] = 439.6, Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= , Akaike Info. Crt.= Model does not contain ONE. R-squared and F can be negative! Autocorrel: Durbin-Watson Statistic =.93900, Rho = Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X

91 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 36 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 T E-01 OUTPUT E FUEL LOAD (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) The following are the corresponding SAS REG procedure and STATA command for LSDV (outputs are skipped). PROC REG DATA=masil.airline; MODEL cost = t1-t15 output fuel load /NOINT; RUN;. regress cost t1-t15 output fuel load, noc LSDV3 with a Restriction In SAS, you need to use the RESTRICT statement to impose a restriction. PROC REG DATA=masil.airline; MODEL cost = t1-t15 output fuel load; RESTRICT t1+t+t3+t4+t5+t6+t7+t8+t9+t10+t11+t1+t13+t14+t15=0; RUN; The REG Procedure Model: MODEL1 Dependent Variable: cost NOTE: Restrictions have been applied to parameter estimates. Number of Observations Read 90 Number of Observations Used 90 Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F Model <.0001 Error Corrected Total

92 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 37 Root MSE R-Square Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 t t t t t t t t t t t t t t t output <.0001 fuel load <.0001 RESTRICT E * Probability computed using beta distribution. In STATA, define the restriction with the.constraint command and specify the restriction using the constraint() option of the.cnsreg command.. constraint define 3 t1+t+t3+t4+t5+t6+t7+t8+t9+t10+t11+t1+t13+t14+t15=0. cnsreg cost t1-t15 output fuel load, constraint(3) Constrained linear regression Number of obs = 90 F( 17, 7) = Prob > F = Root MSE =.194 ( 1) t1 + t + t3 + t4 + t5 + t6 + t7 + t8 + t9 + t10 + t11 + t1 + t13 + t14 + t15 = 0 cost Coef. Std. Err. t P> t [95% Conf. Interval] t t t t t t t t t t t t

93 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 38 t t t output fuel load _cons The following are the corresponding LIMDEP command for LSDV3 (outputs are skipped). REGRESS;Lhs=COST;Rhs=ONE,T1,T,T3,T4,T5,T6,T7,T8,T9,T10,T11,T1,T13,T14,T15,OUTPUT,FUEL,LOAD; Cls:b(1)+b()+b(3)+b(4)+b(5)+b(6)+b(7)+b(8)+b(9)+b(10)+b(11)+b(1)+b(13)+b(14)+b(15)=0$ 5. Within Time Effect Model The within effect mode for the fixed time effects needs to compute deviations from the time means. Keep in mind that the intercept should be suppressed Estimating the Time Effect Model Let us manually estimate the fixed time effect model first.. egen tm_cost = mean(cost), by(year) // compute time means. egen tm_output = mean(output), by(year). egen tm_fuel = mean(fuel), by(year). egen tm_load = mean(load), by(year) year tm_cost tm_output tm_fuel tm_load gen tw_cost = cost - tm_cost // transform variables. gen tw_output = output - tm_output. gen tw_fuel = fuel - tm_fuel. gen tw_load = load - tm_load. regress tw_cost tw_output tw_fuel tw_load, noc // within time effect Source SS df MS Number of obs = F( 3, 87) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.11184

94 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 39 tw_cost Coef. Std. Err. t P> t [95% Conf. Interval] tw_output tw_fuel tw_load * If you want to get intercepts of years, use d t = y t β ' x t. For example, the intercept of year 7 is 1.503= {.8677*(-1.304) + (-.4845)* ( )*.5607}. As discussed previously, the standard errors of the within effects model need to be adjusted. For instance, the correct standard error of fuel price is computed as.364 =.331*sqrt(87/7). 5.. Using the TSCSREG and PANEL procedures You need to sort the data set by variables (i.e., year and airline) to appear in the ID statement of the TSCSREG and PANEL procedures. PROC SORT DATA=masil.airline; BY year airline; PROC PANEL DATA=masil.airline; ID year airline; MODEL cost = output fuel load /FIXONE; RUN; The PANEL Procedure Fixed One Way Estimates Dependent Variable: cost Model Description Estimation Method FixOne Number of Cross Sections 15 Time Series Length 6 Fit Statistics SSE DFE 7 MSE Root MSE 0.19 R-Square F Test for No Fixed Effects Num DF Den DF F Value Pr > F Parameter Estimates Standard Variable DF Estimate Error t Value Pr > t Label

95 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 40 CS Cross Sectional Effect 1 CS Cross Sectional Effect CS Cross Sectional Effect 3 CS Cross Sectional Effect 4 CS Cross Sectional Effect 5 CS Cross Sectional Effect 6 CS Cross Sectional Effect 7 CS Cross Sectional Effect 8 CS Cross Sectional Effect 9 CS Cross Sectional Effect 10 CS Cross Sectional CS Cross Sectional Effect 1 CS Cross Sectional Effect 13 CS Cross Sectional Effect 14 Intercept <.0001 Intercept output <.0001 fuel load <.0001 The following TSCSREG procedure gives the same outputs. PROC TSCSREG DATA=masil.airline; ID year airline; MODEL cost = output fuel load /FIXONE; RUN; 5..3 Using STATA The STATA.xtreg command uses the fe option for the fixed effect model.. xtreg cost output fuel load, fe i(year) Fixed-effects (within) regression Number of obs = 90 Group variable (i): year Number of groups = 15 R-sq: within = Obs per group: min = 6 between = avg = 6.0 overall = max = 6 F(3,7) = corr(u_i, Xb) = Prob > F = cost Coef. Std. Err. t P> t [95% Conf. Interval]

96 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: output fuel load _cons sigma_u sigma_e rho (fraction of variance due to u_i) F test that all u_i=0: F(14, 7) = 1.17 Prob > F = Using LIMDEP You need to pay attention to the Str=; subcommand for stratification. --> REGRESS;Lhs=COST;Rhs=ONE,OUTPUT,FUEL,LOAD;Panel;Str=YEAR;Fixed$ OLS Without Group Dummy Variables Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 4, Deg.Fr.= 86 Residuals: Sum of squares= , Std.Dev.=.1461 Fit: R-squared=.98890, Adjusted R-squared = Model test: F[ 3, 86] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -4.1, Akaike Info. Crt.= Panel Data Analysis of COST [ONE way] Unconditional ANOVA (No regressors) Source Variation Deg. Free. Mean Square Between Residual Total Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X OUTPUT E FUEL E LOAD Constant (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Least Squares with Group Dummy Variables Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 18, Deg.Fr.= 7 Residuals: Sum of squares= , Std.Dev.=.194 Fit: R-squared= , Adjusted R-squared =.9880 Model test: F[ 17, 7] = 439.6, Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= , Akaike Info. Crt.= Estd. Autocorrelation of e(i,t) Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X

97 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 4 OUTPUT E FUEL LOAD (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Test Statistics for the Classical Model Model Log-Likelihood Sum of Squares R-squared (1) Constant term only D () Group effects only D (3) X - variables only D (4) X and group effects D Hypothesis Tests Likelihood Ratio Test F Tests Chi-squared d.f. Prob. F num. denom. Prob value () vs (1) (3) vs (1) (4) vs (1) (4) vs () (4) vs (3) Between Time Effect Model The between effect model regresses time means of dependent variables on those of independent variables. See also 3. and collapse (mean) tm_cost=cost (mean) tm_output=output (mean) tm_fuel=fuel /// (mean) tm_load=load, by(year). regress tm_cost tm_output tm_fuel tm_load // between time effect Source SS df MS Number of obs = F( 3, 11) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.054 tm_cost Coef. Std. Err. t P> t [95% Conf. Interval] tm_output tm_fuel tm_load _cons The SAS PANEL procedure has the /BTWNT option to estimate the between effect model. PROC PANEL DATA=masil.airline; ID airline year; MODEL cost = output fuel load /BTWNT; RUN; The PANEL Procedure

Between Time Periods Estimates

Dependent Variable: cost

              Model Description
Estimation Method                    BtwTime
Number of Cross Sections                   6
Time Series Length                        15

              Fit Statistics
SSE                        DFE            11
MSE                        Root MSE     0.05
R-Square

              Parameter Estimates
                        Standard
Variable    DF  Estimate    Error   t Value   Pr > |t|   Label
Intercept                                     <.0001     Intercept
output                                        <.0001
fuel                                          <.0001
load

You may use the be option in the STATA .xtreg command and the Means; subcommand in LIMDEP (outputs are skipped).

. xtreg cost output fuel load, be i(year)   // between time effect model

--> REGRESS;Lhs=COST;Rhs=ONE,OUTPUT,FUEL,LOAD;Panel;Str=YEAR;Means$

5.4 Testing Fixed Time Effects

The null hypothesis is that all time dummy parameters except one are zero: H0: τ1 = ... = τT-1 = 0. The F statistic is computed as

F = [(e'e_pooled - e'e_LSDV) / (15-1)] / [e'e_LSDV / (6*15-15-3)] ~ F[14, 72].

The p-value of .3180 does not reject the null hypothesis. The SAS TSCSREG and PANEL procedures and the STATA .xtreg command conduct the Wald test. You may get the same test using the TEST statement in LSDV1 and the STATA .test command (the output is skipped).

PROC REG DATA=masil.airline;
   MODEL cost = t1-t14 output fuel load;
   TEST t1=t2=t3=t4=t5=t6=t7=t8=t9=t10=t11=t12=t13=t14=0;
RUN;

. quietly regress cost t1-t14 output fuel load
. test t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14
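If you would rather not create t1 through t15 by hand, the .xi prefix used earlier for the airline dummies also generates the time dummies. Note that .xi omits the first year rather than the last, so the time dummy coefficients are measured against a different base year, while the slope estimates are unchanged:

. xi: regress cost i.year output fuel load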

99 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: The Fixed Group and Time Effect Model The two-way fixed model considers both group and time effects. This model thus needs two sets of group and time dummy variables. LSDV and the between effect model are not valid in this model. 6.1 Least Squares Dummy Variable Models There are four approaches to avoid the perfect multicollinearity or the dummy variable trap. You may not suppress the intercept under any circumstances. Drop one cross-section and one time-series dummy variables. Drop one cross-section dummy and impose a restriction on the time-series dummies of τ t = 0 Drop one time-series dummy and impose a restriction on the cross-section dummies of μ g = 0 Include all dummy variables and impose two restrictions on the cross-section and timeseries dummies of μ = 0 τ = 0 g t 6. LSDV1 without Two Dummies Let us first run LSDV1 using STATA.. regress cost g1-g5 t1-t14 output fuel load Source SS df MS Number of obs = F(, 67) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = cost Coef. Std. Err. t P> t [95% Conf. Interval] g g g g g t t t t t t t t t t t t t t

100 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 45 output fuel load _cons The following is the corresponding SAS REG procedure (outputs are skipped). PROC REG DATA=masil.airline; MODEL cost = g1-g5 t1-t14 output fuel load; RUN; The LIMDEP example is skipped here, since many dummy variables need to be listed in the Regress$ command. 6.3 LSDV1 + LSDV3: Dropping a Dummy and Imposing a Restriction In the second approach, you may drop either one group dummy or one time dummy. The following drops one time dummy, includes all group dummies, and imposes a restriction on group dummies. PROC REG DATA=masil.airline; MODEL cost = g1-g6 t1-t14 output fuel load; RESTRICT g1 + g + g3 + g4 + g5 + g6 = 0; RUN; The REG Procedure Model: MODEL1 Dependent Variable: cost NOTE: Restrictions have been applied to parameter estimates. Number of Observations Read 90 Number of Observations Used 90 Analysis of Variance Sum of Mean Source DF Squares Square F Value Pr > F Model <.0001 Error Corrected Total Root MSE R-Square Dependent Mean Adj R-Sq Coeff Var Parameter Estimates Parameter Standard

101 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 46 Variable DF Estimate Error t Value Pr > t Intercept <.0001 g g g <.0001 g <.0001 g g t t t t t t t t t t t t t t output <.0001 fuel load RESTRICT E * Probability computed using beta distribution. Alternatively, you may run the STATA.cnsreg command with the second constraint (output is skipped).. cnsreg cost g1-g6 t1-t14 output fuel load, constraint() The following drops one group dummy and imposes a restriction on time dummies.. cnsreg cost g1-g5 t1-t15 output fuel load, constraint(3) Constrained linear regression Number of obs = 90 F(, 67) = Prob > F = Root MSE = ( 1) t1 + t + t3 + t4 + t5 + t6 + t7 + t8 + t9 + t10 + t11 + t1 + t13 + t14 + t15 = 0 cost Coef. Std. Err. t P> t [95% Conf. Interval] g g g g g t t t t t t t t t

102 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 47 t t t t t t output fuel load _cons You may run the following SAS REG procedure to get the same result (output is skipped). PROC REG DATA=masil.airline; /* LSDV3 */ MODEL cost = g1-g5 t1-t15 output fuel load; RESTRICT t1+t+t3+t4+t5+t6+t7+t8+t9+t10+t11+t1+t13+t14+t15=0; RUN; 6.4 LSDV3 with Two Restrictions The third approach includes all group and time dummies and imposes two restrictions on group and time dummies.. cnsreg cost g1-g6 t1-t15 output fuel load, constraint( 3) Constrained linear regression Number of obs = 90 F(, 67) = Prob > F = Root MSE = ( 1) g1 + g + g3 + g4 + g5 + g6 = 0 ( ) t1 + t + t3 + t4 + t5 + t6 + t7 + t8 + t9 + t10 + t11 + t1 + t13 + t14 + t15 = 0 cost Coef. Std. Err. t P> t [95% Conf. Interval] g g g g g g t t t t t t t t t t t t t t t output fuel load _cons The following SAS REG procedure gives you the same result (output is skipped). PROC REG DATA=masil.airline;

   MODEL cost = g1-g6 t1-t15 output fuel load;
   RESTRICT g1 + g2 + g3 + g4 + g5 + g6 = 0;
   RESTRICT t1+t2+t3+t4+t5+t6+t7+t8+t9+t10+t11+t12+t13+t14+t15=0;
RUN;

6.5 Two-way Within Effect Model

The two-way within group and time effect model requires a transformation of the data set: y*it = yit - ybar_i. - ybar_.t + ybar and x*it = xit - xbar_i. - xbar_.t + xbar, where ybar_i. and xbar_i. are the group means, ybar_.t and xbar_.t are the time means, and ybar and xbar are the grand means. The following commands do this task; gm_* and tm_* are the group and time means created in 4.5 and 5.2, and m_cost, m_output, m_fuel, and m_load hold the corresponding grand means.

. gen w_cost = cost - gm_cost - tm_cost + m_cost
. gen w_output = output - gm_output - tm_output + m_output
. gen w_fuel = fuel - gm_fuel - tm_fuel + m_fuel
. gen w_load = load - gm_load - tm_load + m_load

. tabstat cost output fuel load, stat(mean)

   stats |      cost    output      fuel      load
---------+----------------------------------------
    mean |

Now, run the OLS with the transformed variables. Do not forget to suppress the intercept.

. regress w_cost w_output w_fuel w_load, noc   // within effect

      Source |       SS       df       MS            Number of obs =
-------------+------------------------------         F(  3,    87) =
       Model |                                       Prob > F      =
    Residual |                                       R-squared     =
-------------+------------------------------         Adj R-squared =
       Total |                                       Root MSE      =

      w_cost |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    w_output |
      w_fuel |
      w_load |

Note again that R2, MSE, standard errors, and the error degrees of freedom are not correct. The dummy variable coefficients are computed as d*_g = (ybar_g - ybar) - b'(xbar_g - xbar) and d*_t = (ybar_t - ybar) - b'(xbar_t - xbar). The standard errors also need to be adjusted; each standard error from this regression is multiplied by sqrt(87/67), the square root of the ratio of the two error degrees of freedom.

6.6 Using the TSCSREG and PANEL Procedures

The SAS TSCSREG and PANEL procedures have the /FIXTWO option to fit the two-way fixed effect model.

PROC TSCSREG DATA=masil.airline;
   ID airline year;
   MODEL cost = output fuel load /FIXTWO;
RUN;

104 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 49 The TSCSREG Procedure Dependent Variable: cost Model Description Estimation Method FixTwo Number of Cross Sections 6 Time Series Length 15 Fit Statistics SSE DFE 67 MSE Root MSE R-Square F Test for No Fixed Effects Num DF Den DF F Value Pr > F <.0001 Parameter Estimates Standard Variable DF Estimate Error t Value Pr > t Label CS Cross Sectional Effect 1 CS Cross Sectional Effect CS Cross Sectional Effect 3 CS <.0001 Cross Sectional Effect 4 CS Cross Sectional Effect 5 TS Time Series Effect 1 TS Time Series Effect TS Time Series Effect 3 TS Time Series Effect 4 TS Time Series Effect 5 TS Time Series Effect 6 TS Time Series Effect 7 TS Time Series Effect 8

TS9                                              Time Series Effect 9
TS10                                             Time Series Effect 10
TS11                                             Time Series Effect 11
TS12                                             Time Series Effect 12
TS13                                             Time Series Effect 13
TS14                                             Time Series Effect 14
Intercept                              <.0001    Intercept
output                                 <.0001
fuel
load

The STATA .xtreg command does not fit the two-way fixed or random effect model. The following LIMDEP command fits the two-way fixed model. Note that this command has the Str$ and Period$ specifications to name the stratification and time variables. This command presents the pooled model and the one-way group effect model as well, but reports an incorrect intercept in the two-way fixed model.

REGRESS;Lhs=COST;Rhs=ONE,OUTPUT,FUEL,LOAD;Panel;Str=AIRLINE;Period=YEAR;Fixed$

6.7 Testing Fixed Group and Time Effects

The null hypothesis is that the parameters of the group and time dummies are all zero: H0: μ1 = ... = μn-1 = 0 and τ1 = ... = τT-1 = 0. The F test compares the pooled regression and the two-way group and time effect model:

F = [(e'e_pooled - e'e_two-way) / (6-1+15-1)] / [e'e_two-way / (6*15-6-15+1-3)] ~ F[19, 67].

The F statistic rejects the null hypothesis at the .01 significance level (p<.0000). The SAS TSCSREG and PANEL procedures conduct the F-test for the group and time effects. You may also run the following SAS REG procedure and .regress command to perform the same test.

PROC REG DATA=masil.airline;
   MODEL cost = g1-g5 t1-t14 output fuel load;
   TEST g1=g2=g3=g4=g5=t1=t2=t3=t4=t5=t6=t7=t8=t9=t10=t11=t12=t13=t14=0;
RUN;

. quietly regress cost g1-g5 t1-t14 output fuel load
. test g1 g2 g3 g4 g5 t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14
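Although .xtreg has no two-way option, a common workaround is to combine the one-way within estimator for groups with explicit time dummies; this should reproduce the slope estimates of the two-way LSDV above. A minimal sketch:

. xi: xtreg cost i.year output fuel load, fe i(airline)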

106 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: Random Effect Models The random effects model examines how group and/or time affect error variances. This model is appropriate for n individuals who were drawn randomly from a large population. This chapter focuses on the feasible generalized least squares (FGLS) with variance component estimation methods from Baltagi and Chang (1994), Fuller and Battese (1974), and Wansbeek and Kapteyn (1989) The One-way Random Group Effect Model When the omega matrix is not known, you have to estimateθ using the SSEs of the pooled model (.0317) and the fixed effect model (.96). ˆ ε The variance component of errorσ is =.9687/(6*15-6-3) The variance component of group ˆ σ u is = /(6-4) /15 Thus, θˆ is = * /(6-4) Now, transform the dependent and independent variables including the intercept.. gen rg_cost = cost *gm_cost // transform variables. gen rg_output = output *gm_output. gen rg_fuel = fuel *gm_fuel. gen rg_load = load *gm_load. gen rg_int = // for the intercept Finally, run the OLS with the transformed variables. Do not forget to suppress the intercept. This is the groupwise heteroscedastic regression model (Greene 003).. regress rg_cost rg_int rg_output rg_fuel rg_load, noc Source SS df MS Number of obs = F( 4, 86) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = rg_cost Coef. Std. Err. t P> t [95% Conf. Interval] rg_int Baltagi and Cheng (1994) introduce various ANOVA estimation methods, such as a modified Wallace and Hussain method, the Wansbeek and Kapteyn method, the Swamy and Arora method, and Henderson s method III. They also discuss maximum likelihood (ML) estimators, restricted ML estimators, minimum norm quadratic unbiased estimators (MINQUE), and minimum variance quadratic unbiased estimators (MIVQUE). Based on a Monte Carlo simulation, they argue that ANOVA estimators are Best Quadratic Unbiased estimators of the variance components for the balanced model, whereas ML, restricted ML, MINQUE, and MIVQUE are recommended for the unbalanced models.

107 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 5 rg_output rg_fuel rg_load Estimations in SAS, STATA, and LIMDEP The SAS TSCSREG and PANEL procedures have the /RANONE option to fit the one-way random effect model. These procedures by default use the Fuller and Battese (1974) estimation method, which produces slightly different estimates from FGLS. PROC TSCSREG DATA=masil.airline; ID airline year; MODEL cost = output fuel load /RANONE; RUN; The TSCSREG Procedure Dependent Variable: cost Model Description Estimation Method RanOne Number of Cross Sections 6 Time Series Length 15 Fit Statistics SSE DFE 86 MSE Root MSE R-Square Variance Component Estimates Variance Component for Cross Sections Variance Component for Error Hausman Test for Random Effects DF m Value Pr > m Parameter Estimates Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 output <.0001

108 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 53 fuel <.0001 load <.0001 The PANEL procedure has the /VCOMP=WK option for the Wansbeek and Kapteyn (1989) method, which is close to groupwise heteroscedastic regression. The BP option of the MODEL statement, not available in the TSCSREG procedure, conducts the Breusch-Pagen LM test for random effects. Note that two procedures estimate the same variance component for error (.0036) but a different variance component for groups (.018 versus.0160), PROC PANEL DATA=masil.airline; ID airline year; MODEL cost = output fuel load /RANONE BP VCOMP=WK; RUN; The PANEL Procedure Wansbeek and Kapteyn Variance Components (RanOne) Dependent Variable: cost Model Description Estimation Method RanOne Number of Cross Sections 6 Time Series Length 15 Fit Statistics SSE DFE 86 MSE Root MSE R-Square Variance Component Estimates Variance Component for Cross Sections Variance Component for Error Hausman Test for Random Effects DF m Value Pr > m Breusch Pagan Test for Random Effects (One Way) DF m Value Pr > m <.0001 Parameter Estimates

109 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 54 Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 output <.0001 fuel <.0001 load <.0001 The STATA.xtreg command has the re option to produce FGLS estimates. The.iis command specifies the panel identification variable, such as a grouping or cross-section variable that is used in the i() option.. iis airline. xtreg cost output fuel load, re i(airline) theta Random-effects GLS regression Number of obs = 90 Group variable (i): airline Number of groups = 6 R-sq: within = Obs per group: min = 15 between = avg = 15.0 overall = max = 15 Random effects u_i ~ Gaussian Wald chi(3) = corr(u_i, X) = 0 (assumed) Prob > chi = theta = cost Coef. Std. Err. z P> z [95% Conf. Interval] output fuel load _cons sigma_u sigma_e rho (fraction of variance due to u_i) The theta option reports the estimated theta (.8767). The sigma_u and sigma_e are square roots of the variance components for groups and errors (.0036=.0601^). In LIMDEP, you have to specify Panel$ and Het$ subcommands for the groupwise heteroscedastic model. Note that LIMDEP presents the pooled OLS regression and least square dummy variable model as well. --> REGRESS;Lhs=COST;Rhs=ONE,OUTPUT,FUEL,LOAD;Panel;Str=AIRLINE;Het=AIRLINE$ OLS Without Group Dummy Variables Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 4, Deg.Fr.= 86 Residuals: Sum of squares= , Std.Dev.=.1461 Fit: R-squared=.98890, Adjusted R-squared = Model test: F[ 3, 86] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -4.1, Akaike Info. Crt.= -1.84

110 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 55 Panel Data Analysis of COST [ONE way] Unconditional ANOVA (No regressors) Source Variation Deg. Free. Mean Square Between Residual Total Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X OUTPUT E FUEL E LOAD Constant (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Least Squares with Group Dummy Variables Ordinary least squares regression Weighting variable = none Dep. var. = COST Mean= , S.D.= Model size: Observations = 90, Parameters = 9, Deg.Fr.= 81 Residuals: Sum of squares= , Std.Dev.= Fit: R-squared= , Adjusted R-squared = Model test: F[ 8, 81] = , Prob value = Diagnostic: Log-L = , Restricted(b=0) Log-L = LogAmemiyaPrCrt.= -5.58, Akaike Info. Crt.= Estd. Autocorrelation of e(i,t) White/Hetero. corrected covariance matrix used Variable Coefficient Standard Error t-ratio P[ T >t] Mean of X OUTPUT E FUEL E LOAD (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Test Statistics for the Classical Model Model Log-Likelihood Sum of Squares R-squared (1) Constant term only D () Group effects only D (3) X - variables only D (4) X and group effects D Hypothesis Tests Likelihood Ratio Test F Tests Chi-squared d.f. Prob. F num. denom. Prob value () vs (1) (3) vs (1) (4) vs (1) (4) vs () (4) vs (3) Error: 45: REGR;PANEL. Could not invert VC matrix for Hausman test.

111 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: Random Effects Model: v(i,t) = e(i,t) + u(i) Estimates: Var[e] =.36160D-0 Var[u] = D-01 Corr[v(i,t),v(i,s)] = Lagrange Multiplier Test vs. Model (3) = ( 1 df, prob value = ) (High values of LM favor FEM/REM over CR model.) Fixed vs. Random Effects (Hausman) =.00 ( 3 df, prob value = ) (High (low) values of H favor FEM (REM).) Reestimated using GLS coefficients: Estimates: Var[e] =.36491D-0 Var[u] =.39309D-01 Var[e] above is an average. Groupwise heteroscedasticity model was estimated. Sum of Squares D Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X OUTPUT E FUEL E LOAD Constant (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Like SAS TSCSREG and PANEL procedures, LIMDEP estimates a slightly different variance component for groups (.0119), thus producing different parameter estimates. In addition, the Hausman test is not successful in this example. 7.3 The One-way Random Time Effect Model Let us computeθˆ using the SSEs of the between effect model (.0056) and the fixed effect model (1.088). The variance component for error ˆ σ ε is = /(15*6-15-3) The variance component for time ˆ σ v is = /(15-4) /6 Theθˆ is = * /(15-4). gen rt_cost = cost - (-1.663)*tm_cost // transform variables. gen rt_output = output - (-1.663)*tm_output. gen rt_fuel = fuel - (-1.663)*tm_fuel. gen rt_load = load - (-1.663)*tm_load. gen rt_int = 1 - (-1.663) // for the intercept. regress rt_cost rt_int rt_output rt_fuel rt_load, noc

112 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 57 Source SS df MS Number of obs = F( 4, 86) =. Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = rt_cost Coef. Std. Err. t P> t [95% Conf. Interval] rt_int rt_output rt_fuel rt_load However, the negative value of the variance component for time is not likely. This section presents examples of procedures and commands for the one-way time random effect model without outputs. In SAS, use the TSCSREG or PANEL procedure with the /RANONE option. PROC SORT DATA=masil.airline; BY year airline; PROC TSCSREG DATA=masil.airline; ID year airline; MODEL cost = output fuel load /RANONE; RUN; PROC PANEL DATA=masil.airline; ID year airline; MODEL cost = output fuel load /RANONE BP; RUN; In STATA, you have to switch the grouping and time variables using the.tsset command.. tsset year airline panel variable: year, 1 to 15 time variable: airline, 1 to 6. xtreg cost output fuel load, re i(year) theta In LIMDEP, you need to use the Period$ and Random$ subcommands. REGRESS;Lhs=COST;Rhs=ONE,OUTPUT,FUEL,LOAD;Panel;Pds=15;Het=YEAR$ 7.4 The Two-way Random Effect Model in SAS The random group and time effect model is formulated as yit = α + β ' X ti + ui + γ t + ε it. Let us first estimate the two way FGLS using the SAS PANEL procedure with the /RANTWO option. The BP option conducts the Breusch-Pagan LM test for the two-way random effect model. PROC PANEL DATA=masil.airline; ID airline year; MODEL cost = output fuel load /RANTWO BP;

113 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 58 RUN; The PANEL Procedure Fuller and Battese Variance Components (RanTwo) Dependent Variable: cost Model Description Estimation Method RanTwo Number of Cross Sections 6 Time Series Length 15 Fit Statistics SSE 0.3 DFE 86 MSE Root MSE R-Square Variance Component Estimates Variance Component for Cross Sections Variance Component for Time Series Variance Component for Error Hausman Test for Random Effects DF m Value Pr > m Breusch Pagan Test for Random Effects (Two Way) DF m Value Pr > m <.0001 Parameter Estimates Standard Variable DF Estimate Error t Value Pr > t Intercept <.0001 output <.0001 fuel <.0001 load <.0001 Similarly, you may run the TSCSREG procedure with the /RANTWO option.

PROC TSCSREG DATA=masil.airline;
   ID airline year;
   MODEL cost = output fuel load /RANTWO;
RUN;

7.5 Testing Random Effect Models

The Breusch-Pagan Lagrange multiplier (LM) test is designed to test random effects. The null hypothesis of the one-way random group effect model is that the variance component for groups is zero: H0: σ_u^2 = 0. If the null hypothesis is not rejected, the pooled regression model is appropriate. The LM statistic is computed from the residuals e_it of the pooled OLS regression as

LM = [nT / (2(T-1))] * [Σi(Σt e_it)^2 / ΣiΣt e_it^2 - 1]^2 = [6*15 / (2*(15-1))] * [Σi(Σt e_it)^2 / e'e - 1]^2 ~ χ2(1).

With the large chi-squared, we reject the null hypothesis in favor of the random group effect model. The SAS PANEL procedure with the /BP option and the LIMDEP Panel$ and Het$ subcommands report the LM statistic. In STATA, run the .xttest0 command right after estimating the one-way random effect model.

. quietly xtreg cost output fuel load, re i(airline)
. xttest0

Breusch and Pagan Lagrangian multiplier test for random effects:
        cost[airline,t] = Xb + u[airline] + e[airline,t]

        Estimated results:
                 |       Var     sd = sqrt(Var)
        ---------+-----------------------------
            cost |
               e |
               u |

        Test:   Var(u) = 0
                chi2(1) =
                Prob > chi2 =

The null hypothesis of the one-way random time effect is that the variance component for time is zero, H0: σ_v^2 = 0. The following LM test uses Baltagi's formula. The small chi-squared does not reject the null hypothesis at the .01 level.

LM = [nT / (2(n-1))] * [Σt(Σi e_it)^2 / ΣtΣi e_it^2 - 1]^2 = [15*6 / (2*(6-1))] * [Σt(Σi e_it)^2 / e'e - 1]^2 ~ χ2(1), with p < .135.

. quietly xtreg cost output fuel load, re i(year)
. xttest0

Breusch and Pagan Lagrangian multiplier test for random effects:

115 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 60 cost[year,t] = Xb + u[year] + e[year,t] Estimated results: Var sd = sqrt(var) cost e u 0 0 Test: Var(u) = 0 chi(1) = 1.55 Prob > chi = The two way random effects model has the null hypothesis that variance components for groups and time are all zero. The LM statistic with two degrees of freedom is = (p<.0001). 7.6 Fixed Effects versus Random Effects How do we compare a fixed effect model and its counterpart random effect model? The Hausman specification test examines if the individual effects are uncorrelated with the other regressors in the model. Since computation is complicated, let us conduct the test in STATA.. tsset airline year panel variable: airline, 1 to 6 time variable: year, 1 to 15. quietly xtreg cost output fuel load, fe. estimates store fixed_group. quietly xtreg cost output fuel load, re. hausman fixed_group Coefficients ---- (b) (B) (b-b) sqrt(diag(v_b-v_b)) fix_group. Difference S.E output fuel load b = consistent under Ho and Ha; obtained from xtreg B = inconsistent under Ha, efficient under Ho; obtained from xtreg Test: Ho: difference in coefficients not systematic chi(3) = (b-b)'[(v_b-v_b)^(-1)](b-b) =.1 Prob>chi = (V_b-V_B is not positive definite) The Hausman statistic.1 is different from the PANEL procedure s 1.63 and Greene (003) s It is because SAS, STATA, and LIMDEP use different estimation methods to produce slightly different parameter estimates. These tests, however, do not reject the null hypothesis in favor of the random effect model.
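When .hausman warns that (V_b-V_B) is not positive definite, as in the output above, a commonly used remedy is the sigmamore option, which bases both covariance matrices on the error variance estimated under the efficient (random effect) model. A minimal sketch, reusing the stored fixed_group results:

. hausman fixed_group, sigmamore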

116 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: Summary Table 7 summarizes random effect estimations in SAS, STATA, and LIMDEP. The SAS PANEL procedure is highly recommended. Table 7 Comparison of the Random Effect Model in SAS, STATA, LIMDEP * SAS 9.1 STATA 9.0 LIMDEP 8.0 Procedure/Command PROC TSCSREG PROC PANEL. xtreg Regress; Panel$ One-way /RANONE /RANONE WK re Str=;Pds=;Het;Random$ Two-way /RANTWO /RANTWO No Problematic SSE (e e) Slightly different Correct No No MSE or SEE Slightly different Correct No No Model test (F) No No Wald test No (adjusted) R Slightly different Slightly different Incorrect No Intercept Slightly different Correct Correct Slightly different Coefficients Slightly different Correct Correct Slightly different Standard errors Slightly different Correct Correct Slightly different Variance for group Slightly different Correct Correct (sigma) Slightly different Variance for error Correct Correct Correct (sigma) Correct Theta No No theta No Breusch-Pagan (LM) No BP option. xttest0 Yes Hausman Test (H) Incorrect Yes. hausman Yes (unstable) * Yes/No means whether the software reports the statistics. Correct/incorrect indicates whether the statistics are different from those of the groupwise heteroscedastic regression.
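The Theta row of Table 7 shows that only the STATA .xtreg command reports theta directly. If you need it elsewhere, it can be recovered from the variance components; a minimal sketch using the results that .xtreg saves and the balanced-panel formula theta = 1 - sqrt(σ_e^2 / (T*σ_u^2 + σ_e^2)) with T = 15:

. quietly xtreg cost output fuel load, re i(airline)
. display 1 - sqrt(e(sigma_e)^2 / (15*e(sigma_u)^2 + e(sigma_e)^2))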

117 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 6 8. The Poolability Test In order to conduct the poolability test, you need to run group by group OLS regressions and/or time by time OLS regressions. If the null hypothesis is rejected, the panel data are not poolable. In this case, you may consider the random coefficient model and hierarchical regression model. 8.1 Group by Group OLS Regression In SAS, use the BY statement in the REG procedure. Do not forget to sort the data set in advance. PROC SORT DATA=masil.airline; BY airline; PROC REG DATA=masil.airline; MODEL cost = output fuel load; BY airline; RUN; In STATA, the if qualifier makes it easy to run group by group regressions.. forvalues i= 1(1)6 { // run group by group regression display "OLS regression for group " `i' regress cost output fuel load if airline==`i' } OLS regression for group 1 Source SS df MS Number of obs = F( 3, 11) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.0486 cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons OLS regression for group Source SS df MS Number of obs = F( 3, 11) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.066 cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons

118 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 63 OLS regression for group 3 Source SS df MS Number of obs = F( 3, 11) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.0456 cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons OLS regression for group 4 Source SS df MS Number of obs = F( 3, 11) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE =.0561 cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons OLS regression for group 5 Source SS df MS Number of obs = F( 3, 11) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load _cons OLS regression for group 6 Source SS df MS Number of obs = F( 3, 11) = Model Prob > F = Residual R-squared = Adj R-squared = Total Root MSE = cost Coef. Std. Err. t P> t [95% Conf. Interval] output fuel load

       _cons

8.2 Poolability Test across Groups

The null hypothesis of the poolability test across groups is H0: β_ik = β_k. The e'e is the SSE of the pooled OLS regression, and Σe_i'e_i = .1007 is the sum of the SSEs from the six group-by-group regressions in 8.1. Thus, the F statistic is

F = [(e'e - Σe_i'e_i) / ((6-1)(3+1))] / [Σe_i'e_i / (6*(15-4))] ~ F[20, 66].

The large F statistic rejects the null hypothesis of poolability (p<.0000). We conclude that the panel data are not poolable with respect to group.

8.3 Poolability Test over Time

The null hypothesis of the poolability test over time is H0: β_tk = β_k; the test is computed from the 15 time-by-time regressions. Σe_t'e_t is the sum of the SSEs from those 15 regressions. The F statistic is

F = [(e'e - Σe_t'e_t) / ((15-1)(3+1))] / [Σe_t'e_t / (15*(6-4))] = .4175.

The small F statistic does not reject the null hypothesis in favor of poolable panel data with respect to time (p<.9991).
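Adding up the individual SSEs by hand is error-prone; a small loop can accumulate them instead. A minimal sketch for the group-by-group case (e(rss) is the residual sum of squares that .regress saves); the time-by-time version loops over year in the same way:

. scalar sum_sse = 0
. forvalues i = 1(1)6 {
    quietly regress cost output fuel load if airline==`i'
    scalar sum_sse = sum_sse + e(rss)
  }
. display sum_sse   // this is Σe_i'e_i for the poolability test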

9. Conclusion

Panel data models investigate group and time effects using fixed effect and random effect models. The fixed effect models ask how group and/or time affect the intercept, while the random effect models analyze error variance structures affected by group and/or time. Slopes are assumed unchanged in both fixed effect and random effect models.

Fixed effect models are estimated by least squares dummy variable (LSDV) regression, the within effect model, and the between effect model. LSDV has three approaches to avoid perfect multicollinearity: LSDV1 drops a dummy, LSDV2 suppresses the intercept, and LSDV3 includes all dummies and imposes restrictions instead. LSDV1 is commonly used since it produces correct statistics. LSDV2 provides actual parameter estimates of the group intercepts, but reports incorrect R2 and F statistics. Note that the dummy parameters of the three LSDV approaches have different meanings and thus different t-tests.

The within effect model does not use dummy variables but deviations from the group means. This model is therefore useful when there are many groups and/or time periods in the panel data set (no incidental parameter problem at all). The dummy parameter estimates need to be computed afterward. Because of its larger error degrees of freedom, the within effect model produces an incorrect (too small) MSE and standard errors of parameters. As a result, you need to adjust the standard errors to conduct correct t-tests.

Random effect models are estimated by generalized least squares (GLS) and feasible generalized least squares (FGLS). When the variance structure is known, GLS is used; if it is unknown, FGLS estimates theta. Parameter estimates may vary depending on the estimation method. Fixed effects are tested by the F-test and random effects by the Breusch-Pagan Lagrange multiplier test. The Hausman specification test compares a fixed effect model and a random effect model; if the null hypothesis of no correlation between the individual effects and the regressors is rejected, the fixed effect model is preferred. Poolability is tested by running group by group or time by time regressions.

Among the four statistical packages addressed in this document, I would recommend SAS and STATA. In particular, the SAS PANEL procedure, although experimental now, provides various ways of analyzing panel data. STATA is very handy for manipulating panel data, but it does not fit two-way effect models. LIMDEP is able to estimate various panel data models, but it is not stable enough. SPSS is not recommended for panel data models.

121 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 66 APPENDIX: Data sets Data set 1: Data of the top 50 information technology firms presented in OECD Information Technology Outlook 004 ( firm = IT company name type = type of IT firm rnd = 00 R&D investment in current USD millions income = 000 net income in current USD millions d1 = 1 for equipment and software firms and 0 for telecommunication and electronics. tab type d1 d1 Type of Firm 0 1 Total Telecom Electronics IT Equipment Comm. Equipment Service & S/W Total sum rnd income Variable Obs Mean Std. Dev. Min Max rnd income Data set : Cost data for U.S. airlines ( ) presented in Greene (003). URL: airline = airline (six airlines) year = year (fifteen years) output0 = output in revenue passenger miles, index number cost0 = total cost in $1,000 fuel0 = fuel price load = load factor, the average capacity utilization of the fleet. tsset panel variable: airline, 1 to 6 time variable: year, 1 to 15. sum output0 cost0 fuel0 load Variable Obs Mean Std. Dev. Min Max output cost fuel load

122 005 The Trustees of Indiana University (1/10/005) Linear Regression Model for Panel Data: 67 References Baltagi, Badi H Econometric Analysis of Panel Data. Wiley, John & Sons. Baltagi, Badi H., and Young-Jae Chang "Incomplete Panels: A Comparative Study of Alternative Estimators for the Unbalanced One-way Error Component Regression Model." Journal of Econometrics, 6(): Breusch, T. S., and A. R. Pagan "The Lagrange Multiplier Test and its Applications to Model Specification in Econometrics." Review of Economic Studies, 47(1): Fox, John Applied Regression Analysis, Linear Models, and Related Methods. Newbury Park, CA: Sage. Freund, Rudolf J., and Ramon C. Littell SAS System for Regression, 3 rd ed. Cary, NC: SAS Institute. Fuller, Wayne A. and George E. Battese "Transformations for Estimation of Linear Models with Nested-Error Structure." Journal of the American Statistical Association, 68(343) (September): Fuller, Wayne A. and George E. Battese "Estimation of Linear Models with Crossed- Error Structure." Journal of Econometrics, : Greene, William H. 00. LIMDEP Version 8.0 Econometric Modeling Guide, 4th ed. Plainview, New York: Econometric Software. Greene, William H Econometric Analysis, 5th ed. Upper Saddle River, NJ: Prentice Hall. Hausman, J. A "Specification Tests in Econometrics." Econometrica, 46(6): SAS Institute SAS/ETS 9.1 User s Guide. Cary, NC: SAS Institute. SAS Institute SAS/STAT 9.1 User s Guide. Cary, NC: SAS Institute. STATA Press STATA Base Reference Manual, Release 9. College Station, TX: STATA Press. STATA Press STATA Longitudinal/Panel Data Reference Manual, Release 9. College Station, TX: STATA Press. STATA Press STATA Time-Series Reference Manual, Release 9. College Station, TX: STATA Press. Wooldridge, Jeffrey M. 00. Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press. Acknowledgements I have to thank Dr. Heejoon Kang in the Kelley School of Business and Dr. David H. Good in the School of Public and Environmental Affairs, Indiana University at Bloomington, for their insightful lectures. I am also grateful to Jeremy Albright and Kevin Wilhite at the UITS Center for Statistical and Mathematical Computing for comments and suggestions. Revision History First draft

Categorical Dependent Variable Models Using SAS, STATA, LIMDEP, and SPSS Hun Myoung Park This document summarizes regression models for categorical dependent variables and illustrates how to estimate individual models using SAS 9.1, STATA 9.0, LIMDEP 8.0, and SPSS 13.0. 1. Introduction 2. The Binary Logit Model 3. The Binary Probit Model 4. Bivariate Logit/Probit Models 5. Ordered Logit/Probit Models 6. The Multinomial Logit Model 7. The Conditional Logit Model 8. The Nested Logit Model 9. Conclusion 10. Appendix 1. Introduction The categorical variable here refers to a variable that is binary, ordinal, or nominal. Event count data are discrete (categorical) but often treated as continuous. When the dependent variable is categorical, the ordinary least squares (OLS) method can no longer produce the best linear unbiased estimator (BLUE); that is, OLS is biased and inefficient. Consequently, researchers have developed various categorical dependent variable models (CDVMs). The nonlinearity of CDVMs makes it difficult to interpret outputs, since the effect of a change in a variable depends on the values of all other variables in the model (Long 1997). 1.1 Categorical Dependent Variable Models In CDVMs, the left-hand side (LHS) variable, or dependent variable, is neither interval nor ratio, but rather categorical. The level of measurement and data generation process (DGP) of a dependent variable determine the proper type of CDVM. Thus, binary responses are modeled with the binary logit and probit regressions, ordinal responses are formulated into the ordered logit/probit regression models, and nominal responses are analyzed by multinomial logit, conditional logit, or nested logit models. Independent variables on the right-hand side (RHS) may be interval, ratio, or binary (dummy). The CDVMs adopt the maximum likelihood (ML) estimation method, whereas OLS uses the moment based method. The ML method requires assumptions about probability distribution functions, such as the logistic function and the complementary log-log function. Logit models use the standard logistic probability distribution, while probit models assume the standard normal distribution.

This document focuses on logit and probit models only. Table 1 summarizes CDVMs in comparison with OLS. Table 1. Ordinary Least Squares and CDVMs Model / Dependent (LHS) / Estimation OLS (ordinary least squares): Interval or ratio / Moment based method CDVMs, binary response: Binary (0 or 1) / Maximum likelihood method CDVMs, ordinal response: Ordinal (1st, 2nd, 3rd, ...) / Maximum likelihood method CDVMs, nominal response: Nominal (A, B, C, ...) / Maximum likelihood method CDVMs, event count data: Count (0, 1, 2, 3, ...) / Maximum likelihood method In both cases the independent (RHS) variables enter as a linear function of interval/ratio or binary variables, β0 + β1X1 + β2X2 ... 1.2 Logit Models versus Probit Models How do logit models differ from probit models? The core difference lies in the distribution of the errors. In the logit model, the errors are assumed to follow the standard logistic distribution, with density λ(ε) = exp(ε) / [1 + exp(ε)]², mean 0, and variance π²/3. The errors of the probit model are assumed to follow the standard normal distribution, φ(ε) = [1/√(2π)] exp(−ε²/2). Figure 1. Comparison of the Standard Normal and Standard Logistic Probability Distributions (four panels: the PDF and CDF of the standard normal distribution, and the PDF and CDF of the standard logistic distribution) The probability density function (PDF) of the standard normal probability distribution has a higher peak and thinner tails than the standard logistic probability distribution (Figure 1).

The standard logistic distribution looks as if someone has weighed down the peak of the standard normal distribution and stretched out its tails. As a result, the cumulative distribution function (CDF) of the standard normal distribution is steeper in the middle than the CDF of the standard logistic distribution and more quickly approaches zero on the left and one on the right. The two models, of course, produce different parameter estimates. In binary response models, the estimates of a logit model are roughly π/√3 (about 1.8) times larger than those of the corresponding probit model. These estimators, however, are almost the same in terms of the standardized impacts of independent variables and predictions (Long 1997). In general, logit models reach convergence in estimation fairly well. Some (multinomial) probit models may take a long time to reach convergence, although the probit works well for bivariate models. 1.3 Estimation in SAS, STATA, LIMDEP, and SPSS SAS provides several procedures for CDVMs, such as LOGISTIC, PROBIT, GENMOD, QLIM, MDC, and CATMOD. Since these procedures support various models, a CDVM can be estimated by multiple procedures. For example, you may run a binary logit model using the LOGISTIC, PROBIT, GENMOD, and QLIM procedures. The LOGISTIC and PROBIT procedures of SAS/STAT have been commonly used, but the QLIM and MDC procedures of SAS/ETS are noted for their advanced features. Table 2. Procedures and Commands for CDVMs Model / SAS 9.1 / STATA 9.0 / LIMDEP 8.0 / SPSS 13.0 OLS (ordinary least squares): REG / .regress / Regress$ / Regression Binary logit: QLIM, GENMOD, LOGISTIC, PROBIT, CATMOD / .logit, .logistic / Logit$ / Logistic regression Binary probit: QLIM, GENMOD, LOGISTIC, PROBIT / .probit / Probit$ / Probit Bivariate logit: QLIM / - / - / - Bivariate probit: QLIM / .biprobit / Bivariateprobit$ / - Ordered logit: QLIM, PROBIT, LOGISTIC / .ologit / Ordered$, Logit$ / Plum Generalized logit: - / .gologit * / - / - Ordered probit: QLIM, PROBIT, LOGISTIC / .oprobit / Ordered$ / Plum Multinomial logit: CATMOD / .mlogit / Mlogit$, Logit$ / Nomreg Conditional logit: MDC, PHREG / .clogit / Clogit$, Logit$ / Coxreg Nested logit: MDC / .nlogit / Nlogit$ ** / - Multinomial probit: MDC / .mprobit / - / - * User-written commands written by Fu (1998) and Williams (2005) ** The Nlogit$ command is supported by NLOGIT 3.0, which is sold separately. The QLIM (Qualitative and LImited dependent variable Model) procedure analyzes various categorical and limited dependent variable regression models such as censored, truncated, and sample-selection models. This QLIM procedure also handles Box-Cox regression and bivariate

126 , The Trustees of Indiana University Categorical Dependent Variable Models: 4 probit and logit models. The MDC (Multinomial Discrete Choice) Procedure can estimate multinomial probit, conditional logit, and nested (multinomial) logit models. Unlike SAS, STATA has individualized commands for corresponding CDVMs. For example, the.logit and.probit commands respectively fit the binary logit and probit models. The LIMDEP Logit$ and Probit$ commands support a variety of CDVMs that are addressed in Greene s Econometric Analysis (003). SPSS supports some related commands for CDVMs but has limited ability to analyze categorical data. Because of its limitation, SPSS outputs are skipped here. Table summarizes the procedures and commands for CDVMs Long and Freese s SPost Module STATA users may take advantages of user-written modules such as J. Scott Long and Jeremy Freese s SPost. The module allows researchers to conduct follow-up analyses of various CDVMs including event count data models. See section. for major SPost commands. In order to install SPost, execute the following commands consecutively. For more details, visit J. Scott Long s Web site at net from net install spost9_ado, replace. net get spost9_do, replace If you want to use Vincent Kang Fu s gologit (000) and Richard Williams gologit (005) for the generalized ordered logit model, type in the following.. net search gologit. net install gologit from( net install gologit from(
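After installation you may verify that the SPost ado-files are found on the ado-path with STATA's .which command (a quick check, not part of the installation steps above).

. which fitstat
. which prchange

Each command reports the location and version of the corresponding ado-file; an error message indicates that the module was not installed correctly.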

2. The Binary Logit Regression Model The binary logit model is represented as Prob(y = 1 | x) = exp(xβ) / [1 + exp(xβ)] = Λ(xβ), where Λ indicates the link function, the cumulative standard logistic probability distribution function. This chapter examines how car ownership (owncar) is affected by monthly income (income), age, and gender (male). See the appendix for details about the data set. 2.1 Binary Logit in STATA (.logit) STATA provides two equivalent commands for the binary logit model, which present the same result in different ways. The .logit command produces coefficients with respect to the logit (log of odds), while .logistic reports estimates as odds ratios. . logistic owncar income age male Logistic regression Number of obs = 437 LR chi2(3) = 18.4 Prob > chi2 = Log likelihood = Pseudo R2 = 0.03 owncar Odds Ratio Std. Err. z P> z [95% Conf. Interval] income age male In order to get the coefficients (log of odds), simply run .logit without any argument right after the .logistic command, or run an independent .logit command with all arguments. . logit owncar income age male Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration 2: log likelihood = Iteration 3: log likelihood = Logistic regression Number of obs = 437 LR chi2(3) = 18.4 Prob > chi2 = Log likelihood = Pseudo R2 = 0.03 owncar Coef. Std. Err. z P> z [95% Conf. Interval] income age male _cons
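The correspondence between the two commands can be checked directly from the stored coefficients. In this small illustration, exponentiating a .logit coefficient reproduces the odds ratio reported by .logistic (and, conversely, the .logit coefficient is the log of the odds ratio, as noted below).

. logit owncar income age male
. display exp(_b[income])
. display exp(_b[male])

The displayed values should match the Odds Ratio column of the .logistic output above.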

128 , The Trustees of Indiana University Categorical Dependent Variable Models: 6 Note that a coefficient of the.logit is the logarithmic transformed corresponding estimator of the.logistic. For example, = log(1.7966). STATA has post-estimation commands that conduct follow-up analyses. The.predict command computes predictions, residuals, or standard errors of the prediction and stores them into a new variable.. predict r, residual The.test and.lrtest commands respectively conduct the Wald test and likelihood ratio test.. test income age ( 1) income = 0 ( ) age = 0 chi( ) = 1.57 Prob > chi = Using the SPost Module in STATA The SPost module provides useful follow-up analysis commands (ado files) for various categorical dependent variable models (Long and Freese 003). The.fitstat command calculates various goodness-of-fit statistics such as log likelihood, McFadden s R (or Pseudo R ), Akaike Information Criterion (AIC), and (Bayesian Information Criterion (BIC).. fitstat Measures of Fit for logistic of owncar Log-Lik Intercept Only: Log-Lik Full Model: D(433): LR(3): Prob > LR: McFadden's R: 0.03 McFadden's Adj R: Maximum Likelihood R: Cragg & Uhler's R: McKelvey and Zavoina's R: Efron's R: Variance of y*: Variance of error: 3.90 Count R: Adj Count R: AIC: 1.7 AIC*n: BIC: BIC': The likelihood ratio for goodness of fit is computed as,. di *( (-8.965)) The.listcoef command lists unstandardized coefficients (parameter estimates), factor and percent changes, and standardized coefficients to help interpret results. The help option tells how to read the outputs.. listcoef, help

129 , The Trustees of Indiana University Categorical Dependent Variable Models: 7 logistic (N=437): Factor Change in Odds Odds of: 1 vs owncar b z P> z e^b e^bstdx SDofX income age male b = raw coefficient z = z-score for test of b=0 P> z = p-value for z-test e^b = exp(b) = factor change in odds for unit increase in X e^bstdx = exp(b*sd of X) = change in odds for SD increase in X SDofX = standard deviation of X The.prtab command constructs a table of predicted values (events) for all combinations of categorical variables listed. The following example shows that 60 percent of female and 70 percent of male students are likely to own cars, given the mean values of income and age.. prtab male logistic: Predicted probabilities of positive outcome for owncar male Prediction income age male x= The.prvalue lists predicted probabilities of positive and negative outcomes for a given set of values for the independent variables. Note both the.prtab and.prvalue commands report the identical predicted probability that male students own cars,.6017, holding other variables at their means.. prvalue, x(male=0) rest(mean) logistic: Predictions for owncar Pr(y=1 x): % ci: (0.586,0.6706) Pr(y=0 x): % ci: (0.394,0.4714) income age male x= The most useful command is the.prchange, which calculates marginal effects (changes) and discrete changes at the given set of values of independent variables. The help option tells how to read the outputs. For instance, the predicted probability that a male students owns a car is.094 (0->1) higher than that of female students, holding other variables at their mean.. prchange, help

130 , The Trustees of Indiana University Categorical Dependent Variable Models: 8 logit: Changes in Predicted Probabilities for owncar min->max 0->1 -+1/ -+sd/ MargEfct income age male Pr(y x) income age male x= sd(x)= Pr(y x): probability of observing each y for specified x values Avg Chg : average of absolute value of the change across categories Min->Max: change in predicted probability as x changes from its minimum to its maximum 0->1: change in predicted probability as x changes from 0 to 1 -+1/: change in predicted probability as x changes from 1/ unit below base value to 1/ unit above -+sd/: change in predicted probability as x changes from 1/ standard dev below base to 1/ standard dev above MargEfct: the partial derivative of the predicted probability/rate with respect to a given independent variable The SPost module also includes the.prgen, which computes a series of predictions by holding all variables but one interval variable constant and allowing that variable to vary (Long and Freese 003).. prgen income, from(.1) to(1.5) x(male=1) rest(median) generate(ppcar) logistic: Predicted values as income varies from.1 to 1.5. income age male x= The above command computes predicted probabilities that male students own cars when income changes from $100 through $1,500, holding age at its median of 1 and stores them into a new variable ppcar..3 Using the SAS LOGISTIC and PROBIT Procedures SAS has several procedures for the binary logit model such as the LOGISTIC, PROBIT, GENMOD, and QLIM. The LOGISTIC procedure is commonly used for the binary logit model, but the PROBIT procedure also estimates the binary logit. Let us first consider the LOGISTIC procedure. PROC LOGISTIC DESCENDING DATA = masil.students; MODEL owncar = income age male; RUN; The LOGISTIC Procedure

131 , The Trustees of Indiana University Categorical Dependent Variable Models: 9 Model Information Data Set MASIL.STUDENTS Response Variable owncar Number of Response Levels Model binary logit Optimization Technique Fisher's scoring Number of Observations Read 437 Number of Observations Used 437 Response Profile Ordered Total Value owncar Frequency Probability modeled is owncar=1. Model Convergence Status Convergence criterion (GCONV=1E-8) satisfied. Model Fit Statistics Intercept Intercept and Criterion Only Covariates AIC SC Log L Testing Global Null Hypothesis: BETA=0 Test Chi-Square DF Pr > ChiSq Likelihood Ratio Score Wald Analysis of Maximum Likelihood Estimates Standard Wald Parameter DF Estimate Error Chi-Square Pr > ChiSq Intercept income

132 , The Trustees of Indiana University Categorical Dependent Variable Models: 10 age male Odds Ratio Estimates Point 95% Wald Effect Estimate Confidence Limits income age male Association of Predicted Probabilities and Observed Responses Percent Concordant 58.9 Somers' D 0.46 Percent Discordant 34.3 Gamma 0.64 Percent Tied 6.8 Tau-a 0.11 Pairs 4345 c 0.63 The SAS LOGISTIC, PROBIT, and GENMOD procedures by default uses a smaller value in the dependent variable as success. Thus, the magnitudes of the coefficients remain the same, but the signs are opposite to those of the QLIM procedure, STATA, and LIMDEP. The DESCENDING option forces SAS to use a larger value as success. Alternatively, you may explicitly specify the category of successful event using the EVENT option as follows. PROC LOGISTIC DESCENDING DATA = masil.students; MODEL owncar(event= 1 ) = income age male; RUN; The SAS LOGISTIC procedure computes odds changes when independent variables increase by the units specified in the UNITS statement. The SD below indicates a standard deviation increase in income and age (e.g., - means a two unit decrease in independent variables). PROC LOGISTIC DESCENDING DATA = masil.students; MODEL owncar = income age male; UNITS income=sd age=sd; RUN; The UNITS statement adds the Adjusted Odds Ratios to the end of the outputs above. Note that the odds changes of the two variables are identical to those under the e^bstdx of the previous SPost.listcoef output. Adjusted Odds Ratios Effect Unit Estimate income age Now, let us use the PROBIT procedure to estimate the same binary logit model. The PROBIT requires the CLASS statement to list categorical variables. The /DIST=LOGISTIC option indicates the probability distribution to be used in maximum likelihood estimation.

133 , The Trustees of Indiana University Categorical Dependent Variable Models: 11 PROC PROBIT DATA = masil.students; CLASS owncar; MODEL owncar = income age male /DIST=LOGISTIC; RUN; Probit Procedure Model Information Data Set MASIL.STUDENTS Dependent Variable owncar Number of Observations 437 Name of Distribution Logistic Log Likelihood Number of Observations Read 437 Number of Observations Used 437 Class Level Information Name Levels Values owncar 0 1 Response Profile Ordered Total Value owncar Frequency PROC PROBIT is modeling the probabilities of levels of owncar having LOWER Ordered Values in the response profile table. Algorithm converged. Type III Analysis of Effects Wald Effect DF Chi-Square Pr > ChiSq income age male Analysis of Parameter Estimates Standard 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq

134 , The Trustees of Indiana University Categorical Dependent Variable Models: 1 Intercept income age male Unlike LOGISTIC, PROBIT does not have the DESCENDING option. Thus, you have to switch the signs of coefficients when comparing with those of STATA and LIMDEP. The PROBIT procedure also does not have the UNITS statement to compute changes in odds..4 Using the SAS GENMOD and QLIM Procedures The GENMOD provides flexible methods to estimate generalized linear model. The DISTRIBUTION (DIST) and the LINK=LOGIT options respectively specify a probability distribution and a link function. PROC GENMOD DATA = masil.students DESC; MODEL owncar = income age male /DIST=BINOMIAL LINK=LOGIT; RUN; The GENMOD Procedure Model Information Data Set Distribution Link Function Dependent Variable MASIL.STUDENTS Binomial Logit owncar Number of Observations Read 437 Number of Observations Used 437 Number of Events 84 Number of Trials 437 Response Profile Ordered Total Value owncar Frequency PROC GENMOD is modeling the probability that owncar='1'. Criteria For Assessing Goodness Of Fit Criterion DF Value Value/DF Deviance Scaled Deviance Pearson Chi-Square Scaled Pearson X

135 , The Trustees of Indiana University Categorical Dependent Variable Models: 13 Log Likelihood Algorithm converged. Analysis Of Parameter Estimates Standard Wald 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept income age male Scale NOTE: The scale parameter was held fixed. If you have categorical (string) independent variables, list the variables in the CLASS statement without creating dummy variables. PROC GENMOD DATA = masil.students DESC; CLASS male; MODEL owncar = income age male /DIST=BINOMIAL LINK=LOGIT; RUN; Users may also provide their own link functions using the FWDLINK and INVLINK statements instead of the LINK=LOGIT option. PROC GENMOD DATA = masil.students DESC; FWDLINK link=log(_mean_/(1-_mean_)); INVLINK invlink=1/(1+exp(-1*_xbeta_)); MODEL owncar = income age male /DIST=BINOMIAL; RUN; All three GENMOD examples discussed so far produce the identical result. The QLIM procedure estimates not only logit and probit models, but also censored, truncated, and sample-selected models. You may provide characteristics of the dependent variable either in the ENDOGENOUS statement or the option of the MODEL statement. PROC QLIM DATA=masil.students; MODEL owncar = income age male; ENDOGENOUS owncar ~ DISCRETE (DIST=LOGIT); RUN; Or, PROC QLIM DATA=masil.students; MODEL owncar = income age male /DISCRETE (DIST=LOGIT); RUN; The QLIM Procedure

136 , The Trustees of Indiana University Categorical Dependent Variable Models: 14 Discrete Response Profile of owncar Index Value Frequency Percent Model Fit Summary Number of Endogenous Variables 1 Endogenous Variable owncar Number of Observations 437 Log Likelihood Maximum Absolute Gradient E-6 Number of Iterations 8 AIC Schwarz Criterion Goodness-of-Fit Measures Measure Value Formula Likelihood Ratio (R) * (LogL - LogL0) Upper Bound of R (U) * LogL0 Aldrich-Nelson R / (R+N) Cragg-Uhler exp(-r/n) Cragg-Uhler (1-exp(-R/N)) / (1-exp(-U/N)) Estrella (1-R/U)^(U/N) Adjusted Estrella ((LogL-K)/LogL0)^(-/N*LogL0) McFadden's LRI 0.03 R / U Veall-Zimmermann (R * (U+N)) / (U * (R+N)) McKelvey-Zavoina N = # of observations, K = # of regressors Algorithm converged. Parameter Estimates Standard Approx Parameter Estimate Error t Value Pr > t Intercept income age male Finally, the CATMOD procedure fits the logit model to the functions of categorical response variables. This procedure, however, produces slightly different estimators compared to those of other procedures discussed so far. This procedure is, therefore, less recommended for the binary logit model. The DIRECT statement specifies interval or ratio variables used in the MODEL. The /NOPROFILE suppresses the display of the population profiles and the response profiles.

137 , The Trustees of Indiana University Categorical Dependent Variable Models: 15 PROC CATMOD DATA = masil.students; DIRECT income age; MODEL owncar = income age male /NOPROFILE; RUN;.5 Binary Logit in LIMDEP (Logit$) The Logit$ command in LIMDEP estimates various logit models. The dependent variable is specified in the Lhs$ (left-hand side) subcommand and a list of independent variables in the Rhs$ (right-hand side). You have to explicitly specify the ONE for the intercept. The Marginal Effects$ and the Means$ subcommands compute marginal effects at the mean values of independent variables. LOGIT; Lhs=owncar; Rhs=ONE,income,age,male; Marginal Effects; Means$ Normal exit from iterations. Exit status= Multinomial Logit Model Maximum Likelihood Estimates Model estimated: Sep 17, 005 at 05:31:8PM. Dependent variable OWNCAR Weighting variable None Number of observations 437 Iterations completed 5 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom 3 Prob[ChiSqd > value] = E-03 Hosmer-Lemeshow chi-squared = P-value= with deg.fr. = Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Characteristics in numerator of Prob[Y = 1] Constant INCOME E AGE E MALE (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Information Statistics for Discrete Choice Model. M=Model MC=Constants Only M0=No Model Criterion F (log L) LR Statistic vs. MC Degrees of Freedom Prob. Value for LR Entropy for probs Normalized Entropy

138 , The Trustees of Indiana University Categorical Dependent Variable Models: 16 Entropy Ratio Stat Bayes Info Criterion BIC - BIC(no model) Pseudo R-squared Pct. Correct Prec Means: y=0 y=1 y= y=3 yu=4 y=5, y=6 y>=7 Outcome Pred.Pr Notes: Entropy computed as Sum(i)Sum(j)Pfit(i,j)*logPfit(i,j). Normalized entropy is computed against M0. Entropy ratio statistic is computed against M0. BIC = *criterion - log(n)*degrees of freedom. If the model has only constants or if it has no constants, the statistics reported here are not useable Partial derivatives of probabilities with respect to the vector of characteristics. They are computed at the means of the Xs Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Characteristics in numerator of Prob[Y = 1] Constant INCOME E AGE E E Marginal effect for dummy variable is P 1 - P 0. MALE E E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Fit Measures for Binomial Choice Model Logit model for variable OWNCAR Proportions P0= P1= N = 437 N0= 153 N1= 84 LogL = LogL0 = Estrella = 1-(L/L0)^(-L0/n) = Efron McFadden Ben./Lerman Cramer Veall/Zim. Rsqrd_ML Information Akaike I.C. Schwarz I.C. Criteria Frequencies of actual & predicted outcomes Predicted outcome has maximum probability. Threshold value for predicting Y=1 =.5000 Predicted Actual 0 1 Total

139 , The Trustees of Indiana University Categorical Dependent Variable Models: Total Note that the marginal effects above are identical to those of the SPost.prchange command in section.. LIMDEP computes discrete changes for binary variables like male..6 Binary Logit in SPSS SPSS has the Logistic regression command for the binary logit model. LOGISTIC REGRESSION VAR=owncar /METHOD=ENTER income age male /CRITERIA PIN(.05) POUT(.10) ITERATE(0) CUT(.5). Table 3 summarizes parameter estimates and goodness-of-fit statistics across procedures and commands for the binary logit model. Estimates and their standard errors produced are almost identical except some rounding errors. As shown in Table 3, the QLIM and LOGISTIC are recommended for categorical dependent variables. Note that the PROBIT procedure returns the opposite signs of estimates. Table 3. Parameter Estimates and Goodness-of-fit Statistics of the Binary Logit Model LOGISTIC PROBIT GENMOD QLIM STATA LIMDEP Intercept (1.4745) (1.4745) (1.4745) (1.4745) (1.4745) (1.4745) income (.5736).010 (.5736) (.5736) (.5736) (.5736) (.5736) age.466 (.0695) (.0695).466 (.0695).466 (.0695).466 (.0695).466 (.0695) male.4145 (.056) (.056).4145 (.056).4145 (.056).4145 (.056).4145 (.056) Log likelihood * Likelihood test Pseudo R AIC ** ** Schwarz BIC * The LOGISTIC procedure reports (-*log likelihood). ** AIC*N
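The information criteria in Table 3 can also be reproduced from STATA's stored results. The following is a minimal sketch, assuming the usual definitions AIC = -2*lnL + 2*k and BIC = -2*lnL + k*ln(N), where k counts all estimated parameters including the intercept; keep in mind that the packages scale and label these statistics somewhat differently, so the figures in Table 3 may follow slightly different conventions.

. quietly logit owncar income age male
. display -2*e(ll) + 2*(e(df_m) + 1)
. display -2*e(ll) + (e(df_m) + 1)*ln(e(N))

Here e(ll) is the log likelihood, e(df_m) the number of regressors, and e(N) the number of observations; the SPost .fitstat command reports these and several other measures automatically.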

140 , The Trustees of Indiana University Categorical Dependent Variable Models: The Binary Probit Regression Model The probit model is represented as Pr ob( y = 1 x) = Φ( xβ ), where Φ indicates the cumulative standard normal probability distribution function. 3.1 Binary Probit in STATA (.probit) STATA has the.probit command to estimate the binary probit regression model.. probit owncar income age male Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Probit regression Number of obs = 437 LR chi(3) = Prob > chi = Log likelihood = Pseudo R = owncar Coef. Std. Err. z P> z [95% Conf. Interval] income age male _cons In order to get standardized estimates and factor changes, run the SPost.listcoef command.. listcoef probit (N=437): Unstandardized and Standardized Estimates Observed SD: Latent SD: owncar b z P> z bstdx bstdy bstdxy SDofX income age male You may compute the marginal effects and discrete change using the SPost.prchange.. prchange, x(income=1 age=1 male=0) probit: Changes in Predicted Probabilities for owncar min->max 0->1 -+1/ -+sd/ MargEfct income age

141 , The Trustees of Indiana University Categorical Dependent Variable Models: 19 male Pr(y x) income age male x= sd(x)= Using the PROBIT and LOGISTIC Procedures The PROBIT and LOGISTIC procedures estimate the binary probit model. Keep in mind that the coefficients of PROBIT has opposite signs. PROC PROBIT DATA = masil.students; CLASS owncar; MODEL owncar = income age male; RUN; Probit Procedure Model Information Data Set MASIL.STUDENTS Dependent Variable owncar Number of Observations 437 Name of Distribution Normal Log Likelihood Number of Observations Read 437 Number of Observations Used 437 Class Level Information Name Levels Values owncar 0 1 Response Profile Ordered Total Value owncar Frequency PROC PROBIT is modeling the probabilities of levels of owncar having LOWER Ordered Values in the response profile table. Algorithm converged.

142 , The Trustees of Indiana University Categorical Dependent Variable Models: 0 Type III Analysis of Effects Wald Effect DF Chi-Square Pr > ChiSq income age male Analysis of Parameter Estimates Standard 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept income age male The LOGISTIC procedure requires a normal probability distribution as a link function (/LINK=PROBIT or /LINK=NORMIT). PROC LOGISTIC DATA = masil.students DESC; MODEL owncar = income age male /LINK=PROBIT; RUN; The LOGISTIC Procedure Model Information Data Set MASIL.STUDENTS Response Variable owncar Number of Response Levels Model binary probit Optimization Technique Fisher's scoring Number of Observations Read 437 Number of Observations Used 437 Response Profile Ordered Total Value owncar Frequency Probability modeled is owncar=1. Model Convergence Status

143 , The Trustees of Indiana University Categorical Dependent Variable Models: 1 Convergence criterion (GCONV=1E-8) satisfied. Model Fit Statistics Intercept Intercept and Criterion Only Covariates AIC SC Log L Testing Global Null Hypothesis: BETA=0 Test Chi-Square DF Pr > ChiSq Likelihood Ratio Score Wald Analysis of Maximum Likelihood Estimates Standard Wald Parameter DF Estimate Error Chi-Square Pr > ChiSq Intercept income age male Association of Predicted Probabilities and Observed Responses Percent Concordant 57.8 Somers' D 0.49 Percent Discordant 3.9 Gamma 0.74 Percent Tied 9.3 Tau-a Pairs 4345 c Using the GENMODE and QLIM Procedures The GENMOD procedure also estimates the binary probit model using the /DIST=BINOMIAL and /LINK=PROBIT options in the MODEL statement. PROC GENMOD DATA = masil.students DESC; MODEL owncar = income age male /DIST=BINOMIAL LINK=PROBIT; RUN; The GENMOD Procedure Model Information Data Set MASIL.STUDENTS

144 , The Trustees of Indiana University Categorical Dependent Variable Models: Distribution Link Function Dependent Variable Binomial Probit owncar Number of Observations Read 437 Number of Observations Used 437 Number of Events 84 Number of Trials 437 Response Profile Ordered Total Value owncar Frequency PROC GENMOD is modeling the probability that owncar='1'. Criteria For Assessing Goodness Of Fit Criterion DF Value Value/DF Deviance Scaled Deviance Pearson Chi-Square Scaled Pearson X Log Likelihood Algorithm converged. Analysis Of Parameter Estimates Standard Wald 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept income age male Scale NOTE: The scale parameter was held fixed. The QLIM procedure provides various goodness-of-fit statistics. The DIST=NORMAL option indicates the normal probability distribution used in estimation. PROC QLIM DATA=masil.students; MODEL owncar = income age male /DISCRETE (DIST=NORMAL); RUN; The QLIM Procedure

145 , The Trustees of Indiana University Categorical Dependent Variable Models: 3 Discrete Response Profile of owncar Index Value Frequency Percent Model Fit Summary Number of Endogenous Variables 1 Endogenous Variable owncar Number of Observations 437 Log Likelihood Maximum Absolute Gradient E-8 Number of Iterations 10 AIC Schwarz Criterion Goodness-of-Fit Measures Measure Value Formula Likelihood Ratio (R) * (LogL - LogL0) Upper Bound of R (U) * LogL0 Aldrich-Nelson R / (R+N) Cragg-Uhler exp(-r/n) Cragg-Uhler (1-exp(-R/N)) / (1-exp(-U/N)) Estrella (1-R/U)^(U/N) Adjusted Estrella ((LogL-K)/LogL0)^(-/N*LogL0) McFadden's LRI R / U Veall-Zimmermann (R * (U+N)) / (U * (R+N)) McKelvey-Zavoina N = # of observations, K = # of regressors Algorithm converged. Parameter Estimates Standard Approx Parameter Estimate Error t Value Pr > t Intercept income age male Binary Probit in LIMDEP (Probit$) The LIMDEP Probit$ command estimates various probit models. Do not forget to include the ONE for the intercept.

146 , The Trustees of Indiana University Categorical Dependent Variable Models: 4 PROBIT; Lhs=owncar; Rhs=ONE,income,age,male$ Normal exit from iterations. Exit status= Binomial Probit Model Maximum Likelihood Estimates Model estimated: Sep 17, 005 at 10:8:56PM. Dependent variable OWNCAR Weighting variable None Number of observations 437 Iterations completed 4 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom 3 Prob[ChiSqd > value] =.3854E-03 Hosmer-Lemeshow chi-squared = P-value= with deg.fr. = Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Index function for probability Constant INCOME E AGE E MALE (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Fit Measures for Binomial Choice Model Probit model for variable OWNCAR Proportions P0= P1= N = 437 N0= 153 N1= 84 LogL = LogL0 = Estrella = 1-(L/L0)^(-L0/n) = Efron McFadden Ben./Lerman Cramer Veall/Zim. Rsqrd_ML Information Akaike I.C. Schwarz I.C. Criteria Frequencies of actual & predicted outcomes Predicted outcome has maximum probability. Threshold value for predicting Y=1 =.5000 Predicted Actual 0 1 Total

147 , The Trustees of Indiana University Categorical Dependent Variable Models: Total Binary Probit in SPSS SPSS has the Probit command to fit the binary probit model. This command requires a variable (e.g., n in the following example) with constant 1. COMPUTE n=1. PROBIT owncar OF n WITH income age male /LOG NONE /MODEL PROBIT /PRINT FREQ /CRITERIA ITERATE(0) STEPLIMIT(.1). Table 4 summarizes parameter estimates and goodness-of-fit statistics produced. Note that the LOGISTIC procedure reports slightly different estimates and standard errors. I would recommend the SAS QLIM procedure, STATA, and LIMDEP for the binary probit model. Table 4.Parameter Estimates and Goodness-of-fit Statistics of the Binary Probit Model LOGISTIC PROBIT GENMOD QLIM STATA LIMDEP Intercept (.8796).837 (.8731) (.8731) (.8731) (.8731) (.8731) income.0005 (.3496) (.3477).0006 (.3477).0006 (.3477).0006 (.3477).0006 (.3477) age.1487 (.0413) (.0410).1487 (.0410).1487 (.0410).1487 (.0410).1487 (.0410) male.579 (.157) (.156).579 (.156).579 (.156).579 (.156).579 (.156) Log likelihood * Likelihood test Pseudo R AIC ** ** Schwarz BIC * The LOGISTIC procedure reports (-*log likelihood). ** AIC*N
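Comparing Tables 3 and 4 also illustrates the scaling relationship mentioned in section 1.2: binary logit coefficients are roughly π/√3 (about 1.8) times the corresponding probit coefficients. A quick check in STATA (an illustrative sketch, not part of the outputs above):

. quietly logit owncar income age male
. scalar b_logit = _b[male]
. quietly probit owncar income age male
. display b_logit / _b[male]
. display _pi / sqrt(3)

The two displayed numbers will not agree exactly, since the factor is only an approximation, but they should be close.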

4. Bivariate Probit/Logit Regression Models Bivariate regression models have two equations, one for each of two dependent variables. This chapter explains the bivariate regression model with two binary dependent variables. Like the seemingly unrelated regression (SUR) model, bivariate probit/logit models assume that the error terms of the two equations are correlated (Greene 2003). The bivariate probit model, although relatively time-consuming, is more likely to converge than the bivariate logit model. SAS supports both the bivariate probit and logit models, while STATA and LIMDEP estimate only the bivariate probit model. Here we consider a model for car ownership (owncar) and housing type (offcamp). 4.1 Bivariate Probit in STATA (.biprobit) STATA has the .biprobit command to estimate the bivariate probit model. The two dependent variables precede the set of independent variables. . biprobit owncar offcamp income age male Fitting comparison equation 1: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration 2: log likelihood = Iteration 3: log likelihood = Fitting comparison equation 2: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration 2: log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Iteration 5: log likelihood = Comparison: log likelihood = Fitting full model: Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration 2: log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Iteration 5: log likelihood = Iteration 6: log likelihood = Iteration 7: log likelihood = Bivariate probit regression Number of obs = 437 Wald chi2(6) = Log likelihood = Prob > chi2 = Coef. Std. Err. z P> z [95% Conf. Interval]

149 , The Trustees of Indiana University Categorical Dependent Variable Models: 7 owncar income age male _cons offcamp income age male _cons /athrho rho Likelihood-ratio test of rho=0: chi(1) = Prob > chi = Bivariate Probit in SAS The SAS QLIM procedure is able to estimate both the bivariate logit and probit models. You need to provide two equations that may or may not have different sets of independent variables. PROC QLIM DATA=masil.students; MODEL owncar = income age male; MODEL offcamp = income age male; ENDOGENOUS owncar offcamp ~ DISCRETE(DIST=NORMAL); RUN; Or, simply, PROC QLIM DATA=masil.students; MODEL owncar offcamp = income age male /DISCRETE; RUN; The QLIM Procedure Discrete Response Profile of owncar Index Value Frequency Percent Discrete Response Profile of offcamp Index Value Frequency Percent Model Fit Summary Number of Endogenous Variables

150 , The Trustees of Indiana University Categorical Dependent Variable Models: 8 Endogenous Variable owncar offcamp Number of Observations 437 Log Likelihood Maximum Absolute Gradient.16967E-6 Number of Iterations 7 AIC Schwarz Criterion Algorithm converged. Parameter Estimates Standard Approx Parameter Estimate Error t Value Pr > t owncar.intercept owncar.income owncar.age owncar.male offcamp.intercept offcamp.income offcamp.age <.0001 offcamp.male _Rho Bivariate Probit in LIMDEP (Bivariateprobit$) LIMDEP has the Bivariateprobit$ command to estimate the bivariate probit model. The Lhs$ subcommand lists the two binary dependent variables, whereas Rh1$ and Rh$ respectively indicate independent variables for the two dependent variables. In this model, you may not switch the order of dependent variables (Lhs=owncar,offcamp;) to avoid convergence problems. BIVARIATEPROBIT; Lhs=offcamp,owncar; Rh1=ONE,income,age,male; Rh= ONE,income,age,male$ Normal exit from iterations. Exit status= FIML Estimates of Bivariate Probit Model Maximum Likelihood Estimates Model estimated: Sep 17, 005 at 10:36:5PM. Dependent variable OFFOWN Weighting variable None Number of observations 437 Iterations completed 35 Log likelihood function Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Index equation for OFFCAMP

151 , The Trustees of Indiana University Categorical Dependent Variable Models: 9 Constant INCOME AGE MALE Index equation for OWNCAR Constant INCOME E AGE E MALE Disturbance correlation RHO(1,) E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Joint Frequency Table: Columns=OWNCAR Rows =OFFCAMP (N) = Count of Fitted Values 0 1 TOTAL ( 0) ( 0) ( 0) ( 0) ( 437) ( 437) TOTAL ( 0) ( 437) ( 437) SAS, STATA, and LIMDEP produce almost the same parameter estimates and standard errors with slight differences after the decimal point. 4.4 Bivariate Logit in SAS The QLIM procedure also estimates the bivariate logit model using the DIST=LOGIT option. Unfortunately, this model does not fit in SAS. PROC QLIM DATA=masil.students; MODEL owncar = income age male; MODEL offcamp = income age male; ENDOGENOUS offcamp owncar ~ DISCRETE(DIST=LOGIT); RUN; Or, PROC QLIM DATA=masil.students; MODEL owncar offcamp = income age male /DISCRETE(DIST=LOGIT); RUN;
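The likelihood-ratio test of rho = 0 that STATA prints at the bottom of the .biprobit output compares the bivariate model with two separate probit equations. The statistic can be reconstructed by hand, which may help clarify what is being tested; the sketch below relies on the log likelihoods stored in e(ll).

. quietly probit owncar income age male
. scalar ll_own = e(ll)
. quietly probit offcamp income age male
. scalar ll_off = e(ll)
. quietly biprobit owncar offcamp income age male
. display 2*(e(ll) - (ll_own + ll_off))

Under the null hypothesis that the two error terms are uncorrelated, the statistic follows a chi-squared distribution with one degree of freedom.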

5. Ordered Logit/Probit Regression Models Suppose we have an ordinal dependent variable such as the degree of illegal parking (0=none, 1=sometimes, and 2=often). The ordered logit and probit models have the parallel regression assumption, which is violated from time to time. 5.1 Ordered Logit/Probit in STATA (.ologit and .oprobit) STATA has the .ologit and .oprobit commands to estimate the ordered logit and probit models, respectively. . ologit parking income age male Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration 2: log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Iteration 5: log likelihood = Ordered logistic regression Number of obs = 437 LR chi2(3) = 7.85 Prob > chi2 = Log likelihood = Pseudo R2 = parking Coef. Std. Err. z P> z [95% Conf. Interval] income age male /cut1 /cut2 STATA estimates the cut points τm (/cut1 and /cut2), assuming β0 = 0 (Long and Freese 2003). This parameterization is different from that of SAS and LIMDEP, which assume τ1 = 0. . oprobit parking income age male Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration 2: log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Ordered probit regression Number of obs = 437 LR chi2(3) = 8.71 Prob > chi2 = Log likelihood = Pseudo R2 = parking Coef. Std. Err. z P> z [95% Conf. Interval] income

153 , The Trustees of Indiana University Categorical Dependent Variable Models: 31 age male /cut /cut The Parallel Assumption and the Generalized Ordered Logit Model The.brant command of SPost is valid only in the.ologit command. This command tests the parallel regression assumption of the ordinal regression model. The outputs here are skipped.. quietly ologit parking income male. brant The parallel regression assumption is often violated. If this is the case, you may use the multinomial regression model or estimate the generalized ordered logit model (GOLM) using either the.gologit command written by Fu (1998) or the.gologit command by Williams (005). Note that Fu s module does not impose the restriction of ( τ j xβ j ) ( τ j 1 xβ j 1 ) (Long s class note 003).. gologit parking income age male, autofit Testing parallel lines assumption using the.05 level of significance... Step 1: male meets the pl assumption (P Value = ) Step : income meets the pl assumption (P Value = ) Step 3: age meets the pl assumption (P Value = ) Step 4: All explanatory variables meet the pl assumption Wald test of parallel lines assumption for the final model: ( 1) [0]male - [1]male = 0 ( ) [0]income - [1]income = 0 ( 3) [0]age - [1]age = 0 chi( 3) = 0.04 Prob > chi = An insignificant test statistic indicates that the final model does not violate the proportional odds/ parallel lines assumption If you re-estimate this exact same model with gologit, instead of autofit you can save time by using the parameter pl(male income age) Generalized Ordered Logit Estimates Number of obs = 437 Wald chi(3) = 1.74 Prob > chi = Log likelihood = Pseudo R = ( 1) [0]male - [1]male = 0

154 , The Trustees of Indiana University Categorical Dependent Variable Models: 3 ( ) [0]income - [1]income = 0 ( 3) [0]age - [1]age = 0 parking Coef. Std. Err. z P> z [95% Conf. Interval] income age male _cons income age male _cons Ordered Logit in SAS The QLIM, LOGISTIC, and PROBIT procedures estimate ordered logit and probit models. As shown in Tables 3 and 4, the QLIM procedure is most recommended. Note that the DIST=LOGISTIC indicates the logit model to be estimated. PROC QLIM DATA=masil.students; MODEL parking = income age male /DISCRETE (DIST=LOGISTIC); RUN; The QLIM Procedure Discrete Response Profile of parking Index Value Frequency Percent Model Fit Summary Number of Endogenous Variables 1 Endogenous Variable parking Number of Observations 437 Log Likelihood Maximum Absolute Gradient E-7 Number of Iterations 3 AIC Schwarz Criterion Goodness-of-Fit Measures Measure Value Formula Likelihood Ratio (R) * (LogL - LogL0)

155 , The Trustees of Indiana University Categorical Dependent Variable Models: 33 Upper Bound of R (U) * LogL0 Aldrich-Nelson R / (R+N) Cragg-Uhler exp(-r/n) Cragg-Uhler (1-exp(-R/N)) / (1-exp(-U/N)) Estrella (1-R/U)^(U/N) Adjusted Estrella ((LogL-K)/LogL0)^(-/N*LogL0) McFadden's LRI R / U Veall-Zimmermann (R * (U+N)) / (U * (R+N)) McKelvey-Zavoina N = # of observations, K = # of regressors Algorithm converged. Parameter Estimates Standard Approx Parameter Estimate Error t Value Pr > t Intercept income age male _Limit <.0001 The SAS QLIM procedure estimates the intercept and τ, assuming τ = 1 0. The estimated intercept of SAS is equivalent to (0-/cut1) in STATA. The _Limit of SAS is the difference between cut points of STATA, = ( ). The SAS LOGISTIC and PROBIT procedures are also used to estimate the ordered logit and probit models. These procedures recognize binary or ordinal response models by examining the dependent variable. PROC LOGISTIC DATA = masil.students DESC; MODEL parking = income age male /LINK=LOGIT; RUN; Like the STATA.ologit command, The LOGISTIC procedure fits the model, assuming the intercept is zero. The parameter estimates and standard errors are slightly different from those of the QLIM procedure and the.ologit command. Other parts of the output are skipped. Analysis of Maximum Likelihood Estimates Standard Wald Parameter DF Estimate Error Chi-Square Pr > ChiSq Intercept Intercept income age male PROC PROBIT DATA = masil.students;

156 , The Trustees of Indiana University Categorical Dependent Variable Models: 34 RUN; CLASS parking; MODEL parking = income age male /DIST=LOGISTIC; The PROBIT procedure returns almost the same results as the QLIM procedure except for the signs of the estimates. Other parts of the output are skipped. Analysis of Parameter Estimates Standard 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept Intercept <.0001 income age male Ordered Probit in SAS The QLIM procedure by default estimates a probit model. The DIST=NORMAL, the default option, may be omitted. PROC QLIM DATA=masil.students; MODEL parking = income age male /DISCRETE (DIST=NORMAL); RUN; The QLIM Procedure Discrete Response Profile of parking Index Value Frequency Percent Model Fit Summary Number of Endogenous Variables 1 Endogenous Variable parking Number of Observations 437 Log Likelihood Maximum Absolute Gradient E-6 Number of Iterations 17 AIC Schwarz Criterion Goodness-of-Fit Measures Measure Value Formula

157 , The Trustees of Indiana University Categorical Dependent Variable Models: 35 Likelihood Ratio (R) * (LogL - LogL0) Upper Bound of R (U) * LogL0 Aldrich-Nelson R / (R+N) Cragg-Uhler exp(-r/n) Cragg-Uhler (1-exp(-R/N)) / (1-exp(-U/N)) Estrella (1-R/U)^(U/N) Adjusted Estrella ((LogL-K)/LogL0)^(-/N*LogL0) McFadden's LRI R / U Veall-Zimmermann (R * (U+N)) / (U * (R+N)) McKelvey-Zavoina N = # of observations, K = # of regressors Algorithm converged. Parameter Estimates Standard Approx Parameter Estimate Error t Value Pr > t Intercept income age male _Limit <.0001 The QLIM procedure and.oprobit command produce almost the same result except for the τ estimate. The _Limit of SAS is the difference of the cut points of STATA,.8831= ( ). The PROBIT and LOGISTIC procedures also estimate the ordered probit model. Keep in mind that the signs of the coefficients are reversed in the PROBIT procedure. PROC LOGISTIC DATA = masil.students DESC; MODEL parking = income age male /LINK=PROBIT; RUN; Analysis of Maximum Likelihood Estimates Standard Wald Parameter DF Estimate Error Chi-Square Pr > ChiSq Intercept Intercept income age <.0001 male PROC PROBIT DATA = masil.students; CLASS parking; MODEL parking = income age male /DIST=NORMAL; RUN;

158 , The Trustees of Indiana University Categorical Dependent Variable Models: 36 Analysis of Parameter Estimates Standard 95% Confidence Chi- Parameter DF Estimate Error Limits Square Pr > ChiSq Intercept Intercept <.0001 income age male Ordered Logit/Probit in LIMDEP (Ordered$) The LIMDEP Ordered$ command estimates ordered logit and probit models. The Logit$ subcommand runs the ordered logit model. ORDERED; Lhs=parking; Rhs=ONE,income,age,male; Logit$ Normal exit from iterations. Exit status= Ordered Probability Model Maximum Likelihood Estimates Model estimated: Sep 18, 005 at 05:53:44PM. Dependent variable PARKING Weighting variable None Number of observations 437 Iterations completed 13 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom 3 Prob[ChiSqd > value] = E-05 Underlying probabilities based on Logistic Cell frequencies for outcomes Y Count Freq Y Count Freq Y Count Freq Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Index function for probability Constant INCOME AGE MALE Threshold parameters for index Mu(1) Cross tabulation of predictions. Row is actual, column is predicted.

159 , The Trustees of Indiana University Categorical Dependent Variable Models: 37 Model = Logistic. Prediction is number of the most probable cell Actual Row Sum Col Sum LIMDEP and SAS QLIM produce the same results for the ordered logit model. Note that _Limit in SAS is equivalent to Mu(1), the threshold parameter, in LIMDEP. The ordered probit model is estimated by the Ordered$ command without the Logit$ subcommand. The command by default fits the ordered logit model. The output is comparable to that of the QLIM procedure. ORDERED; Lhs=parking; Rhs=ONE,income,age,male$ Normal exit from iterations. Exit status= Ordered Probability Model Maximum Likelihood Estimates Model estimated: Sep 18, 005 at 05:55:4PM. Dependent variable PARKING Weighting variable None Number of observations 437 Iterations completed 11 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom 3 Prob[ChiSqd > value] =.57557E-05 Underlying probabilities based on Normal Cell frequencies for outcomes Y Count Freq Y Count Freq Y Count Freq Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Index function for probability Constant INCOME AGE E MALE Threshold parameters for index Mu(1) (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Cross tabulation of predictions. Row is actual, column is predicted.

160 , The Trustees of Indiana University Categorical Dependent Variable Models: 38 Model = Probit. Prediction is number of the most probable cell Actual Row Sum Col Sum Ordered Logit/Probit in SPSS The Plum command estimates the ordered logit and probit models in SPSS. The Threshold points in SPSS are equivalent to the cut points in STATA. PLUM parking WITH income age male /CRITERIA = CIN(95) DELTA(0) LCONVERGE(0) MXITER(100) MXSTEP(5) PCONVERGE(1.0E-6) SINGULAR(1.0E-8) /LINK = LOGIT /PRINT = FIT PARAMETER SUMMARY. PLUM parking WITH income age male /CRITERIA = CIN(95) DELTA(0) LCONVERGE(0) MXITER(100) MXSTEP(5) PCONVERGE(1.0E-6) SINGULAR(1.0E-8) /LINK = PROBIT /PRINT = FIT PARAMETER SUMMARY.
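If you need to move between the STATA parameterization and the SAS/LIMDEP parameterization discussed above, the conversion can be computed from the stored cut points. The sketch below assumes that the two cut points occupy the last two columns of the coefficient vector e(b) saved by .ologit (the regressors first, then /cut1 and /cut2); check the column layout with . matrix list e(b) if in doubt.

. quietly ologit parking income age male
. matrix b = e(b)
. display -el(b, 1, 4)
. display el(b, 1, 5) - el(b, 1, 4)

The first displayed value corresponds to the intercept reported by the SAS QLIM procedure, and the second to _Limit in SAS and Mu(1) in LIMDEP.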

161 , The Trustees of Indiana University Categorical Dependent Variable Models: The Multinomial Logit Regression Model Suppose we have a nominal dependent variable such as the mode of transportation (walk, bike, bus, and car). The multinomial logit and conditional logit models are commonly used; the multinomial probit model is not often used mainly due to the practical difficulty in estimation. However, STATA does have the.mprobit command to fit the model. In the multinomial logit model, the independent variables contain characteristics of individuals, while they are the attributes of the choices in the conditional logit model. In other words, the conditional logit estimates how alternative-specific, not individual-specific, variables affect the likelihood of observing a given outcome (Long 003). Therefore, data need to be appropriately arranged in advance. 6.1 Multinomial Logit/Probit in STATA (.mlogit and.mprobit) STATA has the.mlogit command for the multinomial logit model. The base() option indicates the value of the dependent variable to be used as the base category for the estimation. You may omit the default option, base(0).. mlogit transmode income age male, base(0) Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Multinomial logistic regression Number of obs = 437 LR chi(9) = Prob > chi = Log likelihood = Pseudo R = transmode Coef. Std. Err. z P> z [95% Conf. Interval] income age male _cons income age male _cons income age male _cons (transmode==0 is the base outcome)
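Exponentiated multinomial logit coefficients (relative risk ratios) are often easier to interpret than the raw coefficients; they can be requested by adding the rrr option to the command above (a minor variation; output not shown).

. mlogit transmode income age male, base(0) rrr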

162 , The Trustees of Indiana University Categorical Dependent Variable Models: 40 Let us see if the base outcome changes. As shown in the following, the parameter estimates and standard errors are changed, whereas the goodness-of-fit remains unchanged. The two.mlogit commands with different bases fit the same model but present the result in different manner. The SAS CATMOD procedure in the next section uses the largest value as the base outcome.. mlogit transmode income age male, base(3) Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Multinomial logistic regression Number of obs = 437 LR chi(9) = Prob > chi = Log likelihood = Pseudo R = transmode Coef. Std. Err. z P> z [95% Conf. Interval] income age male _cons income age male _cons income age male _cons (transmode==3 is the base outcome) The SPost.mlogtest command conducts a variety of statistical tests for the multinomial logit model. This command supports not only Wald and likelihood ratio tests, but also Hausman and Small-Hsiao tests for the independence of irrelevant alternatives (IIA) assumption. The.mlogtest command works with the.mlogit command only.. mlogtest, hausman smhsiao base **** Hausman tests of IIA assumption Ho: Odds(Outcome-J vs Outcome-K) are independent of other alternatives. Omitted chi df P>chi evidence for Ho for Ho for Ho for Ho

163 , The Trustees of Indiana University Categorical Dependent Variable Models: 41 **** Small-Hsiao tests of IIA assumption Ho: Odds(Outcome-J vs Outcome-K) are independent of other alternatives. Omitted lnl(full) lnl(omit) chi df P>chi evidence for Ho for Ho against Ho for Ho The STATA.mprobit command fits the multinomial probit model. The model took longer time to converge than the multinomial logit model.. mprobit transmode income age male Iteration 0: log likelihood = Iteration 1: log likelihood = Iteration : log likelihood = Iteration 3: log likelihood = Iteration 4: log likelihood = Multinomial probit regression Number of obs = 437 Wald chi(9) = Log likelihood = Prob > chi = transmode Coef. Std. Err. z P> z [95% Conf. Interval] _outcome_ income age male _cons _outcome_3 income age male _cons _outcome_4 income age male _cons (transmode=0 is the base outcome) 6. Multinomial Logit in SAS SAS has the CATMOD procedure for the multinomial logit model. In the CATMOD procedure, the RESPONSE statement is used to specify the functions of response probabilities. PROC CATMOD DATA = masil.students; DIRECT income age male; RESPONSE LOGITS;

164 , The Trustees of Indiana University Categorical Dependent Variable Models: 4 MODEL transmode = income age male /NOPROFILE; RUN; The CATMOD Procedure Data Summary Response transmode Response Levels 4 Weight Variable None Populations 414 Data Set STUDENTS Total Frequency 437 Frequency Missing 0 Observations 437 Maximum Likelihood Analysis Maximum likelihood computations converged. Maximum Likelihood Analysis of Variance Source DF Chi-Square Pr > ChiSq ƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒ Intercept income <.0001 age male Likelihood Ratio 1E Analysis of Maximum Likelihood Estimates Function Standard Chi- Parameter Number Estimate Error Square Pr > ChiSq ƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒƒ Intercept < income < <.0001 age male As mentioned before, the CATMOD procedure uses the largest value of the dependent variable as a base outcome. Accordingly, you need to compare the above with the STATA output of the base(3) option. The two outputs are the same except for the likelihood ratio. 6.3 Multinomial Logit in LIMDEP (Mlogit$)

165 , The Trustees of Indiana University Categorical Dependent Variable Models: 43 In LIMDEP, you may use either the Mlogit$ or simply the Logit$ commands to fit the multinomial logit model. Both commands produce the identical result. Like STATA, LIMDEP by default uses the smallest value as the base outcome. MLOGIT; Lhs=transmod; Rhs=ONE,income,age,male$ Or, use the old style command. LOGIT; Lhs=transmod; Rhs=ONE,income,age,male$ Normal exit from iterations. Exit status= Multinomial Logit Model Maximum Likelihood Estimates Model estimated: Sep 19, 005 at 09:19:3AM. Dependent variable TRANSMOD Weighting variable None Number of observations 437 Iterations completed 6 Log likelihood function Restricted log likelihood Chi squared Degrees of freedom 9 Prob[ChiSqd > value] = Variable Coefficient Standard Error b/st.er. P[ Z >z] Mean of X Characteristics in numerator of Prob[Y = 1] Constant INCOME AGE MALE Characteristics in numerator of Prob[Y = ] Constant INCOME AGE MALE Characteristics in numerator of Prob[Y = 3] Constant INCOME AGE E MALE (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) Information Statistics for Discrete Choice Model. M=Model MC=Constants Only M0=No Model Criterion F (log L) LR Statistic vs. MC

166 , The Trustees of Indiana University Categorical Dependent Variable Models: 44 Degrees of Freedom Prob. Value for LR Entropy for probs Normalized Entropy Entropy Ratio Stat Bayes Info Criterion BIC - BIC(no model) Pseudo R-squared Pct. Correct Prec Means: y=0 y=1 y= y=3 yu=4 y=5, y=6 y>=7 Outcome Pred.Pr Notes: Entropy computed as Sum(i)Sum(j)Pfit(i,j)*logPfit(i,j). Normalized entropy is computed against M0. Entropy ratio statistic is computed against M0. BIC = *criterion - log(n)*degrees of freedom. If the model has only constants or if it has no constants, the statistics reported here are not useable Frequencies of actual & predicted outcomes Predicted outcome has maximum probability. Predicted Actual Total Total Note that the variable name TRANSMOD was truncated because LIMDEP allows up to eight characters for a variable name. LIMDEP and STATA produce the same result of the multinomial logit model. 6.4 Multinomial Logit in SPSS SPSS has the Nomreg command to estimate the multinomial logit model. Like SAS, SPSS by default uses the largest value as the base outcome. NOMREG transmode WITH income age male /CRITERIA CIN(95) DELTA(0) MXITER(100) MXSTEP(5) CHKSEP(0) LCONVERGE(0) PCONVERGE( ) SINGULAR( ) /MODEL /INTERCEPT INCLUDE /PRINT PARAMETER SUMMARY LRT.
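Whichever package is used, raw multinomial logit coefficients are hard to read directly, so it usually helps to move to the relative-risk (odds) or predicted-probability scale. The following is a minimal Stata sketch, assuming the students data used above is in memory; the names p0 through p3 are arbitrary new variables, not part of the original data set.

. * refit the model, reporting exponentiated coefficients (relative-risk ratios)
. mlogit transmode income age male, base(0) rrr

. * predicted probabilities for each outcome (0=walk, 1=bike, 2=bus, 3=car)
. predict p0 p1 p2 p3
. summarize p0 p1 p2 p3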

7. The Conditional Logit Regression Model

Imagine a choice of travel mode among air flight, train, bus, and car. The data set and model here are adopted from Greene (2003). The model examines how the generalized cost measure (cost), terminal waiting time (time), and household income (income) affect the choice. These independent variables are not characteristics of subjects (individuals), but attributes of the alternatives. Thus, the data arrangement of the conditional logit model is different from that of the multinomial logit model (Figure 2).

Figure 2. Data Arrangement for the Conditional Logit Model
[Data listing omitted; the columns are subject, mode, choice, air, train, bus, cost, time, income, and air_inc.]

The example data set has four observations per subject, each of which contains the attributes of using air flight, train, bus, and car. The dependent variable choice is coded 1 only if a subject chooses that travel mode. The four dummy variables, air, train, bus, and car, flag the corresponding modes of transportation. See the appendix for details about the data set.

7.1 Conditional Logit in STATA (.clogit)

STATA has the .clogit command to estimate the conditional logit model. The group() option specifies the variable (e.g., an identification number) that identifies unique individuals.

. clogit choice air train bus cost time air_inc, group(subject)
Iteration 0: log likelihood =
Iteration 1: log likelihood =
Iteration 2: log likelihood =
Iteration 3: log likelihood =
Iteration 4: log likelihood =
Conditional (fixed-effects) logistic regression    Number of obs = 840
                                                   LR chi2(6) =
                                                   Prob > chi2 =
Log likelihood =                                   Pseudo R2 =
choice  Coef.  Std. Err.  z  P>|z|  [95% Conf. Interval]
air

168 , The Trustees of Indiana University Categorical Dependent Variable Models: 46 train bus cost time air_inc Let us run the.listcoef command to compute factor changes in odds. For a one unit increase in the waiting time for a given travel mode, for example, we can expect a decrease in the odds of using that travel by 9 percent (or a factor of.9084), holding other variables constant.. listcoef clogit (N=840): Factor Change in Odds Odds of: 1 vs choice b z P> z e^b air train bus cost time air_inc Conditional Logit in SAS SAS has the MDC procedure to fit the conditional logit model. The TYPE=CLOGIT indicates the conditional logit model; the ID statement specifies the identification variable; and the NCHOICE=4 tells that there are four choices of the travel mode. PROC MDC DATA=masil.travel; MODEL choice = air train bus cost time air_inc /TYPE=CLOGIT NCHOICE=4; ID subject; RUN; The MDC Procedure Conditional Logit Estimates Algorithm converged. Model Fit Summary Dependent Variable choice Number of Observations 10 Number of Cases 840 Log Likelihood Maximum Absolute Gradient.7315E-8 Number of Iterations 5 Optimization Method Newton-Raphson AIC

169 , The Trustees of Indiana University Categorical Dependent Variable Models: 47 Schwarz Criterion Discrete Response Profile Index CHOICE Frequency Percent Goodness-of-Fit Measures Measure Value Formula Likelihood Ratio (R) * (LogL - LogL0) Upper Bound of R (U) * LogL0 Aldrich-Nelson R / (R+N) Cragg-Uhler exp(-r/n) Cragg-Uhler 0.65 (1-exp(-R/N)) / (1-exp(-U/N)) Estrella (1-R/U)^(U/N) Adjusted Estrella ((LogL-K)/LogL0)^(-/N*LogL0) McFadden's LRI R / U Veall-Zimmermann (R * (U+N)) / (U * (R+N)) N = # of observations, K = # of regressors Conditional Logit Estimates Parameter Estimates Standard Approx Parameter DF Estimate Error t Value Pr > t air <.0001 train <.0001 bus <.0001 cost time <.0001 air_inc Alternatively, you may use the PHREG procedure that estimates the Cox proportional hazards model for survival data and the conditional logit model. In order to make the data set consistent with the survival analysis data, you need to create a failure time variable, failure=1 choice. The identification variable is specified in the STRATA statement. The NOSUMMARY option suppresses the display of the event and censored observation frequencies. PROC PHREG DATA=masil.travel NOSUMMARY; STRATA subject; MODEL failure*choice(0)=air train bus cost time air_inc; RUN;

170 , The Trustees of Indiana University Categorical Dependent Variable Models: 48 The PHREG Procedure Model Information Data Set MASIL.TRAVEL Dependent Variable failure Censoring Variable choice Censoring Value(s) 0 Ties Handling BRESLOW Number of Observations Read 840 Number of Observations Used 840 Convergence Status Convergence criterion (GCONV=1E-8) satisfied. Model Fit Statistics Without With Criterion Covariates Covariates - LOG L AIC SBC Testing Global Null Hypothesis: BETA=0 Test Chi-Square DF Pr > ChiSq Likelihood Ratio <.0001 Score <.0001 Wald <.0001 Analysis of Maximum Likelihood Estimates Parameter Standard Hazard Variable DF Estimate Error Chi-Square Pr > ChiSq Ratio air < train < bus < cost time < air_inc While the MDC procedure reports t statistics, the PHREG procedure computes chi-squared (e.g., =-3.5^). The PHREG presents the hazard ratio at the last column of the output, which is equivalent to the factor changes under the e^b column of the SPost.listcoef command.
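As a quick cross-check on this equivalence, Stata can report the conditional logit estimates directly on the odds-ratio scale, which should line up with the e^b column of .listcoef and with the hazard ratios from PROC PHREG. A minimal sketch, assuming the travel data set used in section 7.1 is in memory:

. * same model as section 7.1, but with exponentiated coefficients (odds ratios)
. clogit choice air train bus cost time air_inc, group(subject) or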

171 , The Trustees of Indiana University Categorical Dependent Variable Models: Conditional Logit in LIMDEP (Clogit$) LIMDEP fits the conditional logit model using either the Clogit$ or the Logit$ command. The Clogit$ command has the Choices$ subcommand to list the choices available. CLOGIT; Lhs=choice; Rhs=air,train,bus,cost,time,air_inc; Choices=air,train,bus,car$ Normal exit from iterations. Exit status= Discrete choice (multinomial logit) model Maximum Likelihood Estimates Model estimated: Sep 19, 005 at 09:0:39PM. Dependent variable Choice Weighting variable None Number of observations 10 Iterations completed 6 Log likelihood function Log-L for Choice model = R=1-LogL/LogL* Log-L fncn R-sqrd RsqAdj Constants only Response data are given as ind. choice. Number of obs.= 10, skipped 0 bad obs Variable Coefficient Standard Error b/st.er. P[ Z >z] AIR TRAIN BUS COST E E TIME E E AIR_INC E E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) The Clogit$ command has the Ias$ subcommand to conduct the Hausman test for the IIA assumption (e.g., Ias=air,bus$). Unfortunately, the subcommand does not work in this model because the Hessian is not positive definite. The Logit$ command takes the panel data analysis approach. The Pds$ subcommand specifies the number of time periods. The two commands produce the same result. LOGIT; Lhs=choice; Rhs=air,train,bus,cost,time,air_inc; Pds=4$ Panel Data Binomial Logit Model Number of individuals = 10

172 , The Trustees of Indiana University Categorical Dependent Variable Models: 50 Number of periods = 4 Conditioning event is the sum of CHOICE Distribution of sums over the 4 periods: Sum Number Pct Normal exit from iterations. Exit status= Logit Model for Panel Data Maximum Likelihood Estimates Model estimated: Sep 19, 005 at 09:1:58PM. Dependent variable CHOICE Weighting variable None Number of observations 840 Iterations completed 6 Log likelihood function Hosmer-Lemeshow chi-squared = P-value= with deg.fr. = 8 Fixed Effects Logit Model for Panel Data Variable Coefficient Standard Error b/st.er. P[ Z >z] AIR TRAIN BUS COST E E TIME E E AIR_INC E E (Note: E+nn or E-nn means multiply by 10 to + or -nn power.) 7.4 Conditional Logit in SPSS Like the SAS PHREG procedure, the SPSS Coxreg command, which was designed for survival analysis data, provides a backdoor way of estimating the conditional logit model. COXREG failure WITH air train bus cost time air_inc /STATUS=choice(1) /STRATA=subject.
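Both survival-analysis backdoors, PROC PHREG in SAS and the Coxreg command in SPSS, rely on the failure indicator described in the appendix, failure = 1 - choice: the chosen alternative is treated as the event and the remaining alternatives as censored. If the variable were not already in the data set, it could be created in one line; a hypothetical Stata sketch:

. * chosen alternative (choice=1) becomes the event; the others are censored
. generate failure = 1 - choice
. tabulate failure choice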

8. The Nested Logit Regression Model

Now, consider a nested structure of choices. When the IIA assumption is violated, one alternative is the nested (multinomial) logit model. This chapter replicates the nested logit model discussed in Greene (2003).

P(choice, branch) = P(choice | branch) * P(branch)
P(choice | branch) = P_child(α1*air + α2*train + α3*bus + β1*cost + β2*time)
P(branch) = P_parent(γ_income*air_inc + τ_fly*IV_fly + τ_ground*IV_ground)

8.1 Nested Logit in STATA (.nlogit)

The STATA .nlogit command estimates the nested multinomial logit model. First you need to create a variable based on the specification of the tree using the .nlogitgen command. From the top, the parent level has the fly and ground branches; the fly branch at the child level has air flight (1); the ground branch has train (2), bus (3), and car (4).

. nlogitgen tree = mode(fly: 1, ground: 2 3 4)
new variable tree is generated with 2 groups
label list lb_tree
lb_tree:
  1 fly
  2 ground

The .nlogittree command displays the tree structure defined by the .nlogitgen command.

. nlogittree mode tree
tree structure specified for the nested logit model
top --> bottom
  tree     mode
  fly      1
  ground   2 3 4

The .nlogit command consists of three parts. The dependent or choice variable follows the command. Utility functions of the parent and child levels are then specified. The group() option specifies an identification or grouping variable.

. nlogit choice (mode=air train bus cost time) (tree=air_inc), ///
        group(subject) notree nolog

Nested logit regression             Levels = 2
                                    Number of obs = 840
Dependent variable = choice         LR chi2(8) =

174 , The Trustees of Indiana University Categorical Dependent Variable Models: 5 Log likelihood = Prob > chi = Coef. Std. Err. z P> z [95% Conf. Interval] mode air train bus cost time tree air_inc (incl. value parameters) tree /fly /ground LR test of homoskedasticity (iv = 1): chi()= Prob > chi = The notree option does not show the tree-structure and the nolog suppresses an iteration log of the log likelihood. Note that the /// joins the next command line with the current line. 8. Nested Logit in SAS The SAD MDC procedure fits the conditional logit model as well as the nested multinomial logit model. For the nested logit model, you have to use the UTILITY statement to specify utility functions of the parent (level ) and child level (level 1), and the NEST statement to construct the decision-tree structure. Note that 3 reads that there are three nodes at the child level under the branch at the parent-level. PROC MDC DATA=masil.travel; MODEL choice = air train bus cost time air_inc /TYPE=NLOGIT CHOICE=(mode); ID subject; UTILITY U(1,) = air train bus cost time, U(, 1 ) = air_inc; NEST LEVEL(1) = 1, 3 ), LEVEL() = 1); RUN; The MDC Procedure Nested Logit Estimates Algorithm converged. Model Fit Summary Dependent Variable choice Number of Observations 10 Number of Cases 840

175 , The Trustees of Indiana University Categorical Dependent Variable Models: 53 Log Likelihood Maximum Absolute Gradient Number of Iterations 15 Optimization Method Newton-Raphson AIC Schwarz Criterion Discrete Response Profile Index mode Frequency Percent Goodness-of-Fit Measures Measure Value Formula Likelihood Ratio (R) * (LogL - LogL0) Upper Bound of R (U) * LogL0 Aldrich-Nelson R / (R+N) Cragg-Uhler exp(-r/n) Cragg-Uhler (1-exp(-R/N)) / (1-exp(-U/N)) Estrella (1-R/U)^(U/N) Adjusted Estrella ((LogL-K)/LogL0)^(-/N*LogL0) McFadden's LRI R / U Veall-Zimmermann (R * (U+N)) / (U * (R+N)) N = # of observations, K = # of regressors Nested Logit Estimates Parameter Estimates Standard Approx Parameter DF Estimate Error t Value Pr > t air_l <.0001 train_l <.0001 bus_l <.0001 cost_l time_l <.0001 air_inc_lg INC_LG1C <.0001 INC_LG1C The /fly and /ground in the STATA output above are equivalent to the INC_LG1C1 and INC_LG1C in the SAS output. SAS and STATA produce the same result.
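The LR test of homoskedasticity reported by .nlogit can also be reconstructed by hand as a likelihood-ratio test of the conditional logit model, which restricts both inclusive-value parameters to 1, against the nested logit model. The sketch below is a rough illustration in Stata, assuming the travel data and the tree variable from section 8.1; because the two models come from different estimation commands, lrtest is given the force option, and the result should be read with the usual caution.

. * restricted model: conditional logit (inclusive-value parameters fixed at 1)
. clogit choice air train bus cost time air_inc, group(subject)
. estimates store restricted

. * unrestricted model: nested logit from section 8.1
. nlogit choice (mode=air train bus cost time) (tree=air_inc), group(subject) notree nolog
. estimates store nested

. * likelihood-ratio test of the restriction (2 degrees of freedom)
. lrtest nested restricted, force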

9. Conclusion

The appropriate type of categorical dependent variable model (CDVM) is determined largely by the level of measurement of the dependent variable. The level of measurement should, however, be considered in conjunction with your theory and research questions (Long 1997). You must also examine the data generation process (DGP) of a dependent variable to understand its behavior. Sophisticated researchers pay special attention to censoring, truncation, sample selection, and other particular patterns of the DGP.

If your dependent variable is binary, you may use the binary logit or probit regression model. For ordinal responses, fit the ordered logit/probit regression models. If you have a nominal response variable, investigate the DGP carefully and then choose among the multinomial logit, conditional logit, and nested logit models. The conditional logit and nested logit models require a different data setup.

You should check the key assumptions of the CDVMs when fitting the models. Examples are the parallel regression assumption in the ordered logit model and the independence of irrelevant alternatives (IIA) assumption in the multinomial logit model. You may conduct the Brant test and the Hausman test for these assumptions.

Since CDVMs are nonlinear, they produce estimates that are difficult to interpret intuitively. Consequently, researchers need to spend more time and effort interpreting the results substantively. Reporting parameter estimates and goodness-of-fit statistics is not sufficient. J. Scott Long (1997) and Long and Freese (2003) provide good examples of meaningful interpretation using predicted probabilities, factor changes in odds, and marginal/discrete changes of predicted probabilities.

Regarding statistical software for CDVMs, I would recommend the SAS QLIM and MDC procedures of SAS/ETS (see Tables 3 and 4). SAS has other procedures such as LOGISTIC, GENMOD, and PROBIT for CDVMs, but the QLIM procedure seems best for binary and ordinal response models, and the MDC procedure is good for nominal dependent variable models. I also strongly recommend STATA with SPost, since it has various useful commands for CDVMs such as .prchange, .listcoef, and .prtab. I encourage SAS Institute to develop additional statements similar to those SPost commands. LIMDEP supports various CDVMs addressed in Greene (2003) but does not seem stable and reliable; thus, I recommend LIMDEP only for CDVMs that SAS and STATA do not support. SPSS is not currently recommended for CDVMs.

177 , The Trustees of Indiana University Categorical Dependent Variable Models: 55 Appendix: Data Sets The first data set students is a subset of data provided for David H. Good s class in the School of Public and Environmental Affairs (SPEA). The data were manipulated for the sake of data security. owncar: 1 if a student owns a car parking: Illegal parking (0=none, 1=sometimes, and =often) offcamp: 1 if a student lives off-campus transmode: the mode of transportation (0=walk, 1=bike, =bus, 3=car) age: students age income: monthly income male: 1 for male and 0 for female. tab male owncar owncar male 0 1 Total Total tab male offcamp offcamp male 0 1 Total Total tab male parking parking male 0 1 Total Total tab male transmode transmode male Total Total

178 , The Trustees of Indiana University Categorical Dependent Variable Models: 56. sum income age Variable Obs Mean Std. Dev. Min Max income age The second data set travel on travel mode choice is adopted from Greene (003). You may get the data from subject: identification number mode: 1=Air, =Train, 3=Bus, 4=Car choice: 1 if the travel mode is chosen time: terminal waiting time, 0 for car cost: generalized cost measure income: household income air_inc: interaction of air flight and household income, air*income air: 1 for the air flight mode, 0 for others train: 1 for the train mode, 0 for others bus: 1 for the bus mode, 0 for others car: 1 for the car mode, 0 for others failure: failure time variable, 1-choice. tab choice mode mode choice Total Total sum time income air_inc Variable Obs Mean Std. Dev. Min Max time income air_inc
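For completeness, the mode dummies and the interaction term listed above can be constructed from mode and income alone (the failure indicator was shown in section 7). A minimal Stata sketch using only the variable definitions given in this appendix:

. * mode dummies (1=Air, 2=Train, 3=Bus, 4=Car)
. generate air = (mode == 1)
. generate train = (mode == 2)
. generate bus = (mode == 3)
. generate car = (mode == 4)

. * interaction of the air dummy with household income
. generate air_inc = air * income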

References

Allison, Paul D. 1999. Logistic Regression Using the SAS System: Theory and Application. Cary, NC: SAS Institute.
Greene, William H. 2002. LIMDEP Version 8.0 Econometric Modeling Guide. Plainview, New York: Econometric Software.
Greene, William H. 2003. Econometric Analysis, 5th ed. Upper Saddle River, NJ: Prentice Hall.
Long, J. Scott, and Jeremy Freese. 2003. Regression Models for Categorical Dependent Variables Using STATA, 2nd ed. College Station, TX: STATA Press.
Long, J. Scott. 1997. Regression Models for Categorical and Limited Dependent Variables. Advanced Quantitative Techniques in the Social Sciences. Sage Publications.
Maddala, G. S. 1983. Limited Dependent and Qualitative Variables in Econometrics. New York: Cambridge University Press.
SAS Institute. 2004. SAS/STAT 9.1 User's Guide. Cary, NC: SAS Institute.
SPSS Inc. 2001. SPSS 11.0 Syntax Reference Guide. Chicago, IL: SPSS Inc.
STATA Press. 2003. STATA Base Reference Manual, Release 8. College Station, TX: STATA Press.
Stokes, Maura E., Charles S. Davis, and Gary G. Koch. 2000. Categorical Data Analysis Using the SAS System, 2nd ed. Cary, NC: SAS Institute.

Acknowledgements

I am grateful to Jeremy Albright and Kevin Wilhite at the UITS Center for Statistical and Mathematical Computing for comments and suggestions. I also thank J. Scott Long in Sociology and David H. Good in the School of Public and Environmental Affairs, Indiana University, for their insightful lectures and data set.

Revision History

2003. First draft
2004. Second draft
2005. Third draft (added bivariate logit/probit models and the nested logit model with LIMDEP examples)
