# Multicollinearity

Richard Williams, University of Notre Dame, http://www3.nd.edu/~rwilliam/
Last revised January 13, 2015


Stata Example (see appendices for the full example)

```
. use clear

. reg y x1 x2, beta
  [F(2, 97) = 3.24; most of the numeric output was not preserved in this transcription]

. corr y x1 x2, means
(obs=100)
```

Note that:

- The t-statistics for the individual coefficients are not significant, yet the overall F is.
- Even though both IVs have the same standard deviations and almost identical correlations with Y, their estimated effects are radically different.
- X1 and X2 are very highly correlated (r12 = .95).
- The N is small.

These are all indicators that multicollinearity might be a problem in these data. (See the appendices for more ways of detecting problems using Stata.)

What multicollinearity is

Let H = the set of all the X (independent) variables, and let Gk = the set of all the X variables except Xk. The formula for the standard error of bk is then:

    s_bk = √[ (1 − R²_YH) / ((1 − R²_XkGk) · (N − K − 1)) ] · (s_Y / s_Xk)
         = √[ (1 − R²_YH) / (Tol_k · (N − K − 1)) ] · (s_Y / s_Xk)
         = √[ VIF_k · (1 − R²_YH) / (N − K − 1) ] · (s_Y / s_Xk)

Questions: What happens to the standard errors as R²_YH increases? As N increases? As the multiple correlation between one IV and the others increases? As K increases?

From the above formulas, it is apparent that:

- The bigger R²_YH is, the smaller the standard error will be.
- Larger sample sizes decrease standard errors (because the (N − K − 1) term in the denominator gets bigger). This reflects the fact that larger samples produce more precise estimates of regression coefficients.
- The bigger R²_XkGk is (i.e. the more highly correlated Xk is with the other IVs in the model), the bigger the standard error will be. Indeed, if Xk is perfectly correlated with the other IVs, the standard error equals infinity. This is referred to as the problem of multicollinearity. The problem is that, as the X's become more highly correlated, it becomes more and more difficult to determine which X is actually producing the effect on Y.
- 1 − R²_XkGk is referred to as the Tolerance of Xk. A tolerance close to 1 means there is little multicollinearity, whereas a value close to 0 suggests that multicollinearity may be a threat.
- The reciprocal of the tolerance is known as the Variance Inflation Factor (VIF). The VIF shows us how much the variance of the coefficient estimate is being inflated by multicollinearity. For example, if the VIF for a variable were 9, its standard error would be three times as large as it would be if its VIF were 1. In such a case, the coefficient would have to be three times as large to be statistically significant.
- Adding more variables to the equation can increase the size of the standard errors, especially if the extra variables do not produce increases in R²_YH.
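To make the formula concrete, here is a small numeric sketch (with made-up values, not the handout's data) confirming the VIF = 9 claim: the standard error is √9 = 3 times its VIF = 1 value.

```python
import math

def se_slope(r2_yh, r2_xkgk, n, k, sd_y, sd_xk):
    """Standard error of b_k from the formula above.
    r2_yh   : R-squared from regressing Y on all the X's
    r2_xkgk : R-squared from regressing X_k on the other X's
    """
    tol = 1 - r2_xkgk  # tolerance of X_k
    return math.sqrt((1 - r2_yh) / (tol * (n - k - 1))) * sd_y / sd_xk

# Hypothetical values: R2_YH = .5, N = 100, K = 2, both SDs = 1
se_clean    = se_slope(0.5, 0.0,   100, 2, 1, 1)  # tolerance 1, VIF = 1
se_inflated = se_slope(0.5, 8/9.0, 100, 2, 1, 1)  # tolerance 1/9, VIF = 9

print(se_inflated / se_clean)  # ratio is sqrt(9) = 3: the SE is tripled
```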
Adding more variables decreases the (N − K − 1) part of the denominator. More variables can also decrease the tolerance of a variable and hence increase its standard error. In short, adding extraneous variables to a model tends to reduce the precision of all your estimates.

Causes of multicollinearity

- Improper use of dummy variables (e.g. failure to exclude one category)

- Including a variable that is computed from other variables in the equation (e.g. family income = husband's income + wife's income, and the regression includes all three income measures)
- Including essentially the same variable twice (height in feet and height in inches; or, more commonly, two different operationalizations of the same concept)

The above all imply some sort of error on the researcher's part. But it may just be that the variables really and truly are highly correlated.

Consequences of multicollinearity

- Even extreme multicollinearity (so long as it is not perfect) does not violate OLS assumptions. OLS estimates are still unbiased and BLUE (Best Linear Unbiased Estimators).
- Nevertheless, the greater the multicollinearity, the greater the standard errors. When high multicollinearity is present, confidence intervals for coefficients tend to be very wide and t-statistics tend to be very small. Coefficients will have to be larger in order to be statistically significant, i.e. it will be harder to reject the null when multicollinearity is present. Note, however, that large standard errors can be caused by things besides multicollinearity.
- When two IVs are highly and positively correlated, their slope coefficient estimators will tend to be highly and negatively correlated. When, for example, b1 is greater than β1, b2 will tend to be less than β2, and a different sample will likely produce the opposite result. In other words, if you overestimate the effect of one parameter, you will tend to underestimate the effect of the other. Hence, coefficient estimates tend to be very shaky from one sample to the next.

Detecting high multicollinearity

Multicollinearity is a matter of degree; there is no irrefutable test that it is or is not a problem. But there are several warning signals:

- None of the t-ratios for the individual coefficients is statistically significant, yet the overall F statistic is.
  If there are several variables in the model, though, and not all are highly correlated with the others, this alone may not be enough: you could get a mix of significant and insignificant results, disguising the fact that some coefficients are insignificant because of multicollinearity.
- Check to see how stable the coefficients are when different samples are used. For example, you might randomly divide your sample in two. If coefficients differ dramatically, multicollinearity may be a problem. Or, try a slightly different specification of the model using the same data, and see whether seemingly innocuous changes (adding a variable, dropping a variable, using a different operationalization of a variable) produce big shifts. In particular, as variables are added, look for changes in the signs of effects (e.g. switches from positive to negative) that seem theoretically questionable. Such changes may make sense if you believe suppressor effects are present, but otherwise they may indicate multicollinearity.
- Examine the bivariate correlations between the IVs, and look for big values, e.g. .80 and above. Two problems with this approach:

  - One IV may be a linear combination of several IVs, and yet not be highly correlated with any one of them.
  - It is hard to decide on a cutoff point; the smaller the sample, the lower the cutoff point should probably be.
- Ergo, examining the tolerances or VIFs is probably superior to examining the bivariate correlations. Indeed, you may want to actually regress each X on all of the other X's, to help you pinpoint where the problem is. A commonly given rule of thumb is that VIFs of 10 or higher (or, equivalently, tolerances of .10 or less) may be reason for concern. This is, however, just a rule of thumb; Allison says he gets concerned when the VIF is over 2.5 and the tolerance is under .40. In Stata you can use the vif command after running a regression, or you can use the collin command (written by Philip Ender at UCLA).
- Look at the correlations of the estimated coefficients (not the variables). High correlations between pairs of coefficients indicate possible collinearity problems. In Stata you get them by running the vce, corr command after a regression.
- Sometimes condition numbers are used (see the appendix). An informal rule of thumb is that if the condition number is 15 or higher, multicollinearity is a concern; if it is greater than 30, multicollinearity is a very serious concern. (But again, these are just informal rules of thumb.) In Stata you can use collin.

Dealing with multicollinearity

- Make sure you haven't made any flagrant errors, e.g. improper use of computed or dummy variables.
- Increase the sample size. This will usually decrease standard errors, and make it less likely that results are some sort of sampling fluke.
- Use information from prior research. Suppose previous studies have shown that β1 = 2·β2. Then, create a new variable, X3 = 2X1 + X2, and regress Y on X3 instead of on X1 and X2. b3 is then your estimate of β2 and 2b3 is your estimate of β1.
- Use factor analysis or some other means to create a scale from the X's.
  It might even be legitimate just to add the variables together. In fact, you should do this anyway if you feel the X's are simply different operationalizations of the same concept (e.g. several measures might tap the same personality trait). In Stata, relevant commands include factor and alpha.
- Use joint hypothesis tests. Instead of doing t-tests for individual coefficients, do an F test for a group of coefficients (i.e. an incremental F test). So, if X1, X2, and X3 are highly correlated, do an F test of the hypothesis that β1 = β2 = β3 = 0.
- It is sometimes suggested that you drop the offending variable. If you originally added the variable just to see what happens, dropping it may be a fine idea. But if the variable really belongs in the model, this can lead to specification error, which can be even worse than multicollinearity.
- It may be that the best thing to do is simply to realize that multicollinearity is present, and be aware of its consequences.
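The prior-research strategy can be sketched with hypothetical data (not the handout's) in which β1 = 2β2 holds exactly; regressing Y on X3 = 2X1 + X2 then recovers both coefficients from a single slope.

```python
# Hypothetical data generated so that y = 3*x1 + 1.5*x2 exactly,
# i.e. beta1 = 2*beta2, as prior research is assumed to have shown.
x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]
y  = [3*a + 1.5*b for a, b in zip(x1, x2)]

# Impose the constraint: regress y on the single variable x3 = 2*x1 + x2
x3 = [2*a + b for a, b in zip(x1, x2)]

mx = sum(x3) / len(x3)
my = sum(y) / len(y)
b3 = (sum((a - mx) * (c - my) for a, c in zip(x3, y))
      / sum((a - mx) ** 2 for a in x3))

print(b3, 2 * b3)  # b3 estimates beta2 (≈ 1.5); 2*b3 estimates beta1 (≈ 3.0)
```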

Appendix: Stata example

We use the corr, regress, vif, vce, and collin commands.

```
. use clear

. corr y x1 x2, means
(obs=100)
  [means, standard deviations, and correlations not preserved in this transcription]

. reg y x1 x2, beta
  [F(2, 97) = 3.24; remaining output not preserved]

. vif

. vce, corr
```

Note that X1 and X2 are very highly correlated (r12 = .95). Of course, the tolerances for these variables are therefore also very low, and the VIFs exceed our rule of thumb of 10.
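The vce, corr output illustrates a general fact: for two standardized predictors, the correlation between the two slope estimators is −r12, so r12 = .95 yields a coefficient correlation of −.95. A minimal sketch of that algebra (assuming standardized predictors, so that X'X is the correlation matrix):

```python
# Cov(b) is proportional to the inverse of X'X; with two standardized
# predictors, X'X is the correlation matrix R = [[1, r], [r, 1]].
r = 0.95

det = 1 - r**2                      # determinant of the 2x2 matrix
inv = [[ 1/det, -r/det],
       [-r/det,  1/det]]            # inverse of R, proportional to Cov(b)

# Convert the (proportional) covariance of (b1, b2) to a correlation
corr_b1_b2 = inv[0][1] / (inv[0][0]**0.5 * inv[1][1]**0.5)

print(round(corr_b1_b2, 2))  # -0.95: the estimators mirror each other
```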

- The t-statistics for the coefficients are not significant, yet the overall F is.
- Even though both IVs have the same standard deviations and almost identical correlations with Y, their estimated effects are radically different.
- The correlation between the coefficients for X1 and X2 is very high, −.95.
- The sample size is fairly small (N = 100).
- The condition number (reported below) falls within our rule-of-thumb range for concern. Again, this is based on the uncentered variables; if I thought centering was more appropriate, I would just need to change the means of X1 and X2 to 0. (Doing so produces a condition number of 6.245, as Stata confirms below.)

All of these are warning signs of multicollinearity. A change of as little as one or two cases could completely reverse the estimates of the effects.

```
. * Use collin with uncentered data, the default. (Same as SPSS)
. collin x1 x2 if !missing(y)

  Collinearity Diagnostics
  [VIF, SQRT VIF, tolerance, R-squared, eigenvalues, condition indices, condition
   number, and Det(correlation matrix); numeric values not preserved.
   Eigenvalues & Cond Index computed from scaled raw sscp (w/ intercept)]
```

```
. * Use collin with centered data using the corr option
. collin x1 x2 if !missing(y), corr

  Collinearity Diagnostics
  [numeric values not preserved. Eigenvalues & Cond Index computed from
   deviation sscp (no intercept)]
```

collin is a user-written command; type findit collin to locate it and install it on your machine. Note that, with the collin command, you only give the names of the X variables, not the Y. If Y has missing data, you have to make sure that the same cases are analyzed by the collin command that were analyzed by the regress command. There are various ways of doing this; by adding the optional if !missing(y), I told Stata to analyze only those cases that were NOT missing on Y. By default, collin computes the condition number using the raw data (same as SPSS); adding the corr parameter makes it compute the condition number using centered data. [NOTE: coldiag2 is yet another Stata routine that can give you even more information concerning eigenvalues, condition indices, etc.; type findit coldiag2 to locate and install it.]

Incidentally, assuming X1 and X2 are measured the same way (e.g. years, dollars, whatever), a possible solution we might consider is simply to add X1 and X2 together. This would make even more sense if we felt X1 and X2 were alternative measures of the same thing. Adding them could be legitimate if (despite the large differences in their estimated effects) their two effects did not significantly differ from each other. In Stata, we can easily test this:

```
. test x1 = x2

 ( 1)  x1 - x2 = 0

       F(  1,    97) =    0.10
            Prob > F =  [value not preserved]
```

Given that the effects do not significantly differ, we can do the following:

```
. gen x1plusx2 = x1 + x2
```

```
. reg y x1plusx2
  [F(1, 98) = 6.43; remaining output not preserved]
```

The multicollinearity problem is obviously gone (since we only have one IV). As noted before, factor analysis could be used for more complicated scale construction.

Appendix: Models with nonlinear or nonadditive (interaction) terms

Sometimes models include variables that are nonlinear or nonadditive (interactive) functions of other variables in the model. For example, the IVs might include both X and X². Or, the model might include X1, X2, and X1*X2. We discuss the rationale for such models later in the semester. In such cases, the original variables and the variables computed from them can be highly correlated. Also, multicollinearity between nonlinear and nonadditive terms could make it difficult to determine whether there is multicollinearity involving other variables. Consider the following simple example, where X takes on the values 1 through 5 and is correlated with X².

```
. clear all

. set obs 5
obs was 0, now 5

. gen X = _n

. gen XSquare = X ^ 2

. list X XSquare

. corr X XSquare, means
(obs=5)
  [output not preserved; the correlation between X and XSquare is .9811]
```

As we see, the correlation between X and X² is very high (.9811). High correlations can likewise be found with interaction terms.

It is sometimes suggested that, with such models, the original IVs should be centered before computing other variables from them. You center a variable by subtracting the mean from every case. The mean of the centered variable is then zero (the standard deviation, of course, stays the same). The correlations between the IVs will then often be far smaller. For example, if we center X in the above problem by subtracting the mean of 3 from each case before squaring, we get

```
. replace X = X - 3
(5 real changes made)

. replace XSquare = X ^ 2
(5 real changes made)
```

```
. list X XSquare

. corr X XSquare, means
(obs=5)
  [output not preserved; the correlation between the centered X and XSquare is 0]
```

As you see, the extremely high correlation we had before drops to zero when the variable is centered before computing the squared term.

We'll discuss nonlinear and nonadditive models later in the semester. We'll also see other reasons why centering can be advantageous. In particular, centering can make it a lot easier to understand and interpret effects under certain conditions. The specific topic of centering is briefly discussed in Jaccard et al.'s Interaction Effects in Multiple Regression; also see Aiken and West's Multiple Regression: Testing and Interpreting Interactions. Note, incidentally, that centering is only recommended for the IVs; you generally do not need or want to center the DV.
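The centering demonstration above is easy to replicate outside Stata; a minimal sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

X = [1, 2, 3, 4, 5]
r_raw = pearson_r(X, [x**2 for x in X])
print(round(r_raw, 4))       # 0.9811, as in the Stata output

Xc = [x - 3 for x in X]      # center by subtracting the mean of 3
r_centered = pearson_r(Xc, [x**2 for x in Xc])
print(round(r_centered, 4))  # 0.0: the collinearity vanishes
```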

Appendix: Multicollinearity in Non-OLS techniques

The examples above use OLS regression. As we will see, OLS regression is not an appropriate statistical technique for many sorts of problems. For example, if the dependent variable is a dichotomy (e.g. lived or died), logistic regression or probit models are generally better. However, as Menard notes in Applied Logistic Regression Analysis, much of the diagnostic information for multicollinearity (e.g. VIFs) "can be obtained by calculating an OLS regression model using the same dependent and independent variables you are using in your logistic regression model. Because the concern is with the relationship among the independent variables, the functional form of the model for the dependent variable is irrelevant to the estimation of collinearity" (Menard 2002, p. 76). In other words, you could run an OLS regression and ignore most of the results, but still use the information that pertains to multicollinearity. Even more simply, in Stata, the collin command can generally be used regardless of whether the ultimate analysis will be done with OLS regression, logistic regression, or whatever. In short, multicollinearity is not a problem that is unique to OLS regression, and the various diagnostic procedures and remedies described here are not limited to OLS.
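One way to see why the functional form for the DV is irrelevant: tolerances and VIFs are computed from the IVs alone, so Y never enters the calculation. A sketch for the two-predictor case, using the r12 = .95 from the earlier example:

```python
# With two predictors, the tolerance of each X is 1 - r12^2 (the R-squared
# from regressing one X on the other), and VIF is its reciprocal.
# Note that Y appears nowhere in this computation.
r12 = 0.95

tolerance = 1 - r12**2   # R-squared of X1 on X2 is r12^2
vif = 1 / tolerance

print(round(vif, 2), round(tolerance, 4))  # VIF ≈ 10.26, exceeding the rule of thumb of 10
```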

Appendix: Condition Numbers (Optional)

Warning: This is a weak area of mine, so if you really want to understand these you should do a lot more reading. Sometimes eigenvalues, condition indices, and the condition number will be referred to when examining multicollinearity. While all have their uses, I will focus on the condition number. The condition number (κ) is the condition index with the largest value; it equals the square root of the largest eigenvalue (λmax) divided by the smallest eigenvalue (λmin), i.e.

    κ = √(λmax / λmin)

When there is no collinearity at all, the eigenvalues, condition indices, and condition number will all equal one. As collinearity increases, eigenvalues will be both greater and smaller than 1 (eigenvalues close to zero indicate a multicollinearity problem), and the condition indices and the condition number will increase. An informal rule of thumb is that if the condition number is 15 or higher, multicollinearity is a concern; if it is greater than 30, multicollinearity is a very serious concern. (But again, these are just informal rules of thumb.) In SPSS, you get these values by adding the COLLIN parameter to the Regression command; in Stata you can use collin.

CAUTION: There are different ways of computing eigenvalues, and they lead to different results. One common approach is to center the IVs first, i.e. subtract the mean from each variable. (Equivalent approaches analyze the standardized variables or the correlation matrix.) In other instances, the variables are left uncentered. SPSS takes the uncentered approach, whereas Stata's collin can do it both ways. If you center the variables yourself, then both approaches will yield identical results. If your variables have ratio-level measurement (i.e. have a true zero point), then not centering may make sense; if they don't have ratio-level measurement, then I think it makes more sense to center.
In any event, be aware that authors handle this in different ways, and there is sometimes controversy over which approach is most appropriate. I have to admit that I don't fully understand all these issues myself; and I have not seen the condition number and related statistics widely used in Sociology, although they might enjoy wider use in other fields. See Belsley, Kuh and Welsch's Regression Diagnostics: Identifying Influential Data and Sources of Collinearity (1980) for an in-depth discussion.
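For the two-predictor centered case, the eigenvalues of the correlation matrix [[1, r], [r, 1]] are 1 + r and 1 − r, so κ has a simple closed form. With r = .95 this reproduces the 6.245 reported in the Stata appendix (a sketch of the two-variable case only, not Stata's general computation):

```python
import math

# Eigenvalues of the 2x2 correlation matrix [[1, r], [r, 1]] are 1+r and 1-r.
r = 0.95
lam_max, lam_min = 1 + r, 1 - r

kappa = math.sqrt(lam_max / lam_min)  # condition number
print(round(kappa, 3))  # 6.245, matching the centered collin output
```

Note how quickly κ blows up as r approaches 1: the smallest eigenvalue heads toward zero, which is exactly the "eigenvalues close to zero" warning sign above.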

Appendix: SPSS Example (Optional)

Consider the following hypothetical example:

```
MATRIX DATA VARIABLES = Rowtype_ Y X1 X2
  /FORMAT = FREE full /FILE = INLINE /N = 100.
BEGIN DATA.
MEAN
STDDEV
CORR
CORR
CORR
END DATA.
  [data values not preserved in this transcription]

REGRESSION matrix = in(*)
 /VARIABLES Y X1 X2
 /DESCRIPTIVES
 /STATISTICS DEF TOL BCOV COLLIN TOL
 /DEPENDENT Y
 /method ENTER X1 X2.
```

[Descriptive statistics, correlations, variables entered/removed, and model summary tables for the regression of Y on X1 and X2; numeric values not preserved]

[ANOVA table, coefficients table (with tolerance and VIF collinearity statistics), coefficient correlations, and collinearity diagnostics (eigenvalues, condition indices, variance proportions); numeric values not preserved in this transcription]


### Association Between Variables

Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi

### Lecture 16. Endogeneity & Instrumental Variable Estimation (continued)

Lecture 16. Endogeneity & Instrumental Variable Estimation (continued) Seen how endogeneity, Cov(x,u) 0, can be caused by Omitting (relevant) variables from the model Measurement Error in a right hand

### 1. What is the critical value for this 95% confidence interval? CV = z.025 = invnorm(0.025) = 1.96

1 Final Review 2 Review 2.1 CI 1-propZint Scenario 1 A TV manufacturer claims in its warranty brochure that in the past not more than 10 percent of its TV sets needed any repair during the first two years

### Econ 371 Problem Set #4 Answer Sheet. P rice = (0.485)BDR + (23.4)Bath + (0.156)Hsize + (0.002)LSize + (0.090)Age (48.

Econ 371 Problem Set #4 Answer Sheet 6.5 This question focuses on what s called a hedonic regression model; i.e., where the sales price of the home is regressed on the various attributes of the home. The

### Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression

Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression Objectives: To perform a hypothesis test concerning the slope of a least squares line To recognize that testing for a

### Using Minitab for Regression Analysis: An extended example

Using Minitab for Regression Analysis: An extended example The following example uses data from another text on fertilizer application and crop yield, and is intended to show how Minitab can be used to

### Econ 371 Problem Set #3 Answer Sheet

Econ 371 Problem Set #3 Answer Sheet 4.3 In this question, you are told that a OLS regression analysis of average weekly earnings yields the following estimated model. AW E = 696.7 + 9.6 Age, R 2 = 0.023,

### Stata Walkthrough 4: Regression, Prediction, and Forecasting

Stata Walkthrough 4: Regression, Prediction, and Forecasting Over drinks the other evening, my neighbor told me about his 25-year-old nephew, who is dating a 35-year-old woman. God, I can t see them getting

### Handling missing data in Stata a whirlwind tour

Handling missing data in Stata a whirlwind tour 2012 Italian Stata Users Group Meeting Jonathan Bartlett www.missingdata.org.uk 20th September 2012 1/55 Outline The problem of missing data and a principled

### MULTIPLE REGRESSION WITH CATEGORICAL DATA

DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 86 MULTIPLE REGRESSION WITH CATEGORICAL DATA I. AGENDA: A. Multiple regression with categorical variables. Coding schemes. Interpreting

### Premaster Statistics Tutorial 4 Full solutions

Premaster Statistics Tutorial 4 Full solutions Regression analysis Q1 (based on Doane & Seward, 4/E, 12.7) a. Interpret the slope of the fitted regression = 125,000 + 150. b. What is the prediction for

### 4. Multiple Regression in Practice

30 Multiple Regression in Practice 4. Multiple Regression in Practice The preceding chapters have helped define the broad principles on which regression analysis is based. What features one should look

### Regression in Stata. Alicia Doyle Lynch Harvard-MIT Data Center (HMDC)

Regression in Stata Alicia Doyle Lynch Harvard-MIT Data Center (HMDC) Documents for Today Find class materials at: http://libraries.mit.edu/guides/subjects/data/ training/workshops.html Several formats

### Questions and Answers on Hypothesis Testing and Confidence Intervals

Questions and Answers on Hypothesis Testing and Confidence Intervals L. Magee Fall, 2008 1. Using 25 observations and 5 regressors, including the constant term, a researcher estimates a linear regression

### Lecture 13. Use and Interpretation of Dummy Variables. Stop worrying for 1 lecture and learn to appreciate the uses that dummy variables can be put to

Lecture 13. Use and Interpretation of Dummy Variables Stop worrying for 1 lecture and learn to appreciate the uses that dummy variables can be put to Using dummy variables to measure average differences

### Canonical Correlation

Chapter 400 Introduction Canonical correlation analysis is the study of the linear relations between two sets of variables. It is the multivariate extension of correlation analysis. Although we will present

### Chapter 13 Introduction to Linear Regression and Correlation Analysis

Chapter 3 Student Lecture Notes 3- Chapter 3 Introduction to Linear Regression and Correlation Analsis Fall 2006 Fundamentals of Business Statistics Chapter Goals To understand the methods for displaing

### Discussion Section 4 ECON 139/239 2010 Summer Term II

Discussion Section 4 ECON 139/239 2010 Summer Term II 1. Let s use the CollegeDistance.csv data again. (a) An education advocacy group argues that, on average, a person s educational attainment would increase

### International Statistical Institute, 56th Session, 2007: Phil Everson

Teaching Regression using American Football Scores Everson, Phil Swarthmore College Department of Mathematics and Statistics 5 College Avenue Swarthmore, PA198, USA E-mail: peverso1@swarthmore.edu 1. Introduction

### One-Way Analysis of Variance

One-Way Analysis of Variance Note: Much of the math here is tedious but straightforward. We ll skim over it in class but you should be sure to ask questions if you don t understand it. I. Overview A. We

### DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9

DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9 Analysis of covariance and multiple regression So far in this course,

### Measurement Error 2: Scale Construction (Very Brief Overview) Page 1

Measurement Error 2: Scale Construction (Very Brief Overview) Richard Williams, University of Notre Dame, http://www3.nd.edu/~rwilliam/ Last revised January 22, 2015 This handout draws heavily from Marija

### EPS 625 ANALYSIS OF COVARIANCE (ANCOVA) EXAMPLE USING THE GENERAL LINEAR MODEL PROGRAM

EPS 6 ANALYSIS OF COVARIANCE (ANCOVA) EXAMPLE USING THE GENERAL LINEAR MODEL PROGRAM ANCOVA One Continuous Dependent Variable (DVD Rating) Interest Rating in DVD One Categorical/Discrete Independent Variable

### 6: Regression and Multiple Regression

6: Regression and Multiple Regression Objectives Calculate regressions with one independent variable Calculate regressions with multiple independent variables Scatterplot of predicted and actual values

### ESTIMATING AVERAGE TREATMENT EFFECTS: IV AND CONTROL FUNCTIONS, II Jeff Wooldridge Michigan State University BGSE/IZA Course in Microeconometrics

ESTIMATING AVERAGE TREATMENT EFFECTS: IV AND CONTROL FUNCTIONS, II Jeff Wooldridge Michigan State University BGSE/IZA Course in Microeconometrics July 2009 1. Quantile Treatment Effects 2. Control Functions

### 1. The parameters to be estimated in the simple linear regression model Y=α+βx+ε ε~n(0,σ) are: a) α, β, σ b) α, β, ε c) a, b, s d) ε, 0, σ

STA 3024 Practice Problems Exam 2 NOTE: These are just Practice Problems. This is NOT meant to look just like the test, and it is NOT the only thing that you should study. Make sure you know all the material

### College Education Matters for Happier Marriages and Higher Salaries ----Evidence from State Level Data in the US

College Education Matters for Happier Marriages and Higher Salaries ----Evidence from State Level Data in the US Anonymous Authors: SH, AL, YM Contact TF: Kevin Rader Abstract It is a general consensus

### Lab 5 Linear Regression with Within-subject Correlation. Goals: Data: Use the pig data which is in wide format:

Lab 5 Linear Regression with Within-subject Correlation Goals: Data: Fit linear regression models that account for within-subject correlation using Stata. Compare weighted least square, GEE, and random

### Statistics 112 Regression Cheatsheet Section 1B - Ryan Rosario

Statistics 112 Regression Cheatsheet Section 1B - Ryan Rosario I have found that the best way to practice regression is by brute force That is, given nothing but a dataset and your mind, compute everything

### Data Mining and Data Warehousing. Henryk Maciejewski. Data Mining Predictive modelling: regression

Data Mining and Data Warehousing Henryk Maciejewski Data Mining Predictive modelling: regression Algorithms for Predictive Modelling Contents Regression Classification Auxiliary topics: Estimation of prediction

### Correlation and Simple Linear Regression

Correlation and Simple Linear Regression We are often interested in studying the relationship among variables to determine whether they are associated with one another. When we think that changes in a

### Moderation. Moderation

Stats - Moderation Moderation A moderator is a variable that specifies conditions under which a given predictor is related to an outcome. The moderator explains when a DV and IV are related. Moderation

### Bivariate Regression Analysis. The beginning of many types of regression

Bivariate Regression Analysis The beginning of many types of regression TOPICS Beyond Correlation Forecasting Two points to estimate the slope Meeting the BLUE criterion The OLS method Purpose of Regression

### 2. Simple Linear Regression

Research methods - II 3 2. Simple Linear Regression Simple linear regression is a technique in parametric statistics that is commonly used for analyzing mean response of a variable Y which changes according

### Sydney Roberts Predicting Age Group Swimmers 50 Freestyle Time 1. 1. Introduction p. 2. 2. Statistical Methods Used p. 5. 3. 10 and under Males p.

Sydney Roberts Predicting Age Group Swimmers 50 Freestyle Time 1 Table of Contents 1. Introduction p. 2 2. Statistical Methods Used p. 5 3. 10 and under Males p. 8 4. 11 and up Males p. 10 5. 10 and under

### Multiple Linear Regression in Data Mining

Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple

### Outliers Richard Williams, University of Notre Dame, http://www3.nd.edu/~rwilliam/ Last revised April 7, 2016

Outliers Richard Williams, University of Notre Dame, http://www3.nd.edu/~rwilliam/ Last revised April 7, 2016 These notes draw heavily from several sources, including Fox s Regression Diagnostics; Pindyck

### Ridge Regression. Chapter 335. Introduction. Multicollinearity. Effects of Multicollinearity. Sources of Multicollinearity

Chapter 335 Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates are unbiased, but their variances

### " Y. Notation and Equations for Regression Lecture 11/4. Notation:

Notation: Notation and Equations for Regression Lecture 11/4 m: The number of predictor variables in a regression Xi: One of multiple predictor variables. The subscript i represents any number from 1 through

### C2.1. (i) (5 marks) The average participation rate is , the average match rate is summ prate

BOSTON COLLEGE Department of Economics EC 228 01 Econometric Methods Fall 2008, Prof. Baum, Ms. Phillips (tutor), Mr. Dmitriev (grader) Problem Set 2 Due at classtime, Thursday 2 Oct 2008 2.4 (i)(5 marks)

### Cointegration and the ECM

Cointegration and the ECM Two nonstationary time series are cointegrated if they tend to move together through time. For instance, we have established that the levels of the Fed Funds rate and the 3-year

### 11. Analysis of Case-control Studies Logistic Regression

Research methods II 113 11. Analysis of Case-control Studies Logistic Regression This chapter builds upon and further develops the concepts and strategies described in Ch.6 of Mother and Child Health:

### 2. Linear regression with multiple regressors

2. Linear regression with multiple regressors Aim of this section: Introduction of the multiple regression model OLS estimation in multiple regression Measures-of-fit in multiple regression Assumptions

### Chapter 7: Simple linear regression Learning Objectives

Chapter 7: Simple linear regression Learning Objectives Reading: Section 7.1 of OpenIntro Statistics Video: Correlation vs. causation, YouTube (2:19) Video: Intro to Linear Regression, YouTube (5:18) -

### DETERMINANTS OF CAPITAL ADEQUACY RATIO IN SELECTED BOSNIAN BANKS

DETERMINANTS OF CAPITAL ADEQUACY RATIO IN SELECTED BOSNIAN BANKS Nađa DRECA International University of Sarajevo nadja.dreca@students.ius.edu.ba Abstract The analysis of a data set of observation for 10

### Part 2: Analysis of Relationship Between Two Variables

Part 2: Analysis of Relationship Between Two Variables Linear Regression Linear correlation Significance Tests Multiple regression Linear Regression Y = a X + b Dependent Variable Independent Variable

### INTERPRETING THE REPEATED-MEASURES ANOVA

INTERPRETING THE REPEATED-MEASURES ANOVA USING THE SPSS GENERAL LINEAR MODEL PROGRAM RM ANOVA In this scenario (based on a RM ANOVA example from Leech, Barrett, and Morgan, 2005) each of 12 participants

### Regression Analysis: A Complete Example

Regression Analysis: A Complete Example This section works out an example that includes all the topics we have discussed so far in this chapter. A complete example of regression analysis. PhotoDisc, Inc./Getty

### e = random error, assumed to be normally distributed with mean 0 and standard deviation σ

1 Linear Regression 1.1 Simple Linear Regression Model The linear regression model is applied if we want to model a numeric response variable and its dependency on at least one numeric factor variable.

### The Philosophy of Hypothesis Testing, Questions and Answers 2006 Samuel L. Baker

HYPOTHESIS TESTING PHILOSOPHY 1 The Philosophy of Hypothesis Testing, Questions and Answers 2006 Samuel L. Baker Question: So I'm hypothesis testing. What's the hypothesis I'm testing? Answer: When you're

### Introduction to Quantitative Methods

Introduction to Quantitative Methods October 15, 2009 Contents 1 Definition of Key Terms 2 2 Descriptive Statistics 3 2.1 Frequency Tables......................... 4 2.2 Measures of Central Tendencies.................

### Multiple Regression in SPSS STAT 314

Multiple Regression in SPSS STAT 314 I. The accompanying data is on y = profit margin of savings and loan companies in a given year, x 1 = net revenues in that year, and x 2 = number of savings and loan

### MISSING DATA TECHNIQUES WITH SAS. IDRE Statistical Consulting Group

MISSING DATA TECHNIQUES WITH SAS IDRE Statistical Consulting Group ROAD MAP FOR TODAY To discuss: 1. Commonly used techniques for handling missing data, focusing on multiple imputation 2. Issues that could