UNDERSTANDING MULTIPLE REGRESSION

Multiple regression analysis (MRA) is any of several related statistical methods for evaluating the effects of more than one independent (or predictor) variable on a dependent (or outcome) variable. Since MRA can handle all ANOVA problems (but the reverse is not true), some researchers prefer to use MRA exclusively. MRA answers two main questions: (1) What is the effect (as measured by a regression coefficient) on a dependent variable (DV) of a one-unit change in an independent variable (IV), while controlling for the effects of all the other IVs? (2) What is the total effect (as measured by R²) on the DV of all the IVs taken together? Recall that simple linear (bivariate) regression, primarily used for prediction, has a single independent (predictor) variable (X) and a single dependent (criterion) variable (Y). In bivariate linear regression, a straight line is fitted to the scatterplot of points; the equation of this line, called the regression equation, has the form Ŷ = bX + a, where Ŷ (Y hat) is the predicted value of the criterion variable for a given value on the predictor variable. The regression coefficient (b) is the slope of the line, and the regression constant (a) is the Y intercept. In multiple linear regression, we again have a single criterion variable (Y), but scores on it are predicted using k predictor variables (X1, X2, ..., Xk), where k ≥ 2. These predictor variables are combined into an equation, called the multiple regression equation, which can be used to predict scores on the criterion variable (Ŷ) from scores on the predictor variables (the Xs). The general form of this equation is: Ŷ = b1X1 + b2X2 + ... + bkXk + a, where the bs are the regression coefficients for the respective predictor variables and a is the regression constant.
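As a concrete sketch, the multiple regression equation can be estimated by ordinary least squares; here with NumPy on fabricated data (the coefficient values 2.0 and -1.5 and the constant 4.0 are invented for the demonstration):

```python
import numpy as np

# Fabricated data: two predictors and one criterion built from known coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 4.0 + rng.normal(scale=0.1, size=50)

# Append a column of ones so the solver also estimates the constant a
design = np.column_stack([X, np.ones(len(X))])
coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
b1, b2, a = coeffs

y_hat = design @ coeffs  # predicted scores (Y hat) for each case
```

With so little noise, the estimated b1, b2, and a land very close to the values the data were built from.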
SELECTING PREDICTOR VARIABLES

Predictor variables need to serve a specific purpose in the regression analysis; that is, their selection should be supported in the literature. While researcher curiosity may play a part in selecting predictor variables, a rationale (justification) for inclusion should be provided, with the best rationale being grounded in the applicable literature on the topic area. In selecting predictor variables to use in multiple regression, we want to maximize the amount of explained variance (R²) and reduce error (improve accuracy). Our predictor variables should be chosen carefully: the predictor variables (Xs) should be highly correlated with the criterion variable (Y), but should have low correlations with one another (see multicollinearity below). A suppressor variable is one that changes the R² but has a low correlation with the criterion variable and a high correlation with other predictor variables.

NUMBER OF PREDICTOR VARIABLES

Including more than 5 or 6 predictor variables in a regression analysis rarely produces a substantial increase in the multiple R. An adjusted R² is used when there are numerous predictor variables and a small number of observations (n). In studies where the number of observations is large (n > 100) and there are only 5 or 6 predictor variables, this is not a major concern.
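One common form of the adjusted R² is 1 − (1 − R²)(n − 1)/(n − k − 1), which shrinks R² more when k is large relative to n; a minimal sketch (the R², n, and k values are arbitrary):

```python
def adjusted_r_squared(r2, n, k):
    """Shrink R^2 to adjust for k predictors estimated from n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# The adjustment matters most when n is small relative to k
small_n = adjusted_r_squared(0.50, n=30, k=6)    # shrinks noticeably, to about 0.37
large_n = adjusted_r_squared(0.50, n=300, k=6)   # barely changes, about 0.49
```

This illustrates the point in the text: with n > 100 and only 5 or 6 predictors, the adjustment is minor.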

SCREENING DATA PRIOR TO ANALYSIS

As with any statistical analysis, one should always conduct an initial screening of the data. One of the more common concerns is simple data entry error, for example, entering an incorrect value for a variable or omitting a value that should have been entered. The researcher should always double-check the data entry to avoid unnecessary (and completely avoidable) errors. Missing or potentially influential data can be handled in multiple ways: for example, the case(s) can be deleted (typically only if they account for less than 5% of the total sample), transformed, or substituted using one of many options (see, for example, Tabachnick & Fidell, 2001). Data can be screened as ungrouped or grouped. With ungrouped data screening, we typically run frequency distributions and include histograms (with a normal curve superimposed), measures of central tendency, and measures of dispersion. This screening process allows us to check for potential outliers as well as data entry errors. We can also create scatterplots to check for linearity and homoscedasticity. Detection of multivariate outliers is typically done through regression, checking for Mahalanobis distance values of concern and conducting a collinearity diagnosis (discussed in more detail below). Data can also be screened as grouped data. These procedures are similar to those for ungrouped data, except that each group is analyzed separately (e.g., using a split-file option). Here again, descriptive statistics are examined, as well as histograms and plots.

MULTICOLLINEARITY

When there are moderate to high intercorrelations (e.g., r > +.90) among the predictors, the problem is referred to as multicollinearity.
The extreme form of multicollinearity is singularity, when a perfect linear relationship exists between variables, or, in terms of correlation, when r = 1.00. Multicollinearity poses a real problem for the researcher using multiple regression for three reasons:

1. It severely limits the size of R, because the predictors are going after much of the same variance on y.
2. It makes determining the importance of a given predictor difficult, because the effects of the predictors are confounded due to the correlations among them.
3. It increases the variances of the regression coefficients. The greater these variances, the more unstable the prediction equation will be.

The following are three common methods for diagnosing multicollinearity:

1. Examine the variance inflation factors (VIF) for the predictors. The quantity VIF_j = 1/(1 − R²_j) is called the jth variance inflation factor, where R²_j is the squared multiple correlation for predicting the jth predictor from all the other predictors. The VIF for a predictor indicates whether there is a strong linear association between it and all the remaining predictors. It is distinctly possible for a predictor to have only moderate and/or relatively weak associations with the other predictors in terms of simple correlations, and yet have a quite high R² when regressed on all the other predictors.
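A VIF can be computed straight from its definition by regressing each predictor on the others; a sketch in NumPy on fabricated data, where x3 is built to be nearly collinear with x1:

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), with R^2_j from regressing
    predictor j on all the other predictors (intercept included)."""
    n, k = X.shape
    out = []
    for j in range(k):
        others = np.delete(X, j, axis=1)
        design = np.column_stack([others, np.ones(n)])
        coef, *_ = np.linalg.lstsq(design, X[:, j], rcond=None)
        resid = X[:, j] - design @ coef
        r2_j = 1 - resid.var() / X[:, j].var()
        out.append(1 / (1 - r2_j))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
v = vif(np.column_stack([x1, x2, x3]))
```

Here the VIFs for x1 and x3 blow up well past the 10 cutoff discussed below, while the VIF for the independent x2 stays near 1.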

When is the value of the variance inflation factor large enough to cause concern? As indicated by Stevens (2002), while there is no set rule of thumb on numerical values, it is generally believed that if any VIF exceeds 10, there is reason for at least some concern; in that case one should consider variable deletion or an alternative to least squares estimation. The variance inflation factors are easily obtained from SPSS or other statistical packages.

2. Examine the tolerance value, which refers to the degree to which one predictor can itself be predicted by the other predictors in the model. Tolerance is defined as 1 − R². The higher the value of tolerance, the less overlap there is with the other variables and the more useful the predictor is to the analysis; the smaller the tolerance value, the higher the degree of collinearity. While target values are debated, a tolerance value of .50 or higher is generally considered acceptable (Tabachnick & Fidell, 2001); some statisticians accept a value as low as .20 before being concerned. Tolerance tells us two things. First, it tells us the degree of overlap among the predictors, helping us to see which predictors have information in common and which are relatively independent. Just because two variables substantially overlap in their information is not reason enough to eliminate one of them. Second, the tolerance statistic alerts us to potential problems of instability in our model. With very low levels of tolerance, the stability of the model, and sometimes even the accuracy of the arithmetic, can be in danger. In the extreme case, where one predictor can be perfectly predicted from the others, we will have what is called a singular covariance (or correlation) matrix, and most programs will stop without generating a model.
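Because tolerance is simply 1 − R² for a predictor regressed on the others (equivalently, the reciprocal of its VIF), the guidelines above are easy to apply; the R² values here are made up for illustration:

```python
def tolerance(r2_j):
    """Tolerance for predictor j: 1 - R^2 from regressing j on the other predictors."""
    return 1.0 - r2_j

weak_overlap = tolerance(0.30)    # 0.70 -- comfortably above the .50 guideline
heavy_overlap = tolerance(0.80)   # 0.20 -- at the level some statisticians still accept
```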
If you see a statement in your SPSS printout that says the matrix is singular or not positive-definite, the most likely explanation is that one predictor has a tolerance of 0.00 and is perfectly correlated with other variables. In this case, you will have to drop at least one predictor to break up that relationship. Such a relationship most frequently occurs when one predictor is the simple sum or average of the others. Most multiple-regression programs (e.g., SPSS) have default values for tolerance (1 − SMC, where SMC is the squared multiple correlation, or R²) that protect the user against inclusion of multicollinear IVs (predictors). If the programs' default values are in place, IVs that are very highly correlated with IVs already in the equation are not entered. This makes sense both statistically and logically: such IVs threaten the analysis due to inflation of regression coefficients, and they are not needed due to their correlation with the other IVs.

3. Examine the condition index, which is a measure of the tightness or dependency of one variable on the others. The condition index is monotonic with SMC, but not linear with it. A high condition index (values of 30 or greater) is associated with variance inflation in the standard error of the parameter estimate of the variable. When its standard error becomes very large, the parameter estimate is highly uncertain. If we identify a concern from the condition index, Tabachnick and Fidell (2001) suggest looking at the variance proportions: we do not want two or more variance proportions greater than .50 on the same dimension. This consideration does not include the column labeled constant, which is the Y intercept (the value of the dependent variable when the independent variables are all zero).
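A common construction of condition indices (assumed here) takes the singular values of the design matrix after scaling each column, constant included, to unit length; the near-duplicate predictor below is fabricated:

```python
import numpy as np

def condition_indices(X):
    """Ratio of the largest singular value to each singular value of the
    design matrix (constant included), after scaling columns to unit length."""
    design = np.column_stack([X, np.ones(len(X))])
    design = design / np.linalg.norm(design, axis=0)
    s = np.linalg.svd(design, compute_uv=False)
    return s.max() / s

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)  # nearly a copy of x1

ci = condition_indices(np.column_stack([x1, x2]))
```

With the near-duplicate predictor, the largest condition index lands far above the 30 threshold mentioned above.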

There are several ways of combating multicollinearity:

1. Combine predictors that are highly correlated.
2. If one initially has a fairly large set of predictors, consider doing a principal components analysis (a type of factor analysis) to reduce it to a much smaller set of predictors.
3. Consider deleting (omitting) a variable that is highly correlated with another variable or variables.
4. Consider a technique called ridge regression, an alternative to OLS (ordinary least squares) estimation of the regression coefficients that is intended to reduce the problems in regression analysis associated with multicollinearity.

Checking Assumptions for the Regression Model

Recall that the linear regression model assumes that the errors are independent and follow a normal distribution with constant variance. The normality assumption can be checked through a histogram of the standardized or studentized residuals. The independence assumption implies that the subjects are responding independently of one another; this is an important assumption in that it directly affects the Type I error rate.

Residual Plots

Various plots are available for assessing potential problems with the regression model. One of the most useful graphs is the standardized residuals (r_i) versus the predicted values (ŷ_i). If the assumptions of the linear regression model are tenable, the standardized residuals should scatter randomly about the horizontal line defined by r_i = 0. Any systematic pattern or clustering of the residuals suggests a model violation; what we are looking for is a distinctive pattern and/or a curvilinear relationship. If nonlinearity or nonconstant variance is found, there are various remedies. For nonlinearity, perhaps a polynomial model is needed, or sometimes a transformation of the data will enable a nonlinear model to be approximated by a linear one.
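Remedy 4 can be sketched from the closed form of the ridge estimator, b = (X'X + λI)⁻¹X'y on centered data; the collinear data and the choice λ = 10 are invented for illustration:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimates (X'X + lam*I)^-1 X'y on centered data, leaving the
    intercept unpenalized; lam = 0 reduces to ordinary least squares."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    b = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ yc)
    a = y.mean() - X.mean(axis=0) @ b
    return b, a

rng = np.random.default_rng(3)
x1 = rng.normal(size=60)
x2 = x1 + rng.normal(scale=0.05, size=60)   # nearly collinear predictors
y = x1 + x2 + rng.normal(scale=0.5, size=60)
X = np.column_stack([x1, x2])

b_ols, _ = ridge(X, y, lam=0.0)
b_ridge, _ = ridge(X, y, lam=10.0)  # shrunken, more stable coefficients
```

The penalty trades a little bias for a large reduction in the variance of the individual coefficients, which is exactly the instability multicollinearity creates.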
For nonconstant variance, weighted least squares is one possibility, or, more commonly, a variance-stabilizing transformation (such as square root or log) may be used.

Outliers and Influential Data Points

Since multiple regression is a mathematical maximization procedure, it can be very sensitive to data points that split off or are different from the rest of the points, that is, to outliers. Just one or two such points can affect the interpretation of results, and it is certainly moot whether one or two points should be permitted to have such a profound influence. Therefore, it is important to be able to detect outliers and influential data points. There is a distinction between the two, because a point that is an outlier (either on y or on the predictors) will not necessarily be influential in affecting the regression equation.

Data Editing

Outliers and influential cases can occur because of recording errors. Consequently, researchers should give more consideration to the data-editing phase of the data analysis process (i.e., always listing the data and examining the list for possible errors). There are many possible sources of error from the initial data collection to the final data entry. Take care to review your data for incorrect entries (e.g., missing or excessive numbers, transposed numbers, etc.).

Measuring Outliers on y

For finding subjects whose predicted scores are quite different from their actual y scores (i.e., they do not fit the model well), the standardized residuals (r_i) can be used. If the model is correct, they have a normal distribution with a mean of 0 and a standard deviation of 1. Thus, about 95% of the r_i should lie within two standard deviations of the mean and about 99% within three standard deviations. Therefore, any standardized residual greater than about 3 in absolute value is unusual and should be carefully examined (Stevens, 2002). Tabachnick and Fidell (2001) suggest evaluating standardized residuals at an alpha level of .001, which gives a critical value of ±3.29.

Measuring Outliers on the Set of Predictors

The hat elements (h_ii) can be used here. It can be shown that the hat elements lie between 0 and 1 and that the average hat element is p/n, where p = k + 1. Because of this, Stevens suggests that 3p/n be used as a cutoff: any hat element (also called leverage) greater than 3p/n should be carefully examined. This is a very simple and useful rule of thumb for quickly identifying subjects who are very different from the rest of the sample on the set of predictors.

Measuring Influential Data Points

An influential data point is one whose deletion produces a substantial change in at least one of the regression coefficients; that is, the prediction equations with and without the influential point are quite different.
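The leverage screen described above (hat elements against the 3p/n cutoff) can be sketched as follows; the data are fabricated, with one case planted far from the predictor centroid:

```python
import numpy as np

def hat_diagonals(X):
    """Diagonal of the hat matrix H = X (X'X)^-1 X' for a design with intercept."""
    design = np.column_stack([np.ones(len(X)), X])
    H = design @ np.linalg.inv(design.T @ design) @ design.T
    return np.diag(H)

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 2))
X[0] = [6.0, -6.0]  # one case far from the rest on the predictors

h = hat_diagonals(X)
p = X.shape[1] + 1          # k predictors plus the constant
cutoff = 3 * p / len(X)     # Stevens' 3p/n rule of thumb
flagged = np.where(h > cutoff)[0]
```

Note that the hat elements sum to p (the trace of the hat matrix), which is why their average is p/n.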
Cook's distance is very useful for identifying influential data points. It measures the combined influence of a case's being an outlier on y and on the set of predictors. As indicated by Stevens (2002), a Cook's distance > 1 would generally be considered large. This provides a red flag, when examining the computer printout, for identifying influential points.

Mahalanobis Distance

Mahalanobis distance (D²) indicates how far a case is from the centroid of all cases on the predictor variables. A large distance indicates an observation that is an outlier on the predictors. How large must D² be before one can say that a case is significantly separated from the rest of the data at the .01 or .05 level of significance? If it is tenable that the predictors came from a multivariate normal population, then the values can be compared to an established table of critical values. If n is moderately large (50 or more), then D² is approximately proportional to h_ii; specifically, D² ≈ (n − 1)h_ii can be used.
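Cook's distance can be computed from the standard formula D_i = e_i² h_ii / (p · MSE · (1 − h_ii)²), which combines the residual (outlyingness on y) with the leverage (outlyingness on the predictors); the data below are fabricated, with one planted influential case:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each case in a regression with an intercept."""
    n = len(y)
    design = np.column_stack([np.ones(n), X])
    p = design.shape[1]
    H = design @ np.linalg.inv(design.T @ design) @ design.T
    h = np.diag(H)
    resid = y - H @ y                     # raw residuals
    mse = resid @ resid / (n - p)         # residual mean square
    return resid**2 * h / (mse * p * (1 - h) ** 2)

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.5, size=50)
X[0], y[0] = [5.0, 5.0], -10.0   # extreme on the predictors AND badly misfit on y

d = cooks_distance(X, y)
```

The planted case is both a predictor outlier and a y outlier, so its Cook's distance stands far above the > 1 flag, while ordinary cases sit near zero.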

Tabachnick and Fidell (2001) suggest using a chi-square critical values table as a means of detecting whether a case is a multivariate outlier, using a criterion of α = .001 with df equal to the number of independent variables. According to Tabachnick and Fidell, we do not use N − 1 for df because the Mahalanobis distance is evaluated with degrees of freedom equal to the number of variables (p. 93). Thus, all Mahalanobis distance values must be examined to see whether they exceed the established critical value.

Options for Dealing with Outliers

1. Delete a variable that may be responsible for many outliers, especially if it is highly correlated with other variables in the analysis. If you decide that cases with extreme scores are not part of the population you sampled, then delete them; if the case(s) represent(s) 5% or less of the total sample size, Tabachnick and Fidell (2001) suggest that deletion becomes a viable option.
2. If cases with extreme scores are considered part of the population you sampled, then one way to reduce the influence of a univariate outlier is to transform the variable to change the shape of the distribution to be more normal. As Tukey put it, you are merely reexpressing what the data have to say in other terms (Howell, 2002).
3. Another strategy for dealing with a univariate outlier is to assign the outlying case(s) a raw score on the offending variable that is one unit larger (or smaller) than the next most extreme score in the distribution (Tabachnick & Fidell, 2001, p. 71).
4. Univariate transformations and score alterations often help reduce the impact of multivariate outliers, but such cases can still be problems; they are usually deleted (Tabachnick & Fidell, 2001). All transformations, changes to scores, and deletions are reported in the results section with the rationale and with citations.
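This chi-square screen for multivariate outliers can be sketched with NumPy and SciPy; the data are fabricated, with one planted outlier:

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_sq(X):
    """Squared Mahalanobis distance of each case from the centroid of X."""
    d = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum("ij,jk,ik->i", d, S_inv, d)

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
X[0] = [5.0, 5.0, -5.0]          # planted multivariate outlier

D2 = mahalanobis_sq(X)
crit = chi2.ppf(1 - 0.001, df=X.shape[1])  # alpha = .001, df = number of IVs
outliers = np.where(D2 > crit)[0]
```

With three predictors the α = .001 critical value is about 16.27, and the planted case is the one flagged.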
Summary

In summary, examination of the standardized residuals will detect y outliers, and the hat elements or the Mahalanobis distances will detect outliers on the predictors. Such outliers will not necessarily be influential data points. To determine which outliers are influential data points, find those points whose Cook's distances are > 1. Points flagged as influential by Cook's distance need to be examined carefully to determine whether they should be deleted from the analysis; if there is reason to believe that these cases arise from a process different from that for the rest of the data, then the cases should be deleted. If a data point is a significant outlier on y but its Cook's distance is < 1, there is no real need to delete the point, since it does not have a large effect on the regression analysis. However, one should still be interested in studying such points further to understand why they did not fit the model.

References

Howell, D. C. (2002). Statistical methods for psychology (5th ed.). Pacific Grove, CA: Duxbury.
Stevens, J. P. (2002). Applied multivariate statistics for the social sciences (4th ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston, MA: Allyn and Bacon.

[Table: Critical values for an outlier on the predictors as judged by Mahalanobis D², for k = 2 through 5 predictors at the 5% and 1% significance levels; the tabled values are not reproduced here. Source: Stevens (2002, p. 133).]


Correlation and Regression

Correlation and Regression Scatterplots Correlation Explanatory and response variables Simple linear regression General Principles of Data Analysis First plot the data, then add numerical summaries Look

Moderator and Mediator Analysis

Moderator and Mediator Analysis Seminar General Statistics Marijtje van Duijn October 8, Overview What is moderation and mediation? What is their relation to statistical concepts? Example(s) October 8,

Introduction to Linear Regression

14. Regression A. Introduction to Simple Linear Regression B. Partitioning Sums of Squares C. Standard Error of the Estimate D. Inferential Statistics for b and r E. Influential Observations F. Regression

Diagrams and Graphs of Statistical Data

Diagrams and Graphs of Statistical Data One of the most effective and interesting alternative way in which a statistical data may be presented is through diagrams and graphs. There are several ways in

Session 7 Bivariate Data and Analysis

Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares

hp calculators HP 50g Trend Lines The STAT menu Trend Lines Practice predicting the future using trend lines

The STAT menu Trend Lines Practice predicting the future using trend lines The STAT menu The Statistics menu is accessed from the ORANGE shifted function of the 5 key by pressing Ù. When pressed, a CHOOSE

Introduction to Linear Regression

14. Regression A. Introduction to Simple Linear Regression B. Partitioning Sums of Squares C. Standard Error of the Estimate D. Inferential Statistics for b and r E. Influential Observations F. Regression

Homework 11. Part 1. Name: Score: / null

Name: Score: / Homework 11 Part 1 null 1 For which of the following correlations would the data points be clustered most closely around a straight line? A. r = 0.50 B. r = -0.80 C. r = 0.10 D. There is

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

e = random error, assumed to be normally distributed with mean 0 and standard deviation σ

1 Linear Regression 1.1 Simple Linear Regression Model The linear regression model is applied if we want to model a numeric response variable and its dependency on at least one numeric factor variable.

January 26, 2009 The Faculty Center for Teaching and Learning

THE BASICS OF DATA MANAGEMENT AND ANALYSIS A USER GUIDE January 26, 2009 The Faculty Center for Teaching and Learning THE BASICS OF DATA MANAGEMENT AND ANALYSIS Table of Contents Table of Contents... i

Correlation key concepts:

CORRELATION Correlation key concepts: Types of correlation Methods of studying correlation a) Scatter diagram b) Karl pearson s coefficient of correlation c) Spearman s Rank correlation coefficient d)

Econometrics Simple Linear Regression

Econometrics Simple Linear Regression Burcu Eke UC3M Linear equations with one variable Recall what a linear equation is: y = b 0 + b 1 x is a linear equation with one variable, or equivalently, a straight

This chapter will demonstrate how to perform multiple linear regression with IBM SPSS

CHAPTER 7B Multiple Regression: Statistical Methods Using IBM SPSS This chapter will demonstrate how to perform multiple linear regression with IBM SPSS first using the standard method and then using the

Regression, least squares

Regression, least squares Joe Felsenstein Department of Genome Sciences and Department of Biology Regression, least squares p.1/24 Fitting a straight line X Two distinct cases: The X values are chosen

X X X a) perfect linear correlation b) no correlation c) positive correlation (r = 1) (r = 0) (0 < r < 1)

CORRELATION AND REGRESSION / 47 CHAPTER EIGHT CORRELATION AND REGRESSION Correlation and regression are statistical methods that are commonly used in the medical literature to compare two or more variables.

Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression

Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression Objectives: To perform a hypothesis test concerning the slope of a least squares line To recognize that testing for a

Factor Analysis. Chapter 420. Introduction

Chapter 420 Introduction (FA) is an exploratory technique applied to a set of observed variables that seeks to find underlying factors (subsets of variables) from which the observed variables were generated.

Simple Predictive Analytics Curtis Seare

Using Excel to Solve Business Problems: Simple Predictive Analytics Curtis Seare Copyright: Vault Analytics July 2010 Contents Section I: Background Information Why use Predictive Analytics? How to use

Assumptions. Assumptions of linear models. Boxplot. Data exploration. Apply to response variable. Apply to error terms from linear model

Assumptions Assumptions of linear models Apply to response variable within each group if predictor categorical Apply to error terms from linear model check by analysing residuals Normality Homogeneity

TRINITY COLLEGE. Faculty of Engineering, Mathematics and Science. School of Computer Science & Statistics

UNIVERSITY OF DUBLIN TRINITY COLLEGE Faculty of Engineering, Mathematics and Science School of Computer Science & Statistics BA (Mod) Enter Course Title Trinity Term 2013 Junior/Senior Sophister ST7002

Course Text. Required Computing Software. Course Description. Course Objectives. StraighterLine. Business Statistics

Course Text Business Statistics Lind, Douglas A., Marchal, William A. and Samuel A. Wathen. Basic Statistics for Business and Economics, 7th edition, McGraw-Hill/Irwin, 2010, ISBN: 9780077384470 [This

Course Objective This course is designed to give you a basic understanding of how to run regressions in SPSS.

SPSS Regressions Social Science Research Lab American University, Washington, D.C. Web. www.american.edu/provost/ctrl/pclabs.cfm Tel. x3862 Email. SSRL@American.edu Course Objective This course is designed

Association Between Variables

Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi

The Big Picture. Correlation. Scatter Plots. Data

The Big Picture Correlation Bret Hanlon and Bret Larget Department of Statistics Universit of Wisconsin Madison December 6, We have just completed a length series of lectures on ANOVA where we considered

A Review of Statistical Outlier Methods

Page 1 of 5 A Review of Statistical Outlier Methods Nov 2, 2006 By: Steven Walfish Pharmaceutical Technology Statistical outlier detection has become a popular topic as a result of the US Food and Drug

Bivariate Regression Analysis. The beginning of many types of regression

Bivariate Regression Analysis The beginning of many types of regression TOPICS Beyond Correlation Forecasting Two points to estimate the slope Meeting the BLUE criterion The OLS method Purpose of Regression

AP Statistics 2012 Scoring Guidelines

AP Statistics 2012 Scoring Guidelines The College Board The College Board is a mission-driven not-for-profit organization that connects students to college success and opportunity. Founded in 1900, the

Introduction to Quantitative Methods

Introduction to Quantitative Methods October 15, 2009 Contents 1 Definition of Key Terms 2 2 Descriptive Statistics 3 2.1 Frequency Tables......................... 4 2.2 Measures of Central Tendencies.................

Multiple Regression in SPSS This example shows you how to perform multiple regression. The basic command is regression : linear.

Multiple Regression in SPSS This example shows you how to perform multiple regression. The basic command is regression : linear. In the main dialog box, input the dependent variable and several predictors.

Dimensionality Reduction: Principal Components Analysis

Dimensionality Reduction: Principal Components Analysis In data mining one often encounters situations where there are a large number of variables in the database. In such situations it is very likely

Canonical Correlation Analysis

Canonical Correlation Analysis LEARNING OBJECTIVES Upon completing this chapter, you should be able to do the following: State the similarities and differences between multiple regression, factor analysis,

PROPERTIES OF THE SAMPLE CORRELATION OF THE BIVARIATE LOGNORMAL DISTRIBUTION

PROPERTIES OF THE SAMPLE CORRELATION OF THE BIVARIATE LOGNORMAL DISTRIBUTION Chin-Diew Lai, Department of Statistics, Massey University, New Zealand John C W Rayner, School of Mathematics and Applied Statistics,

Using JMP with a Specific

1 Using JMP with a Specific Example of Regression Ying Liu 10/21/ 2009 Objectives 2 Exploratory data analysis Simple liner regression Polynomial regression How to fit a multiple regression model How to

Statistics and research

Statistics and research Usaneya Perngparn Chitlada Areesantichai Drug Dependence Research Center (WHOCC for Research and Training in Drug Dependence) College of Public Health Sciences Chulolongkorn University,

Linear Models in STATA and ANOVA

Session 4 Linear Models in STATA and ANOVA Page Strengths of Linear Relationships 4-2 A Note on Non-Linear Relationships 4-4 Multiple Linear Regression 4-5 Removal of Variables 4-8 Independent Samples

CHAPTER 13 SIMPLE LINEAR REGRESSION. Opening Example. Simple Regression. Linear Regression

Opening Example CHAPTER 13 SIMPLE LINEAR REGREION SIMPLE LINEAR REGREION! Simple Regression! Linear Regression Simple Regression Definition A regression model is a mathematical equation that descries the

Unit 9 Describing Relationships in Scatter Plots and Line Graphs

Unit 9 Describing Relationships in Scatter Plots and Line Graphs Objectives: To construct and interpret a scatter plot or line graph for two quantitative variables To recognize linear relationships, non-linear

Age to Age Factor Selection under Changing Development Chris G. Gross, ACAS, MAAA

Age to Age Factor Selection under Changing Development Chris G. Gross, ACAS, MAAA Introduction A common question faced by many actuaries when selecting loss development factors is whether to base the selected

Numerical Summarization of Data OPRE 6301

Numerical Summarization of Data OPRE 6301 Motivation... In the previous session, we used graphical techniques to describe data. For example: While this histogram provides useful insight, other interesting

Simple Linear Regression, Scatterplots, and Bivariate Correlation

1 Simple Linear Regression, Scatterplots, and Bivariate Correlation This section covers procedures for testing the association between two continuous variables using the SPSS Regression and Correlate analyses.

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions.

Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course
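The multiple regression equation introduced above, Ŷ = b1X1 + b2X2 + … + bkXk + a, can be illustrated with a small two-predictor sketch. The data and the function name below are made up for illustration; the coefficients are found by ordinary least squares, solving the normal equations (AᵀA)b = Aᵀy with a small Gaussian elimination.

```python
# Minimal ordinary-least-squares fit for Y-hat = b1*X1 + b2*X2 + a.
# Hypothetical example data; the function name is illustrative only.

def fit_multiple_regression(x1, x2, y):
    # Design matrix rows: [x1_i, x2_i, 1] (the trailing 1 carries the constant a).
    rows = [[u, v, 1.0] for u, v in zip(x1, x2)]
    # Normal equations: (A^T A) coefs = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 augmented system.
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    # Back substitution recovers [b1, b2, a].
    coefs = [0.0] * 3
    for i in reversed(range(3)):
        coefs[i] = (m[i][3] - sum(m[i][j] * coefs[j]
                                  for j in range(i + 1, 3))) / m[i][i]
    return coefs

x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 6]
y = [7, 8, 13, 14, 19]   # generated as 2*X1 + 1*X2 + 3, so the fit is exact
b1, b2, a = fit_multiple_regression(x1, x2, y)
```

Because the Y scores here were generated exactly from the predictors, the recovered coefficients match the generating values (b1 = 2, b2 = 1, a = 3); with real data the coefficients would instead minimize the sum of squared residuals. Statistical packages (SPSS, R, etc.) perform the same computation internally.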