# 2. Simple Linear Regression

Research methods - II

Simple linear regression is a technique in parametric statistics that is commonly used for analysing the mean response of a variable Y as it changes with the magnitude of an explanatory variable X. It forms the basis of one of the more important forms of inferential statistical analysis. For example, if we were to perform anthropometric measurements on a group of newborns, we might wish to study how one measurement, say mid-arm circumference, is related to body weight. Once such a relationship is precisely established, it enables us to make a judgement about the body weight of an individual infant if we know the mid-arm circumference.

Definition of terms. In regression analysis there is usually an independent (or explanatory, or predictor) variable and a dependent (or outcome, or response) variable. (These terms are used interchangeably and are often a source of confusion. Henceforth we refer only to explanatory and response variables.) The effects of one or more explanatory variables combine to determine the response variable. In our example of body parameters in the newborn we may take mid-arm circumference as the explanatory variable and weight as the response variable. In observational studies the researcher can only observe the explanatory variable and use it to predict the response variable. For example, in a study of social class and child mortality the observer can only measure the different attributes which make up social class (e.g. father's occupation, housing, income, family size, mother's education, and so on) and use that information to predict child mortality. In intervention studies, on the other hand, the researcher is able to manipulate the explanatory variable (e.g. dose of a drug) and thereby predict the outcome with greater certainty. In the first case it is only possible to identify an association; in the second, a possible causal link.
The relationship between two variables is best observed by means of a scatter plot. A straight line is then drawn which provides the best estimate of the observed trend; in other words, the line describes the relationship in the best possible manner. Even then, for any given value of X there is variability in the values of Y, because of the inherent variability between individuals. The line drawn is therefore the line of means: it expresses the mean of all values of Y corresponding to a given value of X.

Fig. 2.1 The regression line (showing the intercept on the Y-axis and the slope)

Where the line of means cuts the Y-axis we get the intercept. The intercept is the value of Y corresponding to X = 0; its units are the units of the Y variable. The line also has a slope. The slope is a measure of the change in the value of Y corresponding to a unit change in the value of X. Around the line of means there is variability in the value of Y for any given value of X. This variability is a factor in determining how useful the regression line is for predicting Y at a given value of X. The above description provides the assumptions on which simple regression analysis is based, summarized below:

1. The mean value of the dependent variable Y increases or decreases linearly as the value of the independent variable X increases or decreases. Put simply, there is a linear relationship between X and Y.

2. For a given value of the independent variable X, the corresponding values of the dependent variable Y are distributed Normally. The mean value of this distribution falls on the regression line.

3. The standard deviation of the values of the dependent variable Y at any given value of the independent variable X is the same for all values of X. In other words, the variability in the Y values is the same for all values of X.

4. The deviations of all values of Y from the line of means are independent; i.e. the deviation in any one value of Y has no effect on the other values of Y for any given value of X. The observations are independent, and there is only one pair of observations per subject.

It is clear that the line of means is an important parameter. Its mathematical representation Y = α + βX is called the regression equation, and α and β are the regression coefficients. The closer the regression line comes to all the points on the scatter plot, the better it is. In other words, one wants a line which minimizes the variation of the observed values of Y around it for all values of X. Some of the values of Y may well fall on the line. Minimizing the variation of points around the line is therefore equivalent to minimizing the residual variation. The farther any one point is from the regression line, the more the line differs from the data. The values of Y which fall on the regression line (i.e. values which fit the line) are called the Fits. The difference between an observed value of Y and a fitted value is the Residual (or Resid for short): Resid = Data − Fit. Each residual may be either negative or positive. Many of the assumptions of linear regression can be stated in terms of the residuals (i.e. observed value of Y − fitted value of Y). For example:

1. The linear relationship between X and Y can be checked by plotting either Y against X, or the residuals against X.

2. The residuals are Normally distributed. This can be checked by drawing a histogram or a Normality plot.

3. The residuals have the same variability; this is checked by plotting the residuals against the fitted values of Y.

The regression line is only part of the description of the relationship between the X, Y data points. Being the line of means, it tells us only the mean value of Y at any particular value of X. We still need to know how the values of Y are distributed about the regression line.
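The Fits and Resids just described can be sketched in a few lines of Python (a hedged illustration; the chapter itself uses MINITAB, and the x, y values below are hypothetical, not taken from any data set in this chapter):

```python
# A hedged sketch of Fits and Resids. The x, y values are hypothetical,
# purely for illustration.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n

# Least-squares slope (beta) and intercept (alpha)
beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
       sum((xi - mean_x) ** 2 for xi in x)
alpha = mean_y - beta * mean_x

fits = [alpha + beta * xi for xi in x]           # values on the line
resids = [yi - fi for yi, fi in zip(y, fits)]    # Resid = Data - Fit
print(fits, resids)  # residuals are positive or negative and sum to ~0
```

Plotting `resids` against `x`, or against `fits`, gives exactly the residual checks listed above.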
For a given value of X the corresponding values of Y will have not only a mean, but also a variance and a standard deviation. As we saw above, it is one of the basic assumptions of linear regression that this standard deviation is constant and does not depend upon X. Variation in a population is quantified by the statistical term variance (which is the square of the standard deviation, i.e. sd²). This concept is utilized in drawing the line. The difference between each observed value of the dependent variable Y not on the line and the value given by the line for the same X (i.e. ŷ) is squared, and these squares are added together. The resulting total is the sum of squared deviations. The line giving the smallest value of the sum of squared deviations between the observed values of Y and the regression line is the best line. This procedure is called the method of least squares, or least squares regression.
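The minimization can be sketched directly: a hedged Python illustration on hypothetical data, showing that the least-squares slope gives a smaller sum of squared deviations than nearby alternative slopes.

```python
# Hypothetical data; we compare the sum of squared deviations at the
# least-squares slope with nearby slopes through (mean_x, mean_y).
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

def sum_sq_dev(slope):
    """Sum of squared deviations for a line through (mean_x, mean_y)."""
    intercept = mean_y - slope * mean_x
    return sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
       sum((xi - mean_x) ** 2 for xi in x)

# The least-squares slope beats nearby alternative slopes
print(sum_sq_dev(beta), sum_sq_dev(beta - 0.2), sum_sq_dev(beta + 0.2))
```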

Fig. 2.2 The Resids

Simple linear regression is the process of fitting a straight line by the method of least squares on a scatter plot to study the relationship between two variables. The intercept and the slope are referred to as the parameters, and the statistical programme finds the values of the parameters that provide the best fit of the regression line. Let us now see how this works in practice. The heights of 1078 father and son pairs were measured and a scatter plot was drawn, as shown in Fig. 2.3.

Fig. 2.3 Scatterplot of sons' height (SonHt) against fathers' height (FatherHt)

Father-height is plotted on the X-axis and son-height on the Y-axis. There is considerable scatter for each X value. We next fit a regression line to the scatter plot as shown below:

Fig. 2.4 Regression line for the scatterplot in Fig. 2.3

These two figures illustrate what the regression line of least squares represents. For each individual X value the line represents the mean of the corresponding Y values. Some of the Y values fall on the line; they represent the Fit (or fitted values). Each Y value away from the regression line has a residual component, since Data − Fit = Residual. The concept of the residual is important and we will return to it often in later chapters. The values of Y that fall on the regression line are the predicted or fitted values of Y, as compared with the observed values. Another way of describing residuals is as the difference between the observed and predicted values of Y. The statistical programme minimizes the sum of the squares of the residuals to draw the regression line and find the parameters α and β. The same applies when one is fitting a curve rather than a straight line.

After the foregoing introduction to the topic of simple linear regression, let us now take an example to see how regression works in practice.

A study of breastmilk intake by two methods (Amer. J. Clin. Nutr. 1983; pages ). An experiment was designed to compare two methods of measuring breastmilk intake by infants. The first method was the deuterium dilution technique; the second method was test weighing. The researchers selected 14 babies, then used both techniques to measure the amount of milk consumed by each baby. The results are shown below:

(Table of results: Baby, Deuterium, Test weighing; the individual values are not preserved in this copy.)

We wish to find out how well these two methods correlate. We first draw a scatter plot to see if there is any relationship between the values of breastmilk intake obtained by the Deuterium method and by the age-old test-weighing method.

Fig. 2.5 Scatter plot of Deuterium against Testweigh (regression plot; R-Sq = 59.3%)

A linear relationship is evident, and we can now proceed with performing the regression analysis.

[In MINITAB: Stat > Regression. In the Response box enter Deuterium; in the Predictors box enter Testweigh.]

The following output is obtained.

(Part I) The regression equation is Deuterium = Testweigh

(Part II)
Predictor Coef StDev T P
Constant
Testweig
S = R-Sq = 59.3% R-Sq(adj) = 56.0%

(Part III) Analysis of Variance
Source DF SS MS F P
Regression
Residual Error
Total

(Part IV) Unusual Observations
Obs Testweig Deuteriu Fit StDev Fit Residual St Resid X R
R denotes an observation with a large standardized residual.
X denotes an observation whose X value gives it large influence.

Interpreting the output (different portions of the output have been numbered to help with following the explanation provided):

1). The regression equation is Deuterium = Testweigh, which is of the form Y = α + βX. The regression equation represents the least squares line drawn through the scatter points. Given the data points, it is the best least squares line that can be drawn to represent them.

2). The next block of output gives the value of α = 67.3 under the heading Constant, and the value of β, together with additional information.

The next line of output gives s, the standard deviation, which is a measure of how much the observed values of Y differ from the values provided by the regression line. In Fig. 2.5 there is a scatter of data points (n = 14) about the regression line. The sum of squares of their vertical distances from the regression line is referred to as the Error Sum of Squares (SSE for short), shown in the Analysis of Variance table in the next block (Part III) of the output. Then

s² = SSE / (n − 2), and s = √(SSE / (n − 2)).

Since s measures the spread of the Y values about the regression line, one would expect 95% of the Y data points to fall within 2s of that line. The s statistic is often called the standard error of the estimate. It plays an important role in answering whether there is evidence of a linear relationship between X and Y. The programme does this by running a check on β. If there were no linear relationship there would be no regression line, in which case β = 0. To test that β ≠ 0 the programme divides β by its standard deviation. (The standard deviation of β is derived from s by the formula sd(β) = s / √Sxx. How Sxx is obtained is described later.) The data in the columns headed StDev and T are calculated in this manner by the programme, which gives us the standard deviations of α and of β. The t test on β is significant at 0.001, which means that β ≠ 0. In other words there is a slope, i.e. there is a linear relationship between the x and y data points.

On the same line, following s, is R-Sq = 59.3%. R-Sq is just the square of the correlation coefficient r. It is the fraction of the variation in Y that is explained by the regression. Next comes R-Sq(adj), the value of R-Sq adjusted for the sample size (n) and the number of parameters (k) estimated. In this example n = 14 and two parameters (α and β) have been estimated:

R-Sq(adj) = R-Sq − {(k − 1) / (n − k)} (1 − R-Sq) = 0.593 − (1/12)(1 − 0.593) ≈ 0.56 = 56%.

3). The next block of information is the Analysis of Variance table.
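Before turning to the Analysis of Variance table, the Part II quantities just described (s, the t ratio on β, R-Sq and R-Sq(adj)) can be sketched from first principles. A hedged Python illustration on a small hypothetical data set (not the breastmilk data, whose individual values are not preserved in this copy):

```python
# A hedged sketch of the Part II output quantities on hypothetical data.
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n, k = len(x), 2                      # k = parameters estimated (alpha, beta)

mean_x, mean_y = sum(x) / n, sum(y) / n
s_xx = sum((xi - mean_x) ** 2 for xi in x)
beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / s_xx
alpha = mean_y - beta * mean_x

sse = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
ss_total = sum((yi - mean_y) ** 2 for yi in y)

s = sqrt(sse / (n - 2))               # standard error of the estimate
sd_beta = s / sqrt(s_xx)              # standard deviation of beta
t_ratio = beta / sd_beta              # tests whether beta differs from 0
r_sq = 1 - sse / ss_total             # fraction of variation explained
r_sq_adj = r_sq - (k - 1) / (n - k) * (1 - r_sq)

print(s, t_ratio, r_sq, r_sq_adj)
```

The t ratio would then be referred to the t distribution with n − 2 degrees of freedom to obtain the P value in the printout.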
In the Analysis of Variance the total variation in the Y values is partitioned into its component parts. Because of the linear relationship between Y and X we expect Y to vary as X varies. This variation is called "explained by the regression", or just "regression". The remaining variation is called "unexplained", "residual error", or just "error", depending on the programme one uses. Ideally the residual variation should be as small as possible; in that case most of the variation in Y is explained by the regression, and the points on the scatter plot lie close to the regression line. The line is then a good fit. In the column headed SS the first two entries add up to the last. And so

SS(Regression) + SS(Residual) = SS(Total), which is equivalent to saying SS(Regression) = SS(Total) − SS(Residual). As explained above, all this means is that SS(Regression) is the amount of variation in the response variable Y that is accounted for by the linear relationship between the mean response and the explanatory variable X.

DF is degrees of freedom. SS is the sum of squares. MS is the mean sum of squares, obtained by dividing SS by DF (MS = SS/DF). The Total SS is a measure of the variation of Y about its mean. The Regression SS is the amount of this variation explained by the regression line (the line of least squares). The fraction of variation explained by the regression line is Regression SS divided by Total SS. It is more convenient to convert this proportion to a percentage, here 59.3% (i.e. the R² value), and to say that the regression equation explains 59.3% of the variation in Y.

Thus the Analysis of Variance table helps us to answer a number of questions, for example: Is there a straight-line relationship between the variables X and Y, and if so, how strong is it? The F statistic tests the null hypothesis H₀: no straight-line relationship between X and Y. A significant result for the F statistic means that a relationship exists as described by the straight-line model. Similarly, a significant value of the t statistic means that the slope β is not zero.

4). The last block gives two unusual observations, denoted by X and R. The explanation is as follows. Each individual value entered in the regression analysis makes a contribution to the result, but sometimes a particular individual value may stand out from the rest. There are two ways in which an individual value can stand out:

(i). The observed response Y is very different from the predicted mean response. These are outliers.

(ii). The values of X are very different from those observed for the other individuals in the sample.
These are remote from the rest of the sample, since they differ substantially from the other values on the X axis. Any value with a standardised residual (obtained by dividing the Residual by the standard deviation of the residuals) greater than 2 is labelled as an outlier. Observation 7 thus becomes an outlier by this definition: in row 7 the value of 1098 for Deuterium is far lower than what could be expected from the corresponding Testweigh value. The method of calculating "remote from the rest" values is more complex and is not discussed here. How convincing is the trend observed between the response variable and the explanatory variable?
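The outlier rule just described can be sketched as follows. A hedged Python illustration on hypothetical data; note that dividing each residual by s is a simplification, since MINITAB's standardised residuals also adjust for leverage, which is not shown here.

```python
# A hedged sketch of flagging outliers by standardised residuals on
# hypothetical data (each residual divided by s; leverage not adjusted for).
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
       sum((xi - mean_x) ** 2 for xi in x)
alpha = mean_y - beta * mean_x

resids = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
s = sqrt(sum(r ** 2 for r in resids) / (n - 2))

st_resids = [r / s for r in resids]
outliers = [i for i, sr in enumerate(st_resids) if abs(sr) > 2]
print(outliers)  # empty here: no standardised residual exceeds 2
```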

β is the slope, also called the regression coefficient. It gives the measure of change in the value of Y for a unit change in the value of X. There is also a third parameter of interest. For every value of X there is a dispersion of the corresponding values of Y around the regression line. This is the quantity Y − (α + βX). As we have seen, Data − Fit = the residual for each value of Y. It tells us how far the estimated mean Y value is from the actual observed value of Y. This difference between a Y value and the estimated mean Y value is called the residual.

The main use of residuals is for checking the regression, and we discuss the procedure a little later. In the case of simple linear regression with one explanatory variable, however, the checking has already been performed visually by means of the scatter plot. In situations where there are several explanatory variables, the residuals are needed to ascertain whether an additional explanatory variable might be worth considering. To do this the residuals are plotted against the corresponding values of the additional variable. If an association is apparent, then the extra variable is a candidate for inclusion. This topic is discussed later, in Chapter 5.

Since the regression line indicates the mean value of Y, there is a standard deviation around this mean. This is called the residual standard deviation. It can be calculated from the residual sum of squares (RSS) as follows: s² = RSS / (n − 2), which gives s = √(RSS / (n − 2)).

Correlation. From the given values of X the mean of X (x̄) can be calculated, and from this the squared deviation (X − x̄)² for each value of X. The sum of all such squared deviations is denoted by the symbol Sxx. Similarly, from the given values of Y the mean of Y (ȳ) can be calculated, and from this the corresponding sum of squared deviations of Y, denoted by the symbol Syy. One more statistic to consider is Sxy, which is Σ(Xᵢ − x̄)(Yᵢ − ȳ). The value of β is obtained by β = Sxy / Sxx.
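The building blocks Sxx, Syy and Sxy, and the quantities derived from them, can be sketched directly. A hedged Python illustration on hypothetical data:

```python
# A hedged sketch of S_xx, S_yy, S_xy, the slope beta = S_xy / S_xx, and
# the correlation coefficient r built from the same three quantities.
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

s_xx = sum((xi - mean_x) ** 2 for xi in x)   # sum of (X - xbar)^2
s_yy = sum((yi - mean_y) ** 2 for yi in y)   # sum of (Y - ybar)^2
s_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))

beta = s_xy / s_xx                           # the slope
r = s_xy / sqrt(s_xx * s_yy)                 # correlation coefficient
print(s_xx, s_yy, s_xy, beta, r)
```

Note that r² here equals the R-Sq fraction reported in the regression printout.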
The quantity Sxy / √(Sxx Syy) is called the correlation coefficient for X and Y, and is denoted by the symbol r. As we have seen, in simple linear regression a single outcome variable (assumed to be Normally distributed) is related to a single explanatory variable by a straight line (the regression line) that represents the mean values of the response variable for each value of the explanatory variable. If the line points upwards the relationship is positive, and negative if the direction is downwards. In the former case, as the value of X increases so does that of Y; in the latter case, as X increases Y decreases. The correlation coefficient is often used as a measure of the strength of the association between two variables. A value of r close to +1 or −1 indicates a strong linear association. A value close to 0 indicates a weak association. A word of caution: if there is a curved trend the value of r could be small

even with a strong relationship. When all the data points fall exactly on the regression line there is no dispersion around the regression line, indicating that the dependent variable can be predicted with certainty from the independent variable. This is the situation where the correlation coefficient is 1. The opposite situation arises when the correlation coefficient is 0. For values in between, the table below is a rough guide:

r = 0.10 to 0.29 or r = −0.10 to −0.29 : small relationship
r = 0.30 to 0.49 or r = −0.30 to −0.49 : medium relationship
r = 0.50 to 1.0 or r = −0.50 to −1.0 : large relationship

The coefficient R² (denoted as R-Sq in the printout) is a measure of the portion of the variability in Y explained by the regression. In other words, it is the proportion of variation in Y that can be attributed to X, and a measure of the strength of the linear relationship between X and Y. The larger the value of R², the stronger the linear relationship. 1 − R² is the residual, or unexplained, variability. R² is also referred to as the coefficient of determination; it measures the fit of the regression line to the data.

The values of the intercept and regression coefficients are calculated by the programme to minimize the residuals in the given data. In all likelihood they will not be identical to the true population values of the intercept and slope. If the fitted regression line is used to predict for the wider situation, the prediction will probably not be as accurate as for the sample. A better estimate of the proportion of variability explained in the population is R²(adj). This is discussed in later chapters.

Analysis of variance. In a regression analysis the variation of the Y values around the regression line is a measure of how well the two variables X and Y interact. The method of partitioning the variation is known as the analysis of variance. It is expressed by means of the analysis of variance table.
The total sum of squares, Syy, is so called because it represents the total variation of the Y values around their mean. It is made up of two parts: the residual sum of squares, and

that explained by the regression line. (Some authors call the latter the reduction due to the regression.) In the computer printout this partitioning is depicted in Part III: the total variability in the values of Y (the Total Sum of Squares) has been partitioned into that explained by the regression and the unexplained, or Residual, variation. The column headed DF contains the degrees of freedom for the different sources of variability. The column headed MS contains the mean squares, which are the sums of squares divided by their degrees of freedom. The degrees of freedom for a variance is the number of observations minus the number of parameters that have been estimated; in a given sample with n observations and an estimate of the mean, the degrees of freedom is n − 1. In regression analysis the total variability among the Y values is the total sum of squares Syy; the denominator of this variance is n − 1 (the degrees of freedom). Both of these get partitioned into components associated with the regression and the residuals, so that

SS(Total) = SS(Regression) + SS(Residual), as we have seen before, and
df(Total) = df(Regression) + df(Residual).

Mean squares (MS) are obtained by dividing each sum of squares by the respective degrees of freedom. The F ratio is MS(Regression) / MS(Residual). A ratio of mean squares follows the F distribution, and the value of F is reported by the programme. The resulting quantity is then looked up in the table of the F distribution (similar to the table of t), and the F test statistic thereby gives the value of P.
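The partition and the F ratio can be sketched end to end. A hedged Python illustration on hypothetical data (obtaining the P value itself would require the F distribution, which is not computed here):

```python
# A hedged sketch of the analysis-of-variance partition on hypothetical data:
# SS(Total) = SS(Regression) + SS(Residual), df(Total) = df(Regression) +
# df(Residual), and F = MS(Regression) / MS(Residual).
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
       sum((xi - mean_x) ** 2 for xi in x)
alpha = mean_y - beta * mean_x

ss_total = sum((yi - mean_y) ** 2 for yi in y)
ss_resid = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
ss_reg = ss_total - ss_resid

df_reg, df_resid = 1, n - 2            # df(Total) = (n - 1) = 1 + (n - 2)
ms_reg, ms_resid = ss_reg / df_reg, ss_resid / df_resid
f_ratio = ms_reg / ms_resid            # referred to the F table to obtain P
print(ss_reg, ss_resid, ss_total, f_ratio)
```

With a single explanatory variable the F ratio is simply the square of the t ratio on β, so the two tests agree.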


### Regression and Correlation

Regression and Correlation Topics Covered: Dependent and independent variables. Scatter diagram. Correlation coefficient. Linear Regression line. by Dr.I.Namestnikova 1 Introduction Regression analysis

### How To Write A Data Analysis

Mathematics Probability and Statistics Curriculum Guide Revised 2010 This page is intentionally left blank. Introduction The Mathematics Curriculum Guide serves as a guide for teachers when planning instruction

### Part 2: Analysis of Relationship Between Two Variables

Part 2: Analysis of Relationship Between Two Variables Linear Regression Linear correlation Significance Tests Multiple regression Linear Regression Y = a X + b Dependent Variable Independent Variable

### Algebra I Vocabulary Cards

Algebra I Vocabulary Cards Table of Contents Expressions and Operations Natural Numbers Whole Numbers Integers Rational Numbers Irrational Numbers Real Numbers Absolute Value Order of Operations Expression

### An analysis appropriate for a quantitative outcome and a single quantitative explanatory. 9.1 The model behind linear regression

Chapter 9 Simple Linear Regression An analysis appropriate for a quantitative outcome and a single quantitative explanatory variable. 9.1 The model behind linear regression When we are examining the relationship

### Introduction to Regression and Data Analysis

Statlab Workshop Introduction to Regression and Data Analysis with Dan Campbell and Sherlock Campbell October 28, 2008 I. The basics A. Types of variables Your variables may take several forms, and it

### Correlation and Regression

Correlation and Regression Scatterplots Correlation Explanatory and response variables Simple linear regression General Principles of Data Analysis First plot the data, then add numerical summaries Look

### Outline. Topic 4 - Analysis of Variance Approach to Regression. Partitioning Sums of Squares. Total Sum of Squares. Partitioning sums of squares

Topic 4 - Analysis of Variance Approach to Regression Outline Partitioning sums of squares Degrees of freedom Expected mean squares General linear test - Fall 2013 R 2 and the coefficient of correlation

### Lecture 11: Chapter 5, Section 3 Relationships between Two Quantitative Variables; Correlation

Lecture 11: Chapter 5, Section 3 Relationships between Two Quantitative Variables; Correlation Display and Summarize Correlation for Direction and Strength Properties of Correlation Regression Line Cengage

### Basic Statistics and Data Analysis for Health Researchers from Foreign Countries

Basic Statistics and Data Analysis for Health Researchers from Foreign Countries Volkert Siersma siersma@sund.ku.dk The Research Unit for General Practice in Copenhagen Dias 1 Content Quantifying association

### Module 3: Correlation and Covariance

Using Statistical Data to Make Decisions Module 3: Correlation and Covariance Tom Ilvento Dr. Mugdim Pašiƒ University of Delaware Sarajevo Graduate School of Business O ften our interest in data analysis

### How To Run Statistical Tests in Excel

How To Run Statistical Tests in Excel Microsoft Excel is your best tool for storing and manipulating data, calculating basic descriptive statistics such as means and standard deviations, and conducting

### Geostatistics Exploratory Analysis

Instituto Superior de Estatística e Gestão de Informação Universidade Nova de Lisboa Master of Science in Geospatial Technologies Geostatistics Exploratory Analysis Carlos Alberto Felgueiras cfelgueiras@isegi.unl.pt

### Summarizing and Displaying Categorical Data

Summarizing and Displaying Categorical Data Categorical data can be summarized in a frequency distribution which counts the number of cases, or frequency, that fall into each category, or a relative frequency

### CALCULATIONS & STATISTICS

CALCULATIONS & STATISTICS CALCULATION OF SCORES Conversion of 1-5 scale to 0-100 scores When you look at your report, you will notice that the scores are reported on a 0-100 scale, even though respondents

### Relationships Between Two Variables: Scatterplots and Correlation

Relationships Between Two Variables: Scatterplots and Correlation Example: Consider the population of cars manufactured in the U.S. What is the relationship (1) between engine size and horsepower? (2)

### Predictor Coef StDev T P Constant 970667056 616256122 1.58 0.154 X 0.00293 0.06163 0.05 0.963. S = 0.5597 R-Sq = 0.0% R-Sq(adj) = 0.

Statistical analysis using Microsoft Excel Microsoft Excel spreadsheets have become somewhat of a standard for data storage, at least for smaller data sets. This, along with the program often being packaged

### Normality Testing in Excel

Normality Testing in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. mark@excelmasterseries.com

### International Statistical Institute, 56th Session, 2007: Phil Everson

Teaching Regression using American Football Scores Everson, Phil Swarthmore College Department of Mathematics and Statistics 5 College Avenue Swarthmore, PA198, USA E-mail: peverso1@swarthmore.edu 1. Introduction

### Analysing Questionnaires using Minitab (for SPSS queries contact -) Graham.Currell@uwe.ac.uk

Analysing Questionnaires using Minitab (for SPSS queries contact -) Graham.Currell@uwe.ac.uk Structure As a starting point it is useful to consider a basic questionnaire as containing three main sections:

### Correlation. What Is Correlation? Perfect Correlation. Perfect Correlation. Greg C Elvers

Correlation Greg C Elvers What Is Correlation? Correlation is a descriptive statistic that tells you if two variables are related to each other E.g. Is your related to how much you study? When two variables

### Testing for Lack of Fit

Chapter 6 Testing for Lack of Fit How can we tell if a model fits the data? If the model is correct then ˆσ 2 should be an unbiased estimate of σ 2. If we have a model which is not complex enough to fit

### Violent crime total. Problem Set 1

Problem Set 1 Note: this problem set is primarily intended to get you used to manipulating and presenting data using a spreadsheet program. While subsequent problem sets will be useful indicators of the

### Vertical Alignment Colorado Academic Standards 6 th - 7 th - 8 th

Vertical Alignment Colorado Academic Standards 6 th - 7 th - 8 th Standard 3: Data Analysis, Statistics, and Probability 6 th Prepared Graduates: 1. Solve problems and make decisions that depend on un

### DESCRIPTIVE STATISTICS. The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses.

DESCRIPTIVE STATISTICS The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses. DESCRIPTIVE VS. INFERENTIAL STATISTICS Descriptive To organize,

### Recall this chart that showed how most of our course would be organized:

Chapter 4 One-Way ANOVA Recall this chart that showed how most of our course would be organized: Explanatory Variable(s) Response Variable Methods Categorical Categorical Contingency Tables Categorical

### One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups

One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups In analysis of variance, the main research question is whether the sample means are from different populations. The

: Table of Contents... 1 Overview of Model... 1 Dispersion... 2 Parameterization... 3 Sigma-Restricted Model... 3 Overparameterized Model... 4 Reference Coding... 4 Model Summary (Summary Tab)... 5 Summary

### Statistical Models in R

Statistical Models in R Some Examples Steven Buechler Department of Mathematics 276B Hurley Hall; 1-6233 Fall, 2007 Outline Statistical Models Linear Models in R Regression Regression analysis is the appropriate

### Correlation key concepts:

CORRELATION Correlation key concepts: Types of correlation Methods of studying correlation a) Scatter diagram b) Karl pearson s coefficient of correlation c) Spearman s Rank correlation coefficient d)

### Econometrics Simple Linear Regression

Econometrics Simple Linear Regression Burcu Eke UC3M Linear equations with one variable Recall what a linear equation is: y = b 0 + b 1 x is a linear equation with one variable, or equivalently, a straight

### II. DISTRIBUTIONS distribution normal distribution. standard scores

Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,

### Causal Forecasting Models

CTL.SC1x -Supply Chain & Logistics Fundamentals Causal Forecasting Models MIT Center for Transportation & Logistics Causal Models Used when demand is correlated with some known and measurable environmental

### STT 200 LECTURE 1, SECTION 2,4 RECITATION 7 (10/16/2012)

STT 200 LECTURE 1, SECTION 2,4 RECITATION 7 (10/16/2012) TA: Zhen (Alan) Zhang zhangz19@stt.msu.edu Office hour: (C500 WH) 1:45 2:45PM Tuesday (office tel.: 432-3342) Help-room: (A102 WH) 11:20AM-12:30PM,

### Linear Models in STATA and ANOVA

Session 4 Linear Models in STATA and ANOVA Page Strengths of Linear Relationships 4-2 A Note on Non-Linear Relationships 4-4 Multiple Linear Regression 4-5 Removal of Variables 4-8 Independent Samples

### 2. What is the general linear model to be used to model linear trend? (Write out the model) = + + + or

Simple and Multiple Regression Analysis Example: Explore the relationships among Month, Adv.\$ and Sales \$: 1. Prepare a scatter plot of these data. The scatter plots for Adv.\$ versus Sales, and Month versus

### Module 5: Multiple Regression Analysis

Using Statistical Data Using to Make Statistical Decisions: Data Multiple to Make Regression Decisions Analysis Page 1 Module 5: Multiple Regression Analysis Tom Ilvento, University of Delaware, College

### How To Check For Differences In The One Way Anova

MINITAB ASSISTANT WHITE PAPER This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab 17 Statistical Software. One-Way

### Common Core Unit Summary Grades 6 to 8

Common Core Unit Summary Grades 6 to 8 Grade 8: Unit 1: Congruence and Similarity- 8G1-8G5 rotations reflections and translations,( RRT=congruence) understand congruence of 2 d figures after RRT Dilations

### Chapter 9 Descriptive Statistics for Bivariate Data

9.1 Introduction 215 Chapter 9 Descriptive Statistics for Bivariate Data 9.1 Introduction We discussed univariate data description (methods used to eplore the distribution of the values of a single variable)

### Correlation and Simple Linear Regression

Correlation and Simple Linear Regression We are often interested in studying the relationship among variables to determine whether they are associated with one another. When we think that changes in a

### A Determination of g, the Acceleration Due to Gravity, from Newton's Laws of Motion

A Determination of g, the Acceleration Due to Gravity, from Newton's Laws of Motion Objective In the experiment you will determine the cart acceleration, a, and the friction force, f, experimentally for

Using Your TI-83/84 Calculator: Linear Correlation and Regression Elementary Statistics Dr. Laura Schultz This handout describes how to use your calculator for various linear correlation and regression

### 1. How different is the t distribution from the normal?

Statistics 101 106 Lecture 7 (20 October 98) c David Pollard Page 1 Read M&M 7.1 and 7.2, ignoring starred parts. Reread M&M 3.2. The effects of estimated variances on normal approximations. t-distributions.

### AP Physics 1 and 2 Lab Investigations

AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks

### Introduction to Linear Regression

14. Regression A. Introduction to Simple Linear Regression B. Partitioning Sums of Squares C. Standard Error of the Estimate D. Inferential Statistics for b and r E. Influential Observations F. Regression

### Example G Cost of construction of nuclear power plants

1 Example G Cost of construction of nuclear power plants Description of data Table G.1 gives data, reproduced by permission of the Rand Corporation, from a report (Mooz, 1978) on 32 light water reactor

### Introduction to Linear Regression

14. Regression A. Introduction to Simple Linear Regression B. Partitioning Sums of Squares C. Standard Error of the Estimate D. Inferential Statistics for b and r E. Influential Observations F. Regression

### Statistical Models in R

Statistical Models in R Some Examples Steven Buechler Department of Mathematics 276B Hurley Hall; 1-6233 Fall, 2007 Outline Statistical Models Structure of models in R Model Assessment (Part IA) Anova

### DISCRIMINANT FUNCTION ANALYSIS (DA)

DISCRIMINANT FUNCTION ANALYSIS (DA) John Poulsen and Aaron French Key words: assumptions, further reading, computations, standardized coefficents, structure matrix, tests of signficance Introduction Discriminant

### The Volatility Index Stefan Iacono University System of Maryland Foundation

1 The Volatility Index Stefan Iacono University System of Maryland Foundation 28 May, 2014 Mr. Joe Rinaldi 2 The Volatility Index Introduction The CBOE s VIX, often called the market fear gauge, measures

### Regression step-by-step using Microsoft Excel

Step 1: Regression step-by-step using Microsoft Excel Notes prepared by Pamela Peterson Drake, James Madison University Type the data into the spreadsheet The example used throughout this How to is a regression

### Mario Guarracino. Regression

Regression Introduction In the last lesson, we saw how to aggregate data from different sources, identify measures and dimensions, to build data marts for business analysis. Some techniques were introduced

### Data Analysis Tools. Tools for Summarizing Data

Data Analysis Tools This section of the notes is meant to introduce you to many of the tools that are provided by Excel under the Tools/Data Analysis menu item. If your computer does not have that tool