9 Regression

9.1 Simple Linear Regression

9.1.1 The Least Squares Method

Example. Consider the following small data set.

somedata <- data.frame( x = 1:5, y = c(1, 3, 2, 4, 4) )
somedata

  x y
1 1 1
2 2 3
3 3 2
4 4 4
5 5 4

[Figure: scatterplot of y versus x for these five points]

1. Add a line to the plot that fits the data well. Don't do any calculations, just add the line.
2. Estimate the slope and intercept of your line by reading them off of the graph.
3. Now estimate the residuals for each point relative to your line:
   residual = observed response - predicted response
4. Compute the sum of the squared residuals, SSE. Square each residual and add them up.
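If you would rather experiment on the computer than on paper, one way to overlay a candidate line on the scatterplot is sketched below. This assumes the mosaic package (used throughout this chapter) is loaded; ladd() adds a layer to the most recent lattice plot, and the intercept and slope shown are just placeholders for whatever line you choose.

xyplot(y ~ x, data = somedata)
ladd(panel.abline(a = 1, b = 0.6))   # a = intercept, b = slope; replace with your own guess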

For example, suppose we select a line that passes through (0, 1) and (5, 4). The equation for this line is $y = 1 + 0.6x$, and it looks like a pretty good fit:

my.y <- makeFun( 1 + 0.6 * x ~ x )
xyplot( y ~ x, data = somedata, xlim = c(0, 6), ylim = c(0, 5) ) +
  plotFun( my.y(x) ~ x, col = "gray50" )

[Figure: scatterplot of y versus x with the line y = 1 + 0.6x overlaid]

The residuals for this function are

resids <- with(somedata, y - my.y(x)); resids

[1] -0.6  0.8 -0.8  0.6  0.0

and SSE is

sum(resids^2)

[1] 2

If your line is a good fit, then SSE will be small. The least squares regression line is the line that has the smallest possible SSE.¹ The lm() function will find this best fitting line for us.

model1 <- lm( y ~ x, data = somedata ); model1

lm(formula = y ~ x, data = somedata)

(Intercept)            x
        0.7          0.7

This says that the equation of the best fit line is

$\hat y = 0.7 + 0.7x$

¹ Using calculus, it is easy to derive formulas for the slope and intercept of this line. But we will use software to do these computations. All statistical packages can perform these calculations for you.
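Different candidate lines give different SSE values. Below is a minimal sketch of a helper for comparing them; the function name sse() is just an illustrative choice, not something used elsewhere in this chapter.

sse <- function(intercept, slope) {
  with(somedata, sum((y - (intercept + slope * x))^2))
}
sse(1, 0.6)     # our eyeballed line: SSE = 2
sse(0.7, 0.7)   # the line lm() found above: SSE = 1.9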

xyplot( y ~ x, data = somedata, type = c('p', 'r') ) +
  plotFun( my.y(x) ~ x, col = "gray50" )   # let's add our previous attempt, too

[Figure: scatterplot of y versus x showing the least squares line and, in gray, our earlier attempt]

We can compute SSE using the resid() function.

SSE <- sum( resid(model1)^2 ); SSE

[1] 1.9

As we see, this is a better fit than our first attempt, at least according to the least squares criterion. It will be better than any other attempt; it is the least squares regression line.

9.1.2 Properties of the Least Squares Regression Line

For a line with equation $y = \hat\beta_0 + \hat\beta_1 x$, the residuals are

$e_i = y_i - (\hat\beta_0 + \hat\beta_1 x_i)$

and the sum of the squares of the residuals is

$SSE = \sum e_i^2 = \sum (y_i - (\hat\beta_0 + \hat\beta_1 x_i))^2$

Simple calculus (which we won't do here) allows us to compute the best $\hat\beta_0$ and $\hat\beta_1$ possible. These best values define the least squares regression line. We always compute these values using software, but it is good to note that the least squares line satisfies two very nice properties.

1. The point $(\bar x, \bar y)$ is on the line. This means that $\bar y = \hat\beta_0 + \hat\beta_1 \bar x$ (and $\hat\beta_0 = \bar y - \hat\beta_1 \bar x$).
2. The slope of the line is $b = r \frac{s_y}{s_x}$, where $r$ is the correlation coefficient:
   $r = \frac{1}{n-1} \sum \frac{x_i - \bar x}{s_x} \cdot \frac{y_i - \bar y}{s_y}$

Since we have a point and the slope, it is easy to compute the equation for the line if we know $\bar x$, $s_x$, $\bar y$, $s_y$, and $r$.
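Both properties are easy to check numerically for the small data set above; this sketch uses only base R functions.

x.bar <- mean(somedata$x); y.bar <- mean(somedata$y)
b1 <- cor(somedata$x, somedata$y) * sd(somedata$y) / sd(somedata$x)   # slope = r * s_y / s_x
b0 <- y.bar - b1 * x.bar                                              # intercept from the point (x-bar, y-bar)
c(b0, b1)   # should match coef(model1): 0.7 and 0.7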

9.1.3 Explanatory and Response Variables Matter

It is important that the explanatory variable be the x variable and the response variable be the y variable when doing regression. If we reverse the roles of y and x, we do not get the same model. This is because the residuals are measured vertically (in the y direction).

9.1.4 Example: Florida Lakes

Does the amount of mercury found in fish depend on the ph level of the lake? Fish were captured and ph measured in a number of Florida lakes. We can use this data to explore this question.

xyplot(AvgMercury ~ ph, data = FloridaLakes, type = c("p", "r"))
lm(AvgMercury ~ ph, data = FloridaLakes)

lm(formula = AvgMercury ~ ph, data = FloridaLakes)

(Intercept)           ph
      1.531       -0.152

[Figure: scatterplot of AvgMercury versus ph with the least squares line]

You can get terser output with

coef(lm(AvgMercury ~ ph, data = FloridaLakes))   # just show me the coefficients

(Intercept)           ph
      1.531       -0.152

From these coefficients, we see that our regression equation is

$\widehat{\mathrm{AvgMercury}} = 1.531 + (-0.152) \cdot \mathrm{ph}$

So, for example, this suggests that the average average mercury level (yes, that's two averages²) for a lake with a ph of 6 is approximately

$\widehat{\mathrm{AvgMercury}} = 1.531 + (-0.152) \cdot 6.0 = 0.617$

² For each lake, the average mercury level is calculated. Different lakes will have different average mercury levels. Our regression line is estimating the average of these averages for lakes with a certain ph.
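The same arithmetic can be done directly from the stored coefficients; the object name lakes.coefs below is just a convenient label.

lakes.coefs <- coef(lm(AvgMercury ~ ph, data = FloridaLakes))
lakes.coefs[1] + lakes.coefs[2] * 6   # approximately 0.617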

Using makeFun(), we can automate computing the estimated response:

Mercury.model <- lm(AvgMercury ~ ph, data = FloridaLakes)
estimated.AvgMercury <- makeFun(Mercury.model)
estimated.AvgMercury(6)

    1
0.617

9.1.5 Example: Inkjet Printers

Here's another example in which we want to predict the price of an inkjet printer from the number of pages it prints per minute (ppm).

xyplot(Price ~ PPM, data = InkjetPrinters, type = c("p", "r"))
lm(Price ~ PPM, data = InkjetPrinters)

lm(formula = Price ~ PPM, data = InkjetPrinters)

(Intercept)          PPM
      -94.2         90.9

[Figure: scatterplot of Price versus PPM with the least squares line]

You can get terser output with

coef(lm(Price ~ PPM, data = InkjetPrinters))

(Intercept)          PPM
      -94.2         90.9

So our regression equation is

$\widehat{\mathrm{Price}} = -94.222 + 90.878 \cdot \mathrm{PPM}$

For example, this suggests that the average price for inkjet printers that print 3 pages per minute is

$\widehat{\mathrm{Price}} = -94.222 + 90.878 \cdot 3.0 = 178.412$
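As with the lakes example, makeFun() can turn this model into a function that computes estimated prices; the object names printer.model and estimated.Price are just illustrative.

printer.model <- lm(Price ~ PPM, data = InkjetPrinters)
estimated.Price <- makeFun(printer.model)
estimated.Price(PPM = 3)   # approximately 178, matching the calculation above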

9.2 Parameter Estimates

9.2.1 Interpreting the Coefficients

The coefficients of the linear model tell us how to construct the linear function that we use to estimate response values, but they can be interesting in their own right as well.

The intercept $\beta_0$ is the mean response value when the explanatory variable is 0. This may or may not be interesting. Often $\beta_0$ is not interesting because we are not interested in the value of the response variable when the predictor is 0. (That might not even be a possible value for the predictor.) Furthermore, if we do not collect data with values of the explanatory variable near 0, then we will be extrapolating from our data when we talk about the intercept.

The estimate for $\beta_1$, on the other hand, is nearly always of interest. The slope coefficient $\beta_1$ tells us how quickly the response variable changes per unit change in the predictor. This is an interesting value in many more situations. Furthermore, when $\beta_1 = 0$, our model says that the average response does not depend on the predictor at all. So when 0 is contained in the confidence interval for $\beta_1$, or we cannot reject $H_0: \beta_1 = 0$, then we do not have sufficient evidence to be convinced that our predictor is of any use in predicting the response.

Since $\hat\beta_1 = r \frac{s_y}{s_x}$, testing whether $\beta_1 = 0$ is equivalent to testing whether the correlation coefficient $\rho = 0$.

9.2.2 Estimating σ

There is one more parameter in our model that we have been mostly ignoring so far: $\sigma$ (or equivalently $\sigma^2$). This is the parameter that describes how tightly things should cluster around the regression line. We can estimate $\sigma^2$ from our residuals:

$\hat\sigma^2 = MSE = \frac{\sum_i e_i^2}{n-2}$

$\hat\sigma = RMSE = \sqrt{MSE} = \sqrt{\frac{\sum_i e_i^2}{n-2}}$

The acronyms MSE and RMSE stand for Mean Squared Error and Root Mean Squared Error. The numerator in these expressions is the sum of the squares of the residuals, $SSE = \sum_i e_i^2$. This is precisely the quantity that we were minimizing to get our least squares fit. So

$MSE = \frac{SSE}{DFE}$

where $DFE = n - 2$ is the degrees of freedom associated with the estimation of $\sigma^2$ in a simple linear model. We lose two degrees of freedom when we estimate $\beta_0$ and $\beta_1$, just like we lost 1 degree of freedom when we had to estimate $\mu$ in order to compute a sample variance.

$RMSE = \sqrt{MSE}$ is listed in the summary output for the linear model as the residual standard error because it is the estimated standard deviation of the error terms in the model.
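These quantities can also be computed directly from the residuals of the Florida Lakes model fit earlier; the result should agree with the residual standard error reported by summary() below.

SSE <- sum(resid(Mercury.model)^2)
MSE <- SSE / df.residual(Mercury.model)   # df.residual() returns n - 2 here
sqrt(MSE)                                 # the RMSE, about 0.28 for this model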

summary(Mercury.model)

lm(formula = AvgMercury ~ ph, data = FloridaLakes)

Residuals:
    Min      1Q  Median      3Q     Max
-0.4890 -0.1919 -0.0577  0.0946  0.7113

            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.5309     0.2035    7.52  8.1e-10
ph           -0.1523     0.0303   -5.02  6.6e-06

Residual standard error: 0.282 on 51 degrees of freedom
Multiple R-squared: 0.331, Adjusted R-squared: 0.318
F-statistic: 25.2 on 1 and 51 DF,  p-value: 6.57e-06

We will learn about other parts of this summary output shortly. Much is known about the estimator $\hat\sigma^2$: it is unbiased (on average it is $\sigma^2$), and its sampling distribution is related to a Chi-Squared distribution with $n - 2$ degrees of freedom.

9.2.3 ANOVA for regression and the Correlation Coefficient

There is another connection between the correlation coefficient and the least squares regression line. We can think about regression as a way to analyze the variability in the response.

anova(lm(AvgMercury ~ ph, data = FloridaLakes))

Analysis of Variance Table

Response: AvgMercury
          Df Sum Sq Mean Sq F value  Pr(>F)
ph         1   2.00   2.002    25.2 6.6e-06
Residuals 51   4.05   0.079

This is a lot like the ANOVA tables we have seen before. This time:

$SST = \sum (y - \bar y)^2$
$SSE = \sum (y - \hat y)^2$
$SSM = \sum (\hat y - \bar y)^2$
$SST = SSM + SSE$

As before, when SSM is large and SSE is small, then the model ($\hat y = \hat\beta_0 + \hat\beta_1 x$) explains a lot of the variability and little is left unexplained (SSE). On the other hand, if SSM is small and SSE is large, then the model explains only a little of the variability and most of it is due to things not explained by the model.
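The sums of squares in this table can also be computed directly from the observed values, the fitted values, and their mean; a short sketch, reusing the Mercury.model object from earlier:

y.obs <- FloridaLakes$AvgMercury
y.hat <- fitted(Mercury.model)
SSM <- sum((y.hat - mean(y.obs))^2)   # about 2.00
SSE <- sum((y.obs - y.hat)^2)         # about 4.05
c(SSM, SSE, SSM + SSE, sum((y.obs - mean(y.obs))^2))   # the last two should agree: SST = SSM + SSE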

The percentage of explained variability is denoted $r^2$ or $R^2$:

$R^2 = \frac{SSM}{SST} = \frac{SSM}{SSM + SSE}$

For the Florida Lakes study, we see that

$SSM = 2.00$
$SSE = 4.05$
$SST = 2.00 + 4.05 = 6.05$
$R^2 = \frac{SSM}{SST} = \frac{2}{6.05} = 0.331$

This number is listed as Multiple R-squared on the summary output. So ph explains roughly 1/3 of the variability in mercury levels. The other two thirds of the variability in mercury levels is due to other things. (We can think of many things that might matter: size of the lake, depth of the lake, types of fish in the lake, types of plants in the lake, proximity to industrialization (highways, streets, manufacturing plants), etc.) More complex studies might investigate the effects of several such factors simultaneously.

The correlation coefficient

The square root of $R^2$ (with a sign to indicate whether the association between explanatory and response variables is positive or negative) is the correlation coefficient, R (or r). As a reminder, here are some important facts about R:

1. R is always between -1 and 1.
2. R is 1 or -1 only if all the dots fall exactly on a line.
3. If the relationship between the explanatory and response variables is not roughly linear, then R is not a very useful number. (And simple linear regression is not very useful either.)
4. For linear relationships, R is a measure of the strength of the relationship. If R is close to 1 or -1, the linear association is strong. If it is closer to 0, the linear association is weak (with lots of scatter about the best fit line).
5. R is unitless: if we change the units of our measurements (from English to metric, for example), it will not affect the value of R.

9.3 Confidence Intervals and Hypothesis Tests

9.3.1 Bootstrap

So how good are these estimates? We would like to have interval estimates rather than just point estimates. One way to get interval estimates for the coefficients is to use the bootstrap.

Florida Lakes

boot.lakes <- do(1000) * lm(AvgMercury ~ ph, data = resample(FloridaLakes))
head(boot.lakes, 2)

  Intercept     ph sigma r.squared
1      1.59 -0.162 0.258     0.351
2      1.40 -0.140 0.294     0.269

dotPlot(~ ph, data = boot.lakes, width = 0.003)
dotPlot(~ Intercept, data = boot.lakes, width = 0.02)
histogram(~ ph, data = boot.lakes, width = 0.01)
histogram(~ Intercept, data = boot.lakes, width = 0.1)

[Figure: dot plots and histograms of the bootstrap distributions of the ph slope and the Intercept]

cdata(0.95, ph, boot.lakes)

   low     hi central.p
-0.205 -0.103     0.950

cdata(0.95, Intercept, boot.lakes)

 low   hi central.p
1.20 1.90      0.95

Inkjet Printers

boot.printers <- do(1000) * lm(Price ~ PPM, data = resample(InkjetPrinters))
head(boot.printers, 2)

  Intercept   PPM sigma r.squared
1     -71.6  74.1  48.4     0.428
2    -171.6 113.6  56.0     0.695

histogram(~ PPM, data = boot.printers)
histogram(~ Intercept, data = boot.printers)

cdata(0.95, PPM, boot.printers)

  low     hi central.p
49.63 131.25      0.95

cdata(0.95, Intercept, boot.printers)

    low    hi central.p
-213.56 13.18      0.95

[Figure: histograms of the bootstrap distributions of the PPM slope and the Intercept]

9.3.2 Using Standard Errors

We can also compute confidence intervals using

estimate $\pm\; t_* \cdot SE$

For $t_*$ we use $n - 2$ degrees of freedom. (The other two degrees of freedom go for estimating the intercept and the slope.) This (and much of the regression analysis) is based on the assumptions that

1. The mean values of y (in the population) for each value of x lie along a line.
2. Individual values of y (in the population) for each value of x are normally distributed.
3. The standard deviations of these normal distributions are the same no matter what x is.

As before, we have two ways we can estimate the standard errors.

1. Compute the standard deviation of the appropriate bootstrap distribution. This should work well provided our bootstrap distribution is something resembling a normal distribution.

2. Use formulas to compute the standard errors from summary statistics. The formulas for SE are a bit more complicated in this case, but R will compute standard error estimates for us, so we don't need to know the formulas.

Florida Lakes

The $t_*$ value is based on DFE, the degrees of freedom for the errors (residuals). For simple linear regression, the error degrees of freedom is $n - 2 = 51$. For a 95% confidence interval, we first compute $t_*$:

t.star <- qt(0.975, df = 51)
t.star

[1] 2.01

Using the bootstrap distribution. To get the standard errors from our bootstrap distribution, we can use sd().

sd(~ Intercept, data = boot.lakes)

[1] 0.184

sd(~ ph, data = boot.lakes)

[1] 0.0257

The confint() function can be applied to bootstrap distributions to make this even simpler. We even have a choice between (a) using the standard error as estimated by taking the standard deviation of the bootstrap distribution, or (b) using the percentile method:

confint(boot.lakes)   # 95% CIs for each parameter

       name  lower  upper level method estimate margin.of.error
1 Intercept  1.171  1.894  0.95 stderr    1.533          0.3614
2        ph -0.203 -0.102  0.95 stderr   -0.152          0.0505
3     sigma  0.222  0.330  0.95 stderr    0.276          0.0543
4 r.squared  0.153  0.518  0.95 stderr    0.336          0.1822

confint(boot.lakes, method = "perc")   # 95% CIs for each parameter; percentile method

       name  lower  upper level   method
1 Intercept  1.199  1.903  0.95 quantile
2        ph -0.205 -0.103  0.95 quantile
3     sigma  0.222  0.327  0.95 quantile
4 r.squared  0.165  0.521  0.95 quantile

confint(boot.lakes, "ph", level = 0.98, method = c("stderr", "perc"))   # 98% CI just for ph, both methods

  name  lower   upper level   method estimate margin.of.error
1   ph -0.212 -0.0924  0.98   stderr   -0.152            0.06
2   ph -0.221 -0.0980  0.98 quantile       NA              NA

Using formulas for standard error. The summary output for a linear model includes the formula-based standard error estimates for each parameter. (Because resample() draws a new random resample each time it is called, the numbers in the calculations below come from a different resample than the one summarized here and will vary from run to run.)

summary(lm(AvgMercury ~ ph, data = resample(FloridaLakes)))

lm(formula = AvgMercury ~ ph, data = resample(FloridaLakes))

Residuals:
    Min      1Q  Median      3Q     Max
-0.4627 -0.2074 -0.0946  0.1135  0.6780

            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.3700     0.2055    6.67  1.8e-08
ph           -0.1264     0.0309   -4.10  0.00015

Residual standard error: 0.298 on 51 degrees of freedom
Multiple R-squared: 0.248, Adjusted R-squared: 0.233
F-statistic: 16.8 on 1 and 51 DF,  p-value: 0.00015

So we get the following confidence intervals for the intercept

$1.63 \pm t_* \cdot SE$
$1.63 \pm 2.008 \cdot 0.2118$
$1.63 \pm 0.425$

and the slope

$-0.153 \pm t_* \cdot SE$
$-0.153 \pm 2.008 \cdot 0.0319$
$-0.153 \pm 0.064$

The confint() function can also be used to simplify these calculations.

confint(lm(AvgMercury ~ ph, data = resample(FloridaLakes)))   # 95% CI

             2.5 %  97.5 %
(Intercept)  1.034  1.8394
ph          -0.199 -0.0781

confint(lm(AvgMercury ~ ph, data = resample(FloridaLakes)), level = 0.99)   # 99% CI

             0.5 % 99.5 %
(Intercept)  0.683  1.933
ph          -0.216 -0.035
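The same interval can also be computed programmatically from the coefficient table of the model fit to the full data set (so these numbers track the summary output in Section 9.2.2 rather than the random resamples above); a sketch:

coefs <- coef(summary(Mercury.model))   # matrix of estimates and standard errors
coefs["ph", "Estimate"] + c(-1, 1) * t.star * coefs["ph", "Std. Error"]
confint(Mercury.model, "ph")            # should give essentially the same interval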

Inkjet Printers

summary(lm(Price ~ PPM, data = resample(InkjetPrinters)))

lm(formula = Price ~ PPM, data = resample(InkjetPrinters))

Residuals:
   Min     1Q Median     3Q    Max
-61.43 -41.43   1.99  29.15  94.44

            Estimate Std. Error t value Pr(>|t|)
(Intercept)   -214.1       48.8   -4.39  0.00035
PPM            131.3       15.5    8.45  1.1e-07

Residual standard error: 50.7 on 18 degrees of freedom
Multiple R-squared: 0.799, Adjusted R-squared: 0.788
F-statistic: 71.4 on 1 and 18 DF,  p-value: 1.11e-07

confint(lm(Price ~ PPM, data = resample(InkjetPrinters)), "PPM")

    2.5 % 97.5 %
PPM  71.4    140

confint(boot.printers, "PPM")

  name lower upper level method estimate margin.of.error
1  PPM  51.1   131  0.95 stderr       91              40

9.3.3 Hypothesis Tests

The summary of linear models includes the results of some hypothesis tests:

summary(lm(AvgMercury ~ ph, data = FloridaLakes))

lm(formula = AvgMercury ~ ph, data = FloridaLakes)

Residuals:
    Min      1Q  Median      3Q     Max
-0.4890 -0.1919 -0.0577  0.0946  0.7113

            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.5309     0.2035    7.52  8.1e-10
ph           -0.1523     0.0303   -5.02  6.6e-06

Residual standard error: 0.282 on 51 degrees of freedom
Multiple R-squared: 0.331, Adjusted R-squared: 0.318
F-statistic: 25.2 on 1 and 51 DF,  p-value: 6.57e-06

Of these, the most interesting is the one in the row labeled ph. This is a test of

$H_0: \beta_1 = 0$
$H_a: \beta_1 \neq 0$

The test statistic

$t = \frac{\hat\beta_1 - 0}{SE}$

is converted to a p-value using a t-distribution with $DFE = n - 2$ degrees of freedom.

t <- -0.1523 / 0.0303; t

[1] -5.03

2 * pt( t, df = 51 )   # p-value

[1] 6.52e-06

We could also estimate this p-value using randomization. If $\beta_1 = 0$, then the model equation becomes

$\mathrm{response} = \beta_0 + \varepsilon$

so the explanatory variable doesn't matter for determining the response. This means we can simulate a world in which the null hypothesis is true by shuffling the explanatory variable:

rand.lakes <- do(1000) * lm(AvgMercury ~ shuffle(ph), data = FloridaLakes)
histogram(~ ph, data = rand.lakes, v = 0)
2 * prop(~ (ph <= -0.1523), data = rand.lakes)   # p-value from randomization distribution

target level: TRUE; other levels: FALSE

TRUE
   0

[Figure: histogram of the randomization distribution of the ph slope, centered at 0]

In this case, none of our 1000 resamples produced such a small value for $\hat\beta_1$. This is consistent with the small p-value computed previously.
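The factor of 2 above relies on knowing that the observed slope lands in the lower tail of the randomization distribution. A slightly more general check compares absolute values; this is a sketch using the same rand.lakes object.

prop(~ (abs(ph) >= abs(-0.1523)), data = rand.lakes)   # two-sided randomization p-value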

9.4 Making Predictions

9.4.1 Point Estimates for Response

It may be very interesting to make predictions when the explanatory variable has some other value, however. There are two ways to do this in R. One uses the predict() function. It is simpler, however, to use the makeFun() function in the mosaic package, so that's the approach we will use here. First, let's build our linear model and store it.

lakes.model <- lm(AvgMercury ~ ph, data = FloridaLakes)
coef(lakes.model)

(Intercept)           ph
      1.531       -0.152

Now let's create a function that will estimate values of AvgMercury for a given value of ph:

mercury <- makeFun(lakes.model)

We can now input a ph value and see what our least squares regression line predicts for the average mercury level in the fish:

mercury(ph = 5)   # estimate AvgMercury when ph is 5

    1
0.769

mercury(ph = 7)   # estimate AvgMercury when ph is 7

    1
0.465

9.4.2 Interval Estimates for the Mean and Individual Response

R can compute two kinds of confidence intervals for the response for a given value.

1. A confidence interval for the mean response for a given explanatory value can be computed by adding interval = 'confidence'.

   mercury(ph = 5, interval = "confidence")

       fit   lwr   upr
   1 0.769 0.645 0.894

2. An interval for an individual response (called a prediction interval to avoid confusion with the confidence interval above) can be computed by adding interval = 'prediction' instead.

mercury(ph = 5, interval = "prediction")

    fit   lwr  upr
1 0.769 0.191 1.35

Prediction intervals

(a) are much wider than confidence intervals,
(b) are very sensitive to the assumption that the population is normal for each value of the predictor, and
(c) are (for a 95% confidence level) a little bit wider than $\hat y \pm 2 SE$, where SE is the residual standard error reported in the summary output.

The prediction interval is a little wider because it takes into account the uncertainty in our estimated slope and intercept as well as the variability of responses around the true regression line. The figure below shows the confidence (dotted) and prediction (dashed) intervals as bands around the regression line.

require(fastR)
xyplot(AvgMercury ~ ph, data = FloridaLakes, panel = panel.lmbands, cex = 0.6, alpha = 0.5)

[Figure: AvgMercury versus ph with confidence and prediction bands around the regression line]

As the graph illustrates, the intervals are narrow near the center of the data and wider near the edges of the data. It is not safe to extrapolate beyond the data (without additional information), since there is no data to let us know whether the pattern of the data extends.

9.5 Regression Cautions

9.5.1 Don't Fit a Line If a Line Doesn't Fit

When doing regression you should always look at the data to see if a line is a good fit. If it is not, it may be that a suitable transformation of one or both of the variables will improve things. Or perhaps some other method is required.

Anscombe's Data

Anscombe illustrated the importance of looking at the data by concocting an interesting data set. Notice how similar the numerical summaries are for these four pairs of variables.

summary(lm(y1 ~ x1, anscombe))

lm(formula = y1 ~ x1, data = anscombe)

Residuals:
    Min      1Q  Median      3Q     Max
-1.9213 -0.4558 -0.0414  0.7094  1.8388

            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.000      1.125    2.67   0.0257
x1             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.667, Adjusted R-squared: 0.629
F-statistic: 18 on 1 and 9 DF,  p-value: 0.00217

summary(lm(y2 ~ x2, anscombe))

lm(formula = y2 ~ x2, data = anscombe)

Residuals:
   Min     1Q Median     3Q    Max
-1.901 -0.761  0.129  0.949  1.269

            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.001      1.125    2.67   0.0258
x2             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.666, Adjusted R-squared: 0.629
F-statistic: 18 on 1 and 9 DF,  p-value: 0.00218

summary(lm(y3 ~ x3, anscombe))

lm(formula = y3 ~ x3, data = anscombe)

Residuals:
   Min     1Q Median     3Q    Max
-1.159 -0.615 -0.230  0.154  3.241

            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.002      1.124    2.67   0.0256
x3             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.666, Adjusted R-squared: 0.629
F-statistic: 18 on 1 and 9 DF,  p-value: 0.00218

summary(lm(y4 ~ x4, anscombe))

lm(formula = y4 ~ x4, data = anscombe)

Residuals:
   Min     1Q Median     3Q    Max
-1.751 -0.831  0.000  0.809  1.839

            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.002      1.124    2.67   0.0256
x4             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.667, Adjusted R-squared: 0.63
F-statistic: 18 on 1 and 9 DF,  p-value: 0.00216

But the plots reveal that very different things are going on.

[Figure: scatterplots of the four Anscombe data sets (y1 vs. x1 through y4 vs. x4), each with its fitted regression line]

9.5.2 Outliers in Regression

Outliers can be very influential in regression, especially in small data sets, and especially if they occur for extreme values of the explanatory variable. Outliers cannot be removed just because we don't like them, but they should be explored to see what is going on (data entry error? special case? etc.).

Some researchers will do leave-one-out analysis, or leave-some-out analysis, where they refit the regression with each data point left out once. If the regression summary changes very little when we do this, the regression line is summarizing information that is shared among all the points relatively equally. But if removing one or a small number of values makes a dramatic change, then we know that that point is exerting a lot of influence over the resulting analysis (a cause for caution).
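A minimal leave-one-out sketch for the Florida Lakes model, using only base R; the object name slopes is just illustrative.

slopes <- sapply(1:nrow(FloridaLakes), function(i) {
  coef(lm(AvgMercury ~ ph, data = FloridaLakes[-i, ]))["ph"]   # refit with lake i removed
})
range(slopes)   # a narrow range suggests that no single lake is driving the fit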

9.5.3 Residual Plots

In addition to scatter plots of the response vs. the explanatory variable, we can also create plots of the residuals of the model vs. either the explanatory variable or the fitted values ($\hat y$). The latter works in a wider variety of settings (including multiple regression and two-way ANOVA).

model1 <- lm(y1 ~ x1, data = anscombe)
model2 <- lm(y2 ~ x2, data = anscombe)
model3 <- lm(y3 ~ x3, data = anscombe)
model4 <- lm(y4 ~ x4, data = anscombe)
xyplot(resid(model1) ~ x1, data = anscombe)
xyplot(resid(model1) ~ fitted(model1), data = anscombe)

[Figure: residuals for model1 plotted against x1 and against the fitted values]

xyplot(resid(model2) ~ x2, data = anscombe)
xyplot(resid(model2) ~ fitted(model2), data = anscombe)

[Figure: residuals for model2 plotted against x2 and against the fitted values]

You can make similar plots for models 3 and 4. The main advantage of these plots is that they use the vertical space in the plot more efficiently. This is especially important when the size of the residuals is small relative to the range of the response variable.

Returning to our Florida lakes, we see that things look reasonable for the model we have been fitting (but stay tuned for the next section).

lake.model <- lm(AvgMercury ~ ph, data = FloridaLakes)
xyplot(AvgMercury ~ ph, data = FloridaLakes, type = c("p", "r"))
xyplot(resid(lake.model) ~ fitted(lake.model), data = FloridaLakes)

[Figure: AvgMercury versus ph with the regression line, and residuals versus fitted values for lake.model]

We are hoping not to see any strong patterns in these residual plots.

9.5.4 Checking the Distribution of the Residuals for Normality

Residuals should be checked to see that the distribution looks approximately normal and that the standard deviation remains consistent across the range of our data (and across time).

histogram(~ resid(lakes.model))
xqqmath(~ resid(lakes.model))

[Figure: histogram and normal-quantile plot of the residuals of lakes.model]

The normal-quantile plot shown above is designed so that the points will fall along a straight line when the underlying distribution is exactly normal. As the distribution becomes less and less normal, the normal-quantile plot will look less and less like a straight line. Similar plots (and some others as well) can also be made with

mplot(lakes.model)

In this case things don't look quite as good as we would like on the normality front. The residuals are a bit too skewed (too many large positive residuals). Using a log transformation on the response (see below) might improve things.

9.5.5 Transformations

Transformations of one or both variables can change the shape of the relationship (from non-linear to linear, we hope) and also the distribution of the residuals. In biological applications, a logarithmic transformation is often useful.

lakes.model2 <- lm(log(AvgMercury) ~ ph, data = FloridaLakes)
xyplot(log(AvgMercury) ~ ph, data = FloridaLakes, type = c("p", "r"))
summary(lakes.model2)

lm(formula = log(AvgMercury) ~ ph, data = FloridaLakes)

Residuals:
    Min      1Q  Median      3Q     Max
-1.6794 -0.4315  0.0994  0.4422  1.3715

            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.7400     0.4819    3.61    7e-04
ph           -0.4022     0.0718   -5.60  8.5e-07

Residual standard error: 0.667 on 51 degrees of freedom
Multiple R-squared: 0.381, Adjusted R-squared: 0.369
F-statistic: 31.4 on 1 and 51 DF,  p-value: 8.54e-07

[Figure: log(AvgMercury) versus ph with the fitted regression line]

If we like, we can show the new model fit overlaid on the original data:

xyplot(AvgMercury ~ ph, data = FloridaLakes, main = "untransformed model", type = c("p", "r"))
xyplot(AvgMercury ~ ph, data = FloridaLakes, main = "log transformed model")
Hg <- makeFun(lakes.model2)             # turn model into a function
plotFun(exp(Hg(ph)) ~ ph, add = TRUE)   # add this function to the plot

[Figure: AvgMercury versus ph, shown with the untransformed linear fit and with the back-transformed fit from the log transformed model]

A logarithmic transformation of AvgMercury improves the normality of the residuals.

histogram(~ resid(lakes.model2))
qqmath(~ resid(lakes.model2))
xyplot(resid(lakes.model2) ~ ph, data = FloridaLakes)
xyplot(resid(lakes.model2) ~ fitted(lakes.model2))

[Figure: histogram and normal-quantile plot of the residuals of lakes.model2]

[Figure: residuals of lakes.model2 plotted against ph and against the fitted values]

The absolute values of the residuals are perhaps a bit larger when the ph is higher (and the fits are smaller), although this is exaggerated somewhat in the plots because there is so little data with very small ph values. If we look at square roots of standardized residuals, this effect is not as pronounced:

mplot(lakes.model2, w = 3)

[[1]]

[Figure: scale-location plot (square roots of standardized residuals versus fitted values) for lakes.model2]

On balance, the log transformation seems to improve the situation and is to be preferred over the original model.
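One practical note on using the transformed model: because lakes.model2 is fit on the log scale, its predictions need to be exponentiated to get back to the mercury scale. A minimal sketch, reusing the Hg function created above; keep in mind that exponentiating a log-scale fit gives something closer to a typical (median-type) mercury level than a mean.

Hg(ph = 6)        # predicted log(AvgMercury) when ph is 6
exp(Hg(ph = 6))   # back-transformed to the original scale, roughly 0.5 for this fit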