
Regression

9.1 Simple Linear Regression

The Least Squares Method

Example. Consider the following small data set.

somedata <- data.frame(x = 1:5, y = c(1, 3, 2, 4, 4))
somedata
  x y
1 1 1
2 2 3
3 3 2
4 4 4
5 5 4

[scatterplot of y versus x]

1. Add a line to the plot that fits the data well. Don't do any calculations, just add the line.
2. Estimate the slope and intercept of your line by reading them off of the graph.
3. Now estimate the residuals for each point relative to your line:
   residual = observed response − predicted response
4. Compute the sum of the squared residuals, SSE: square each residual and add them up.

For example, suppose we select the line that passes through (0, 1) and (5, 4). The equation for this line is y = 1 + 0.6x, and it looks like a pretty good fit:

my.y <- makeFun(1 + 0.6 * x ~ x)
xyplot(y ~ x, data = somedata, xlim = c(0, 6), ylim = c(0, 5)) +
  plotFun(my.y(x) ~ x, col = "gray50")

[scatterplot of y versus x with the line y = 1 + 0.6x overlaid]

The residuals for this function are

resids <- with(somedata, y - my.y(x)); resids
[1] -0.6  0.8 -0.8  0.6  0.0

and SSE is

sum(resids^2)
[1] 2

If your line is a good fit, then SSE will be small. The least squares regression line is the line that has the smallest possible SSE.¹ The lm() function will find this best fitting line for us.

model1 <- lm(y ~ x, data = somedata); model1

Call:
lm(formula = y ~ x, data = somedata)

Coefficients:
(Intercept)            x
        0.7          0.7

This says that the equation of the best fit line is

ŷ = 0.7 + 0.7x

¹ Using calculus, it is easy to derive formulas for the slope and intercept of this line. But we will use software to do these computations. All statistical packages can perform these calculations for you.

xyplot(y ~ x, data = somedata, type = c('p', 'r')) +
  plotFun(my.y(x) ~ x, col = "gray50")   # let's add our previous attempt, too

[scatterplot showing the least squares line and, in gray, our previous attempt]

We can compute SSE using the resid() function.

SSE <- sum(resid(model1)^2); SSE
[1] 1.9

As we see, this is a better fit than our first attempt, at least according to the least squares criterion. It will be better than any other attempt: it is the least squares regression line.

Properties of the Least Squares Regression Line

For a line with equation y = β̂₀ + β̂₁x, the residuals are

e_i = y_i − (β̂₀ + β̂₁x_i)

and the sum of the squares of the residuals is

SSE = Σ e_i² = Σ (y_i − (β̂₀ + β̂₁x_i))²

Simple calculus (which we won't do here) allows us to compute the best β̂₀ and β̂₁ possible. These best values define the least squares regression line. We always compute these values using software, but it is good to note that the least squares line satisfies two very nice properties:

1. The point (x̄, ȳ) is on the line. This means that ȳ = β̂₀ + β̂₁x̄ (and β̂₀ = ȳ − β̂₁x̄).
2. The slope of the line is β̂₁ = r · (s_y / s_x), where r is the correlation coefficient:

   r = (1 / (n − 1)) Σ [(x_i − x̄) / s_x] · [(y_i − ȳ) / s_y]

Since we have a point and the slope, it is easy to compute the equation for the line if we know x̄, s_x, ȳ, s_y, and r.
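Both properties are easy to check numerically. Here is a small sketch (not part of the original example) verifying them for model1 with base R:

b <- coef(model1)
# property 1: the line passes through (mean(x), mean(y))
with(somedata, b[1] + b[2] * mean(x) - mean(y))   # essentially 0
# property 2: slope = r * s_y / s_x
with(somedata, cor(x, y) * sd(y) / sd(x))         # matches coef(model1)[2], i.e. 0.7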

Explanatory and Response Variables Matter

It is important that the explanatory variable be the x variable and the response variable be the y variable when doing regression. If we reverse the roles of y and x, we do not get the same model. This is because the residuals are measured vertically (in the y direction).

Example: Florida Lakes

Does the amount of mercury found in fish depend on the pH level of the lake? Fish were captured and pH measured in a number of Florida lakes. We can use this data to explore this question.

xyplot(AvgMercury ~ pH, data = FloridaLakes, type = c("p", "r"))
lm(AvgMercury ~ pH, data = FloridaLakes)

Call:
lm(formula = AvgMercury ~ pH, data = FloridaLakes)

Coefficients:
(Intercept)           pH
      1.531       -0.152

[scatterplot of AvgMercury versus pH with the least squares line]

You can get terser output with

coef(lm(AvgMercury ~ pH, data = FloridaLakes))   # just show me the coefficients
(Intercept)          pH
      1.531      -0.152

From these coefficients, we see that our regression equation is

AvgMercury = 1.531 + (−0.152) · pH

So, for example, this suggests that the average average mercury level (yes, that's two averages²) for a lake with a pH of 6 is approximately

AvgMercury = 1.531 + (−0.152) · 6.0 = 0.619

² For each lake, the average mercury level is calculated. Different lakes will have different average mercury levels. Our regression line is estimating the average of these averages for lakes with a certain pH.
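As a quick check, the same prediction can be computed directly from the stored coefficients (a minimal sketch; like all the Florida lakes code here, it assumes the FloridaLakes data set is loaded):

b <- coef(lm(AvgMercury ~ pH, data = FloridaLakes))
b["(Intercept)"] + b["pH"] * 6   # estimated average mercury level at pH 6; about 0.62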

Using makeFun(), we can automate computing the estimated response:

Mercury.model <- lm(AvgMercury ~ pH, data = FloridaLakes)
estimated.AvgMercury <- makeFun(Mercury.model)
estimated.AvgMercury(6)
    1
0.617

Example: Inkjet Printers

Here's another example, in which we want to predict the price of an inkjet printer from the number of pages it prints per minute (ppm).

xyplot(Price ~ PPM, data = InkjetPrinters, type = c("p", "r"))
lm(Price ~ PPM, data = InkjetPrinters)

Call:
lm(formula = Price ~ PPM, data = InkjetPrinters)

Coefficients:
(Intercept)          PPM
     -94.22        90.88

[scatterplot of Price versus PPM with the least squares line]

You can get terser output with

coef(lm(Price ~ PPM, data = InkjetPrinters))
(Intercept)         PPM
     -94.22       90.88

So our regression equation is

Price = −94.22 + 90.88 · PPM

For example, this suggests that the average price for inkjet printers that print 3 pages per minute is

Price = −94.22 + 90.88 · 3.0 = 178.42
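The same makeFun() idea works for the printers; a sketch (assuming the InkjetPrinters data set is loaded):

price <- makeFun(lm(Price ~ PPM, data = InkjetPrinters))
price(PPM = 3)   # estimated average price for a printer that prints 3 pages per minute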

9.2 Parameter Estimates

Interpreting the Coefficients

The coefficients of the linear model tell us how to construct the linear function that we use to estimate response values, but they can be interesting in their own right as well.

The intercept β₀ is the mean response value when the explanatory variable is 0. This may or may not be interesting. Often β₀ is not interesting because we are not interested in the value of the response variable when the predictor is 0. (That might not even be a possible value for the predictor.) Furthermore, if we do not collect data with values of the explanatory variable near 0, then we will be extrapolating from our data when we talk about the intercept.

The estimate for β₁, on the other hand, is nearly always of interest. The slope coefficient β₁ tells us how quickly the response variable changes per unit change in the predictor. This is an interesting value in many more situations. Furthermore, when β₁ = 0, our model says that the average response does not depend on the predictor at all. So when 0 is contained in the confidence interval for β₁, or we cannot reject H₀: β₁ = 0, we do not have sufficient evidence to be convinced that our predictor is of any use in predicting the response.

Since β̂₁ = r · (s_y / s_x), testing whether β₁ = 0 is equivalent to testing whether the correlation coefficient ρ = 0.

Estimating σ

There is one more parameter in our model that we have been mostly ignoring so far: σ (or, equivalently, σ²). This is the parameter that describes how tightly things should cluster around the regression line. We can estimate σ² from our residuals:

σ̂² = MSE = (Σᵢ eᵢ²) / (n − 2)
σ̂ = RMSE = √MSE = √[(Σᵢ eᵢ²) / (n − 2)]

The acronyms MSE and RMSE stand for Mean Squared Error and Root Mean Squared Error. The numerator in these expressions is the sum of the squares of the residuals, SSE = Σ eᵢ². This is precisely the quantity that we were minimizing to get our least squares fit. So

MSE = SSE / DFE

where DFE = n − 2 is the degrees of freedom associated with the estimation of σ² in a simple linear model. We lose two degrees of freedom when we estimate β₀ and β₁, just like we lost one degree of freedom when we had to estimate µ in order to compute a sample variance.

RMSE = √MSE is listed in the summary output for the linear model as the residual standard error because it is the estimated standard deviation of the error terms in the model.
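These estimates are easy to compute directly from the residuals. A short sketch, using Mercury.model from above:

e <- resid(Mercury.model)
SSE <- sum(e^2)               # sum of squared residuals
MSE <- SSE / (length(e) - 2)  # estimate of sigma^2; DFE = n - 2 = 51 here
sqrt(MSE)                     # RMSE, about 0.28; the "residual standard error"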

summary(Mercury.model)

Call:
lm(formula = AvgMercury ~ pH, data = FloridaLakes)

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.53092    0.20349    7.52 8.06e-10
pH           -0.15230    0.03031   -5.02 6.57e-06

Residual standard error: 0.2816 on 51 degrees of freedom
Multiple R-squared: 0.331, Adjusted R-squared: 0.318
F-statistic: 25.2 on 1 and 51 DF, p-value: 6.57e-06

We will learn about other parts of this summary output shortly. Much is known about the estimator σ̂², including that σ̂² is unbiased (on average it is σ²) and that its sampling distribution is related to a Chi-Squared distribution with n − 2 degrees of freedom.

ANOVA for Regression and the Correlation Coefficient

There is another connection between the correlation coefficient and the least squares regression line. We can think about regression as a way to analyze the variability in the response.

anova(lm(AvgMercury ~ pH, data = FloridaLakes))

Analysis of Variance Table

Response: AvgMercury
          Df Sum Sq Mean Sq F value  Pr(>F)
pH         1   2.00  2.0024    25.2 6.57e-06
Residuals 51   4.05  0.0794

This is a lot like the ANOVA tables we have seen before. This time:

SST = Σ (y − ȳ)²
SSE = Σ (y − ŷ)²
SSM = Σ (ŷ − ȳ)²
SST = SSM + SSE

As before, when SSM is large and SSE is small, the model (ŷ = β̂₀ + β̂₁x) explains a lot of the variability and little is left unexplained (SSE). On the other hand, if SSM is small and SSE is large, then the model explains only a little of the variability and most of it is due to things not explained by the model. The percentage of explained variability is denoted r² or R²:

R² = SSM / SST = SSM / (SSM + SSE)

For the Florida lakes study, we see that

SSM = 2.00
SSE = 4.05
SST = 2.00 + 4.05 = 6.05
R² = SSM / SST = 2.00 / 6.05 = 0.331
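We can verify this decomposition numerically (a sketch using Mercury.model and the FloridaLakes data from above):

y <- FloridaLakes$AvgMercury
yhat <- fitted(Mercury.model)
SSM <- sum((yhat - mean(y))^2)
SSE <- sum((y - yhat)^2)
SST <- sum((y - mean(y))^2)
c(SSM = SSM, SSE = SSE, SST = SST)   # SST = SSM + SSE
SSM / SST                            # R-squared, about 0.331
cor(FloridaLakes$pH, y)^2            # the squared correlation coefficient agrees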

This number is listed as Multiple R-squared on the summary output. So pH explains roughly 1/3 of the variability in mercury levels. The other two thirds of the variability in mercury levels is due to other things. (We can think of many things that might matter: size of the lake, depth of the lake, types of fish in the lake, types of plants in the lake, proximity to industrialization: highways, streets, manufacturing plants, etc.) More complex studies might investigate the effects of several such factors simultaneously.

The correlation coefficient

The square root of R² (with a sign to indicate whether the association between the explanatory and response variables is positive or negative) is the correlation coefficient, R (or r). As a reminder, here are some important facts about R:

1. R is always between −1 and 1.
2. R is 1 or −1 only if all the dots fall exactly on a line.
3. If the relationship between the explanatory and response variables is not roughly linear, then R is not a very useful number. (And simple linear regression is not very useful either.)
4. For linear relationships, R is a measure of the strength of the relationship. If R is close to 1 or −1, the linear association is strong. If it is closer to 0, the linear association is weak (with lots of scatter about the best fit line).
5. R is unitless: if we change the units of our measurements (from English to metric, for example), it will not affect the value of R.

9.3 Confidence Intervals and Hypothesis Tests

Bootstrap

So how good are these estimates? We would like to have interval estimates rather than just point estimates. One way to get interval estimates for the coefficients is to use the bootstrap.

Florida Lakes

boot.lakes <- do(1000) * lm(AvgMercury ~ pH, data = resample(FloridaLakes))
head(boot.lakes, 2)

  Intercept     pH  sigma r.squared

dotplot(~ pH, data = boot.lakes, width = 0.003)
dotplot(~ Intercept, data = boot.lakes, width = 0.02)
histogram(~ pH, data = boot.lakes, width = 0.01)
histogram(~ Intercept, data = boot.lakes, width = 0.1)

[dotplots and histograms of the bootstrap distributions of the slope (pH) and the intercept]

cdata(0.95, pH, boot.lakes)
      low        hi central.p

cdata(0.95, Intercept, boot.lakes)
      low        hi central.p
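cdata() is just reading off percentiles of the bootstrap distribution; if you prefer base R, quantile() gives the same 95% percentile interval (a sketch; bootstrap values change from run to run):

quantile(boot.lakes$pH, c(0.025, 0.975))   # 95% percentile interval for the slope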

Inkjet Printers

boot.printers <- do(1000) * lm(Price ~ PPM, data = resample(InkjetPrinters))
head(boot.printers, 2)

  Intercept    PPM sigma r.squared

histogram(~ PPM, data = boot.printers)
histogram(~ Intercept, data = boot.printers)

cdata(0.95, PPM, boot.printers)
      low        hi central.p

cdata(0.95, Intercept, boot.printers)
      low        hi central.p

[histograms of the bootstrap distributions of the slope (PPM) and the intercept]

Using Standard Errors

We can also compute confidence intervals using

estimate ± t* · SE

For t* we use n − 2 degrees of freedom. (The other two degrees of freedom go for estimating the intercept and the slope.) This (and much of the regression analysis) is based on the assumptions that

1. the mean values of y (in the population) for each value of x lie along a line;
2. individual values of y (in the population) for each value of x are normally distributed; and
3. the standard deviations of these normal distributions are the same no matter what x is.

As before, we have two ways we can estimate the standard errors.

1. Compute the standard deviation of the appropriate bootstrap distribution. This should work well provided our bootstrap distribution is something resembling a normal distribution.

2. Use formulas to compute the standard errors from summary statistics. The formulas for SE are a bit more complicated in this case, but R will compute the standard error estimates for us, so we don't need to know the formulas.

Florida Lakes

The t* value is based on DFE, the degrees of freedom for the errors (residuals). For simple linear regression, the error degrees of freedom is n − 2 = 51. For a 95% confidence interval, we first compute t*:

t.star <- qt(0.975, df = 51)
t.star
[1] 2.01

Using the bootstrap distribution. To get the standard errors from our bootstrap distribution, we can use sd().

sd(~ Intercept, data = boot.lakes)
[1] ...
sd(~ pH, data = boot.lakes)
[1] 0.0257

The confint() function can be applied to bootstrap distributions to make this even simpler. We even have a choice between (a) using the standard error as estimated by taking the standard deviation of the bootstrap distribution or (b) using the percentile method:

confint(boot.lakes)   # 95% CIs for each parameter

       name lower upper level method estimate margin.of.error
1 Intercept              0.95 stderr
2        pH              0.95 stderr
3     sigma              0.95 stderr
4 r.squared              0.95 stderr

confint(boot.lakes, method = "perc")   # 95% CIs for each parameter; percentile method

       name lower upper level   method
1 Intercept              0.95 quantile
2        pH              0.95 quantile
3     sigma              0.95 quantile
4 r.squared              0.95 quantile

confint(boot.lakes, "pH", level = 0.98, method = c("stderr", "perc"))   # 98% CI just for pH, both methods

  name lower upper level   method estimate margin.of.error
1   pH              0.98   stderr
2   pH              0.98 quantile       NA              NA

Using formulas for standard error. The summary output for a linear model includes the formula-based standard error estimates for each parameter.

summary(lm(AvgMercury ~ pH, data = resample(FloridaLakes)))

Call:
lm(formula = AvgMercury ~ pH, data = resample(FloridaLakes))

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)    1.630                    ...e-08
pH            -0.153

Residual standard error: ... on 51 degrees of freedom
Multiple R-squared: 0.248, Adjusted R-squared: 0.233
F-statistic: 16.8 on 1 and 51 DF, p-value: 0.00015

(Since the model here is fit to a resample of the data, the estimates differ slightly from the fit to the original data.) So we get the following confidence intervals for the intercept,

1.63 ± t* · SE
1.63 ± 0.425

and the slope,

−0.153 ± t* · SE
−0.153 ± 0.064

The confint() function can also be used to simplify these calculations.

confint(lm(AvgMercury ~ pH, data = resample(FloridaLakes)))   # 95% CI

            2.5 % 97.5 %
(Intercept)
pH

confint(lm(AvgMercury ~ pH, data = resample(FloridaLakes)), level = 0.99)   # 99% CI

            0.5 % 99.5 %
(Intercept)
pH
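The hand computation above can also be scripted. Here is a sketch using the original FloridaLakes data (rather than a resample), pulling the estimates and standard errors from the summary table:

fit <- lm(AvgMercury ~ pH, data = FloridaLakes)
ct <- coef(summary(fit))      # columns: Estimate, Std. Error, t value, Pr(>|t|)
t.star <- qt(0.975, df = 51)
cbind(lower = ct[, "Estimate"] - t.star * ct[, "Std. Error"],
      upper = ct[, "Estimate"] + t.star * ct[, "Std. Error"])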

Inkjet Printers

summary(lm(Price ~ PPM, data = resample(InkjetPrinters)))

Call:
lm(formula = Price ~ PPM, data = resample(InkjetPrinters))

Residuals:
   Min     1Q Median     3Q    Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)
PPM                                     ...e-07

Residual standard error: 50.7 on 18 degrees of freedom
Multiple R-squared: 0.799, Adjusted R-squared: 0.788
F-statistic: 71.4 on 1 and 18 DF, p-value: 1.11e-07

confint(lm(Price ~ PPM, data = resample(InkjetPrinters)), "PPM")

    2.5 % 97.5 %
PPM

confint(boot.printers, "PPM")

  name lower upper level method estimate margin.of.error
1  PPM              0.95 stderr

Hypothesis Tests

The summary of linear models includes the results of some hypothesis tests:

summary(lm(AvgMercury ~ pH, data = FloridaLakes))

Call:
lm(formula = AvgMercury ~ pH, data = FloridaLakes)

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.53092    0.20349    7.52 8.06e-10
pH           -0.15230    0.03031   -5.02 6.57e-06

Residual standard error: 0.2816 on 51 degrees of freedom
Multiple R-squared: 0.331, Adjusted R-squared: 0.318
F-statistic: 25.2 on 1 and 51 DF, p-value: 6.57e-06

Of these, the most interesting is the one in the row labeled pH. This is a test of

H₀: β₁ = 0
Hₐ: β₁ ≠ 0

The test statistic

t = (β̂₁ − 0) / SE

is converted to a p-value using a t-distribution with DFE = n − 2 degrees of freedom.

t <- -0.15230 / 0.03031; t
[1] -5.02
2 * pt(t, df = 51)   # p-value
[1] 6.52e-06

We could also estimate this p-value using randomization. If β₁ = 0, then the model equation becomes

response = β₀ + ε

so the explanatory variable doesn't matter for determining the response. This means we can simulate a world in which the null hypothesis is true by shuffling the explanatory variable:

rand.lakes <- do(1000) * lm(AvgMercury ~ shuffle(pH), data = FloridaLakes)
histogram(~ pH, data = rand.lakes, v = 0)
2 * prop(~ (pH <= -0.152), data = rand.lakes)   # p-value from randomization distribution

target level: TRUE; other levels: FALSE
TRUE
   0

[histogram of the randomization distribution of the slope, with a vertical line at 0]

In this case, none of our 1000 resamples produced such a small value for β̂₁. This is consistent with the small p-value computed previously.

15 Regression Making Predictions Point Estimates for Response It may be very interesting to make predictions when the explanatory variable has some other value, however. There are two ways to do this in R. One uses the predict() function. It is simpler, however, to use the makefun() function in the mosaic package, so that s the approach we will use here. First, let s build our linear model and store it. lakes.model <- lm(avgmercury ph, data = FloridaLakes) coef(lakes.model) (Intercept) ph Now let s create a function that will estimate values ofavgmercury for a given value ofph: mercury <- makefun(lakes.model) We can now input a ph value and see what our least squares regression line predicts for the average mercury level in the fish: mercury(ph = 5) # estimate AvgMercury when ph is mercury(ph = 7) # estimate AvgMercury when ph is Interval Estimates for the Mean and Individual Response R can compute two kinds of confidence intervals for the response for a given value 1. A confidence interval for the mean response for a given explanatory value can be computed by adding interval='confidence'. mercury(ph = 5, interval = "confidence") fit lwr upr An interval for an individual response(called a prediction interval to avoid confusion with the confidence interval above) can be computed by adding interval='prediction' instead.

Prediction intervals

(a) are much wider than confidence intervals,
(b) are very sensitive to the assumption that the population is normal for each value of the predictor, and
(c) are (for a 95% confidence level) a little bit wider than ŷ ± 2 · SE, where SE is the residual standard error reported in the summary output. The prediction interval is a little wider because it takes into account the uncertainty in our estimated slope and intercept as well as the variability of responses around the true regression line.

The figure below shows the confidence (dotted) and prediction (dashed) intervals as bands around the regression line.

require(fastR)
xyplot(AvgMercury ~ pH, data = FloridaLakes, panel = panel.lmbands, cex = 0.6, alpha = 0.5)

[scatterplot of AvgMercury versus pH with confidence and prediction bands around the regression line]

As the graph illustrates, the intervals are narrow near the center of the data and wider near the edges of the data. It is not safe to extrapolate beyond the data (without additional information), since there is no data to let us know whether the pattern of the data extends.
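Point (c) above is easy to check numerically; a quick sketch using mercury() and the residual standard error (0.2816) from the summary output:

mercury(pH = 5) + c(-2, 2) * 0.2816   # rough 95% prediction interval at pH 5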

9.5 Regression Cautions

Don't Fit a Line If a Line Doesn't Fit

When doing regression you should always look at the data to see if a line is a good fit. If it is not, it may be that a suitable transformation of one or both of the variables will improve things. Or perhaps some other method is required.

Anscombe's Data

Anscombe illustrated the importance of looking at the data by concocting an interesting data set. Notice how similar the numerical summaries are for these four pairs of variables.

summary(lm(y1 ~ x1, anscombe))

Call:
lm(formula = y1 ~ x1, data = anscombe)

Residuals:
     Min       1Q   Median       3Q      Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.000      1.125    2.67   0.0257
x1             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.667, Adjusted R-squared: 0.629
F-statistic: 18 on 1 and 9 DF, p-value: 0.00217

summary(lm(y2 ~ x2, anscombe))

Call:
lm(formula = y2 ~ x2, data = anscombe)

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.001      1.125    2.67   0.0258
x2             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.666, Adjusted R-squared: 0.629
F-statistic: 18 on 1 and 9 DF, p-value: 0.00218

summary(lm(y3 ~ x3, anscombe))

Call:
lm(formula = y3 ~ x3, data = anscombe)

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.002      1.124    2.67   0.0256
x3             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.666, Adjusted R-squared: 0.629
F-statistic: 18 on 1 and 9 DF, p-value: 0.00218

summary(lm(y4 ~ x4, anscombe))

Call:
lm(formula = y4 ~ x4, data = anscombe)

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)    3.002      1.124    2.67   0.0256
x4             0.500      0.118    4.24   0.0022

Residual standard error: 1.24 on 9 degrees of freedom
Multiple R-squared: 0.667, Adjusted R-squared: 0.63
F-statistic: 18 on 1 and 9 DF, p-value: 0.00216

But the plots reveal that very different things are going on.

[four scatterplots of Anscombe's data, each with essentially the same fitted line]
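A quick way to see this is to plot each pair with its least squares line, in the same lattice style used throughout these notes (the anscombe data frame is built into R):

xyplot(y1 ~ x1, data = anscombe, type = c("p", "r"))
xyplot(y2 ~ x2, data = anscombe, type = c("p", "r"))
xyplot(y3 ~ x3, data = anscombe, type = c("p", "r"))
xyplot(y4 ~ x4, data = anscombe, type = c("p", "r"))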

Outliers in Regression

Outliers can be very influential in regression, especially in small data sets, and especially if they occur for extreme values of the explanatory variable. Outliers cannot be removed just because we don't like them, but they should be explored to see what is going on (data entry error? special case? etc.).

Some researchers will do leave-one-out analysis, or leave-some-out analysis, where they refit the regression with each data point left out once. If the regression summary changes very little when we do this, the regression line is summarizing information that is shared among all the points relatively equally. But if removing one or a small number of values makes a dramatic change, then we know that that point is exerting a lot of influence over the resulting analysis (a cause for caution).
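The notes stop short of showing code for this; here is a minimal sketch of a leave-one-out check for the Florida lakes model (base R, refitting with each lake dropped in turn):

loo.slopes <- sapply(1:nrow(FloridaLakes), function(i) {
    coef(lm(AvgMercury ~ pH, data = FloridaLakes[-i, ]))["pH"]   # slope without lake i
})
range(loo.slopes)   # a narrow range means no single lake dominates the fit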

Residual Plots

In addition to scatter plots of the response vs. the explanatory variable, we can also create plots of the residuals of the model vs. either the explanatory variable or the fitted values (ŷ). The latter works in a wider variety of settings (including multiple regression and two-way ANOVA).

model1 <- lm(y1 ~ x1, data = anscombe)
model2 <- lm(y2 ~ x2, data = anscombe)
model3 <- lm(y3 ~ x3, data = anscombe)
model4 <- lm(y4 ~ x4, data = anscombe)
xyplot(resid(model1) ~ x1, data = anscombe)
xyplot(resid(model1) ~ fitted(model1), data = anscombe)

[residuals for model 1 vs. x1 and vs. fitted values]

xyplot(resid(model2) ~ x2, data = anscombe)
xyplot(resid(model2) ~ fitted(model2), data = anscombe)

[residuals for model 2 vs. x2 and vs. fitted values]

You can make similar plots for models 3 and 4. The main advantage of these plots is that they use the vertical space in the plot more efficiently. This is especially important when the size of the residuals is small relative to the range of the response variable.

Returning to our Florida lakes, we see that things look reasonable for the model we have been fitting (but stay tuned for the next section).

lake.model <- lm(AvgMercury ~ pH, data = FloridaLakes)
xyplot(AvgMercury ~ pH, data = FloridaLakes, type = c("p", "r"))
xyplot(resid(lake.model) ~ fitted(lake.model), data = FloridaLakes)

[scatterplot with regression line; residuals vs. fitted values]

We are hoping not to see any strong patterns in these residual plots.

Checking the Distribution of the Residuals for Normality

Residuals should be checked to see that the distribution looks approximately normal and that the standard deviation remains consistent across the range of our data (and across time).

histogram(~ resid(lakes.model))
xqqmath(~ resid(lakes.model))

[histogram and normal-quantile plot of the residuals]

The normal-quantile plot shown above is designed so that the points will fall along a straight line when the underlying distribution is exactly normal. As the distribution becomes less and less normal, the normal-quantile plot will look less and less like a straight line. Similar plots (and some others as well) can also be made with

mplot(lakes.model)

In this case things don't look quite as good as we would like on the normality front. The residuals are a bit too skewed (too many large positive residuals). Using a log transformation on the response (see below) might improve things.

Transformations

Transformations of one or both variables can change the shape of the relationship (from non-linear to linear, we hope) and also the distribution of the residuals. In biological applications, a logarithmic transformation is often useful.

lakes.model2 <- lm(log(AvgMercury) ~ pH, data = FloridaLakes)
xyplot(log(AvgMercury) ~ pH, data = FloridaLakes, type = c("p", "r"))
summary(lakes.model2)

Call:
lm(formula = log(AvgMercury) ~ pH, data = FloridaLakes)

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)                             ...e-04
pH                                -5.60 8.54e-07

Residual standard error: ... on 51 degrees of freedom
Multiple R-squared: 0.381, Adjusted R-squared: 0.369
F-statistic: 31.4 on 1 and 51 DF, p-value: 8.54e-07

[scatterplot of log(AvgMercury) versus pH with the least squares line]

If we like, we can show the new model fit overlaid on the original data:

xyplot(AvgMercury ~ pH, data = FloridaLakes, main = "untransformed model", type = c("p", "r"))
xyplot(AvgMercury ~ pH, data = FloridaLakes, main = "log transformed model")
Hg <- makeFun(lakes.model2)             # turn model into a function
plotFun(exp(Hg(pH)) ~ pH, add = TRUE)   # add this function to the plot

[side-by-side plots: the untransformed model with its least squares line, and the log transformed model shown as a curve on the original scale]
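One payoff of the log scale is that the slope has a multiplicative interpretation: each one-unit increase in pH multiplies the back-transformed prediction by exp(β̂₁). A quick sketch of how to extract that factor:

exp(coef(lakes.model2)["pH"])   # multiplicative change in predicted AvgMercury per unit of pH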

A logarithmic transformation of AvgMercury improves the normality of the residuals.

histogram(~ resid(lakes.model2))
qqmath(~ resid(lakes.model2))
xyplot(resid(lakes.model2) ~ pH, data = FloridaLakes)
xyplot(resid(lakes.model2) ~ fitted(lakes.model2))

[histogram and normal-quantile plot of the residuals from the transformed model]

[residuals from the transformed model vs. pH and vs. fitted values]

The absolute values of the residuals are perhaps a bit larger when the pH is higher (and the fits are smaller), although this is exaggerated somewhat in the plots because there is so little data with very small pH values. If we look at square roots of standardized residuals, this effect is not as pronounced:

mplot(lakes.model2, w = 3)

[[1]]

[scale-location plot: square roots of standardized residuals vs. fitted values]

On balance, the log transformation seems to improve the situation and is to be preferred over the original model.
