Math 143 Correlation and Regression

Review: We are looking at methods to investigate two or more variables at once.

bivariate: two variables are measured on each individual
multivariate: more than two variables are measured on each individual

The statistical procedures used depend upon the kind of variables (categorical or quantitative):

- Chi-square deals with two categorical variables.
- Correlation/regression deals with two quantitative variables.
- ANOVA deals with one categorical (explanatory) variable and one quantitative (response) variable.

Actually, each of these methods can be extended to deal with multivariate situations. Sometimes additional descriptions are used to distinguish the simpler bivariate versions from their related multivariate versions:

Bivariate:     chi-square for two-way tables          simple (linear) regression    one-way ANOVA
Multivariate:  chi-square for three-way tables, etc.  multiple (linear) regression  two-way ANOVA

We will focus our attention on the bivariate cases, but will talk a little about the multivariate cases.

We use regression and/or correlation when we have two quantitative variables, and we want to see if there is an association between these two variables. Two variables (reflecting two measurements for the same individual) are associated if knowing the value of one variable tells us something about the value of the other.

Here's the plan:

1. Start with graphical displays.
2. Move on to numerical summaries.
3. Then look for patterns and deviations from those patterns, and use a mathematical model to describe regular patterns.
4. Finally, use statistical inference to draw conclusions about the relationship between the two quantitative variables in the population from their relationship in a sample.

Scatter Plots

Scatter plots are a graphical display of the relationship between two quantitative variables:

- one dot per individual
- explanatory variable (x) on the horizontal axis, response variable (y) on the vertical axis

Example. Five used cars were selected randomly. Their ages and prices were recorded as follows:

Observation   Age (yrs)   Price ($1000)
     1            2            6.5
     2            6            3.7
     3            3            6.1
     4            4            4.5
     5           10            2.0

Plot the used car data on a scatter plot. From the plot, how would you describe the relationship between age and price of a used car?

As usual, when looking at scatter plots, we are looking to see the overall pattern and any striking deviations from that pattern (e.g., outliers). The pattern in a scatter plot can often be summarized in terms of

- Form
- Strength
- Direction

Positive association: above-average values of one variable tend to occur with above-average values of the other.
Negative association: above-average values of one variable tend to occur with below-average values of the other.

If you have more than one group, points may be plotted in different colors, or with different letters, to see if group affects the relationship between the two continuous variables.
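Where software is handy, the scatter plot takes one line. Here is a minimal R sketch for the used-car data (R is the software whose output appears at the end of these notes):

    # Used-car data from the table above: age in years, price in $1000s
    age   <- c(2, 6, 3, 4, 10)
    price <- c(6.5, 3.7, 6.1, 4.5, 2.0)
    plot(age, price, xlab = "Age (yrs)", ylab = "Price ($1000)")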

The Correlation Coefficient: r

Our goal is to come up with a number that measures the direction and strength of the linear association between two quantitative variables. The correlation coefficient is defined as

    r = \frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{s_x} \right) \left( \frac{y_i - \bar{y}}{s_y} \right) = \frac{1}{n-1} \sum_{i=1}^{n} z_{x_i} z_{y_i}

That is, it is the sum of products of z-scores, scaled by the size of the data set (n - 1). Let's see how to use this to compute the correlation coefficient from data. Then we will figure out how this number describes the strength and direction of a linear association between two variables.

Golf Scores

[Scatter plot omitted: round 2 scores (vertical axis, 80 to 100) against round 1 scores (horizontal axis, 80.0 to 105.0) for 12 players.]

The standard deviation was 7.83 for round 1 and 7.84 for round 2. We can use this to calculate the z-scores for each value and then the correlation coefficient:

round1  round2      z1      z2   z1*z2
======  ======  ======  ======  ======
    89      94   -0.09    0.77   -0.07
    90      85    0.04   -0.38   -0.02
    87      89   -0.34    0.13   -0.04
    95      89    0.68    0.13    0.09
    86      81   -0.47   -0.89    0.42
    81      76   -1.11   -1.53    1.69
   102     107    1.57    2.42    3.82
   105      89    1.96    0.13    0.25
    83      87   -0.85   -0.13    0.11
    88      91   -0.21    0.38   -0.08
    91      88    0.17    0.00    0.00
    79      80   -1.36   -1.02    1.39
======  ======  ======  ======  ======
sum 1076    1056    0.00    0.00    7.56

mean(round1) = 89.667    mean(round2) = 88
sd(round1) = 7.83        sd(round2) = 7.84

The correlation coefficient is r = (1/11)(7.56) = 0.687, since there are 12 pairs of data values and 11 = 12 - 1.
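To see the arithmetic, here is a minimal R sketch that rebuilds r from z-scores for the golf data and checks it against the built-in cor() function:

    # Golf scores for 12 players in rounds 1 and 2
    round1 <- c(89, 90, 87, 95, 86, 81, 102, 105, 83, 88, 91, 79)
    round2 <- c(94, 85, 89, 89, 81, 76, 107, 89, 87, 91, 88, 80)

    # z-scores for each variable
    z1 <- (round1 - mean(round1)) / sd(round1)
    z2 <- (round2 - mean(round2)) / sd(round2)

    # r is the sum of products of z-scores, divided by n - 1
    n <- length(round1)
    sum(z1 * z2) / (n - 1)   # 0.687, matching the hand computation
    cor(round1, round2)      # built-in check gives the same value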

Here is the Minitab regression output for this example:

The regression equation is
round2 = 26.3320 + 0.687747 round1

S = 5.97399   R-Sq = 47.2%   R-Sq(adj) = 41.9%

Analysis of Variance
Source      DF       SS       MS        F      P
Regression   1  319.115  319.115  8.94166  0.014
Error       10  356.885   35.689
Total       11  676.000

Notice that r is not given directly, but r^2 is given (it's called R-Sq). There is also a lot of other stuff in the output. We will learn what those other numbers mean a little later.

Another Example

Here is a scatter plot for two scores (fake data):

[Scatter plot omitted: score2 (vertical axis, 40 to 80) against score1 (horizontal axis, 36.0 to 66.0) for 10 points.]

The standard deviation of score1 is 9.89 and of score2 is 15.85. We can use this to calculate the z-scores and then the correlation coefficient as before:

score1  score2      Z1      Z2   Z1*Z2
======  ======  ======  ======  ======
    46      72   -0.50    0.35   -0.18
    47      69   -0.39    0.16   -0.06
    52      72    0.11    0.35    0.04
    44      39   -0.70   -1.73    1.21
    55      67    0.41    0.04    0.02
    67      87    1.63    1.30    2.12
    49      62   -0.19   -0.28    0.05
    42      51   -0.90   -0.97    0.87
    39      54   -1.20   -0.78    0.94
    68      91    1.73    1.55    2.68
======  ======  ======  ======  ======
sum  509     664    0.00    0.00    7.69

The correlation coefficient is r = (1/9)(7.69) = 0.854, since there are 10 pairs of data values and 9 = 10 - 1.

Here is the Minitab regression output for this example:

The regression equation is
score2 = -3.25020 + 1.36837 score1

S = 8.73901   R-Sq = 73.0%   R-Sq(adj) = 69.6%

Analysis of Variance
Source      DF       SS       MS        F      P
Regression   1  1649.44  1649.44  21.5979  0.002
Error        8   610.96    76.37
Total        9  2260.40

So how does this correlation coefficient work? Let's think first about when it will be positive and when it will be negative. The correlation coefficient will be positive if we have lots of positive products and few negative products in our sum. Positive products occur when both z-scores are positive or both are negative. So a positive correlation coefficient indicates that above-average values of one variable tend to occur with above-average values of the other (and below-average with below-average). Similarly, a negative correlation coefficient indicates that above-average values of one variable tend to occur with below-average values of the other. On the other hand, the correlation coefficient will be near zero when positive and negative products occur in roughly equal measure and cancel, that is, when there is no linear association between the variables.

Properties of r:

1. -1 <= r <= 1; the closer |r| is to 1, the stronger the linear association.
2. r has no units, and it does not change when the units of x or y change (see the numerical check below).
3. r treats x and y symmetrically: the correlation of x with y equals the correlation of y with x.
4. r measures only linear association, and it is not resistant: a single outlier can change it dramatically.
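Properties 2 and 3 are easy to check numerically. A minimal R sketch using the fake score data above:

    # Fake score data from the table above
    score1 <- c(46, 47, 52, 44, 55, 67, 49, 42, 39, 68)
    score2 <- c(72, 69, 72, 39, 67, 87, 62, 51, 54, 91)

    cor(score1, score2)          # 0.854, as computed by hand
    cor(score1 * 2.54, score2)   # unchanged by a change of units in x (property 2)
    cor(score2, score1)          # unchanged by swapping x and y (property 3)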

Regression Lines

Correlations measure the direction and strength of a linear relationship. If we want to go beyond that, and draw a line to graphically show the relationship more specifically, we need linear regression.

A regression line describes how a response variable y changes as an explanatory variable x changes. We often use a regression line to predict the value of y for a given value of x.

Note: when we did correlations, we did not need to be careful about explanatory and response variables, but for regression we must always identify one variable as the independent (explanatory) variable and the other as the dependent (response) variable.

Review of Lines

Slope. Slope can be computed from any two points on a line.

Definition: slope = rise/run = (y2 - y1)/(x2 - x1)

We get the same number for slope no matter which two points we pick.

Example. (1, 3) and (3, 8) are on a line. What is the slope of the line?

Example. (1, 3) is on a line that has slope 2. If the x-coordinate of another point on the line is 3, what is the y-coordinate?

How do we determine y if we know x? All lines can be described by an equation of the form

    y = (slope)x + (intercept)

written variously as y = mx + b, y = a + bx, or (for a fitted line) ŷ = a + bx. We can determine the equation above if we know either

- slope and one point, or
- two points.

Least Squares

The full name of the regression line is the least-squares regression line of y on x. The regression line is chosen to make the sum of the squared errors in predicting y values from x values according to the line as small as possible. These errors are usually called residuals:

    residual = observed y - predicted y = y - ŷ

We will have more to say about residuals shortly. It is an interesting mathematical problem to determine which line minimizes the sum of squared residuals. It turns out to be a straightforward application of calculus. The amazing thing is that it is actually quite easy to determine the regression line from data. We will describe the regression line by giving one point and the slope. From that we can get the equation.

The point (x̄, ȳ) is always on the regression line. The slope (b) of the regression line is

    b = r (s_y / s_x)

Example. (used car data) r = -0.9707, x̄ = 5, s_x = 3.16, ȳ = 4.56, s_y = 1.83. Find an equation for the regression line.

    b = (-0.9707)(1.83/3.16) = -0.562
    a = ȳ - b x̄ = 4.56 - (-0.562)(5) = 7.37
    predicted price = 7.37 - 0.562(age)

Making Predictions. Often we use regression lines with the equation to make predictions. Use the regression line with the equation price = 7.37 - 0.562(age) to predict the prices of cars that are 4 years old, 10 years old, and 15 years old.

    price(4)  = 7.37 - 0.562(4)  = 5.12 (thousand dollars)
    price(10) = 7.37 - 0.562(10) = 1.75
    price(15) = 7.37 - 0.562(15) = -1.06 (a negative price: a first hint that predicting far outside the data is risky)

To sketch a line: Make two predictions, plot them, and connect them. (You will be more accurate if you make predictions that are farther apart.) Go back and add the regression line to our car data scatter plot.
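The same fit and predictions in R, as a short sketch using the used-car data; lm() reproduces the hand-computed line:

    # Used-car data: age in years, price in $1000s
    age   <- c(2, 6, 3, 4, 10)
    price <- c(6.5, 3.7, 6.1, 4.5, 2.0)

    fit <- lm(price ~ age)        # least-squares line of price on age
    coef(fit)                     # intercept 7.37, slope -0.5625

    # Predictions at 4, 10, and 15 years; the last is an extrapolation
    predict(fit, newdata = data.frame(age = c(4, 10, 15)))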

How well does the regression line fit?

A regression line can be made from any set of data. But some data sets are not very well described by the regression line. We want to develop some tools to help us measure the fit.

The regression line is a mathematical model. It describes the relationship between x and y, but not perfectly. The vertical distances between data points and the regression line are the errors of the model. (Remember: residual = observed - predicted.) The errors are also called residuals because they represent the leftover (or residual) information that the model fails to predict. They help us assess how well the line describes the data.

Example. Find the residuals in the used car example. What do they sum to?

   x      observed y   predicted y
  age       price      7.37 - 0.562(age)   residual = observed - predicted
   2         6.5           6.246                0.254
   6         3.7           3.998               -0.298
   3         6.1           5.684                0.416
   4         4.5           5.122               -0.622
  10         2.0           1.750                0.250

The residuals sum to 0 (up to rounding); this is always true for least-squares residuals.

Plot these residuals on the vertical axis and x values on the horizontal axis. If observations were made in a known order, one can also plot residuals vs. the time order of the observations.

Once we have a plot of residuals, we look for patterns. If the fit is good, we should see an unstructured horizontal band of points centered at zero when we look at residuals plots.

Sketches of residual plots and what they indicate:
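Separately, a quick R check of the used-car residuals and a residual plot, as a minimal sketch:

    age   <- c(2, 6, 3, 4, 10)
    price <- c(6.5, 3.7, 6.1, 4.5, 2.0)
    fit   <- lm(price ~ age)

    resid(fit)        # observed - predicted for each car
    sum(resid(fit))   # essentially 0: least-squares residuals always sum to 0

    # Residual plot: look for an unstructured band around the zero line
    plot(age, resid(fit), ylab = "residual")
    abline(h = 0)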

General Cautions about Correlation and Regression

1. Correlation only measures linear association. So we should always look at a scatter plot first to check that the relationship really is linear.

2. Extrapolation (predicting outside the range of x's in the data) is dangerous.

3. Correlations and regressions are not resistant (they can be greatly affected by outliers). Influential observations: an observation is influential if removing it would markedly change the result (here, the regression line); points that are outliers in the x direction are often influential.

4. Association does not imply causation.

The Question of Causation

The last caution above deserves some special attention. Often we would like to be able to say that a change in one variable causes a change in another variable. For example, we might like to say that the amount of some drug taken causes an amount of weight to be lost. But under what circumstances can we say this with reasonable certainty? When must we use caution?

Some possible explanations for an observed association: (Broken lines show an association. Arrows show cause-and-effect links. We observe x and y, but z might be an unobserved (lurking) variable.)

Causation: x causes y (x → y).
Common response: a lurking variable z causes both x and y (x ← z → y); x and y are associated even though neither causes the other.
Confounding: the effect of x on y is mixed up with, and cannot be separated from, the effect of a lurking variable z on y.

So, when can we say that something causes something else?

1. The best situation is a well-designed, randomized experiment. But this is usually only possible in the lab. When you're working with people, you can't do this. What you can do is try to control for lurking variables (make your groups as similar as possible).

2. When we can't do an experiment, we look for:

- a strong association
- a consistent association
- the alleged cause precedes the effect in time
- the alleged cause is plausible

Inference for Regression

Simple linear regression is a set of procedures for dealing with data that have two quantitative values for each individual in the data set. We will call these x and y. The goal will be to use x to predict y.

Example. A tax consultant studied the current relation between the selling price and assessed valuation of one-family dwellings in a large tax district. He obtained a random sample of recent sales of one-family dwellings and, after plotting the data, found a regression line of (measuring in thousands of dollars):

    (selling price) = 4.98 + 2.40(assessed valuation)

At the same time, a second tax consultant obtained a second random sample of recent sales of one-family dwellings and, after plotting the data, found a regression line of:

    (selling price) = 1.89 + 2.63(assessed valuation)

Both consultants attempted to model the true linear relationship they believed to exist between price and valuation. The regression lines they found were sample estimates of the true (but unknown) population relationship between selling price (y) and assessed valuation (x). But each one came up with a different result. The fact that different samples lead to different regression lines tells us that unless we know how accurate we can expect our regression line estimate to be, it really isn't of much use to us. So we need to learn about inference for regression.

The model for simple linear regression is

    y_i = β_0 + β_1 x_i + ε_i,    ε_i ~ N(0, σ)

That is, the y value (y_i) for a given x value (x_i) is determined by the equation of a line (β_0 + β_1 x_i) plus an error term (ε_i). One way of thinking about the error term is that the line is predicting, for each value of x, the mean value of y among all individuals with that x value. There is still variation among different individuals with the same value for x. The error term measures how far y_i is from the predicted mean according to the line. These errors are assumed to be normally distributed with a standard deviation σ that does not depend on x (or on anything else).

To summarize, the assumptions for the linear regression model are

1. The mean of y is a linear function of x: μ_y = β_0 + β_1 x.
2. For each x, the responses vary about this mean with independent, normally distributed errors having the same standard deviation σ for every x.
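To see this sampling variability concretely, here is a hedged R simulation; the true parameter values below are invented purely for illustration (they are not the consultants' numbers):

    set.seed(1)
    beta0 <- 5; beta1 <- 2.5; sigma <- 15             # hypothetical true parameters
    for (i in 1:2) {
      x <- runif(30, 50, 150)                         # assessed valuations ($1000s)
      y <- beta0 + beta1 * x + rnorm(30, sd = sigma)  # y = beta0 + beta1*x + error
      print(coef(lm(y ~ x)))                          # each sample: different estimates
    }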

Example. We want to predict SAT scores from ACT scores. We sample scores from a number of students who have taken both tests.

[Scatter plot omitted: SAT scores (vertical axis, 350 to 1400) against ACT scores (horizontal axis, 5.0 to 30.0), showing a positive linear trend.]

The simple regression model has 3 parameters: β_0, β_1, and σ. Given a sample from the population, we estimate these parameters with b_0, b_1, and s.

b_0 and b_1 come from the regression line:

    b_1 = slope = r (s_y / s_x)
    b_0 = intercept. We can solve for b_0 using the fact that the point (x̄, ȳ) is on the regression line.

s estimates σ:

    s = \sqrt{ \frac{\sum (\text{residual})^2}{n-2} },  where residual = observed - predicted = y_i - ŷ_i.

In practice, the values of b_0, b_1, s, and r are calculated by software. You should be able to identify each in the output below (r^2 is given rather than r):

The regression equation is
SAT = 253 + 31.2 ACT

Predictor     Coef   SE Coef      T      P
Constant    253.19     62.67   4.04  0.000
ACT         31.206     2.895  10.78  0.000

S = 104.8   R-Sq = 66.7%   R-Sq(adj) = 66.1%

Since the model is based on normal distributions and we don't know σ, inference will be based on t distributions. Some things to keep in mind:

1. Regression is going to be sensitive to outliers. Outliers with especially large or small values of the independent variable are especially influential. Minitab will even help you try to identify potential problems:

Unusual Observations
Obs   ACT     SAT     Fit  SE Fit  Residual  St Resid
 15  10.0   500.0   565.2    35.0     -65.2   -0.66 X
 21  32.0  1440.0  1251.8    34.2     188.2    1.90 X
 25   7.0   490.0   471.6    43.1      18.4    0.19 X
 47  21.0   420.0   908.5    13.5    -488.5   -4.70 R

R denotes an observation with a large standardized residual.
X denotes an observation whose X value gives it large influence.
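Since the raw SAT/ACT sample is not reproduced in these notes, here is a sketch of how b_0, b_1, and s arise from the formulas, reusing the used-car data from earlier:

    age   <- c(2, 6, 3, 4, 10)
    price <- c(6.5, 3.7, 6.1, 4.5, 2.0)
    fit   <- lm(price ~ age)

    b1 <- cor(age, price) * sd(price) / sd(age)  # slope: r * s_y / s_x
    b0 <- mean(price) - b1 * mean(age)           # the line passes through (x-bar, y-bar)
    s  <- sqrt(sum(resid(fit)^2) / (length(age) - 2))
    c(b0 = b0, b1 = b1, s = s)
    summary(fit)   # Estimate column matches b0 and b1; "Residual standard error" is s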

2. We can check if the model is reasonable by looking at our residuals:

(a) Histograms and normal quantile plots indicate overall normality. We are looking for a roughly bell-shaped histogram or a roughly linear normal quantile plot.

(b) Plots of residuals vs. x, or residuals vs. order, or residuals vs. fit (note: fit = ŷ, the predicted value) indicate if the standard deviation appears to remain constant throughout. We are looking to NOT see any clear pattern in these plots. A pattern would indicate something other than randomness is influencing the residuals.

3. We can do inference for β_0, β_1, etc. using the t distributions; we just need to know the corresponding SE and degrees of freedom.

    parameter                     SE                                                                              df

    β_0                           SE_{b_0} = s \sqrt{ \frac{1}{n} + \frac{\bar{x}^2}{\sum (x_i - \bar{x})^2} }    n - 2

    β_1                           SE_{b_1} = \frac{s}{\sqrt{\sum (x_i - \bar{x})^2}}                              n - 2

    μ̂ (prediction of mean)        SE_{\hat{μ}} = s \sqrt{ \frac{1}{n} + \frac{(x^* - \bar{x})^2}{\sum (x_i - \bar{x})^2} }    n - 2

    ŷ (individual prediction)     SE_{ŷ} = s \sqrt{ 1 + \frac{1}{n} + \frac{(x^* - \bar{x})^2}{\sum (x_i - \bar{x})^2} }      n - 2

(Here x^* is the x value at which the prediction is being made.)

We won't ever compute these SEs by hand, but notice that they are made up of pieces that look familiar (square roots, n in the denominator, squares of differences from the mean, all the usual stuff). Furthermore, just by looking at the formulas, we can learn something about the behavior of the confidence intervals and hypothesis tests involved.

SE_{b_0} and SE_{b_1} are easy to identify in the computer output (see bottom of previous page). The values under the headings T and P are the t statistic and P-value for the two-sided hypothesis tests with the null hypotheses H_0: β_0 = 0 and H_0: β_1 = 0, respectively.

(a) Inference for β_0. (H_0: β_0 = 0) This is usually not the most interesting thing to know. Remember the intercept tells the (mean) y value associated with an x value of 0. For many situations, this is not even a meaningful value.

(b) Inference for β_1. (H_0: β_1 = 0) This is much more interesting for two reasons. First, the slope is often a very interesting parameter to know because it tells how much y changes, on average, for each one-unit increase in x. Second, this is a measure of how useful the model is for making predictions: if β_1 = 0, then x is of no help in predicting y.
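As a check that the T column is just b_1 / SE_{b_1} compared against a t distribution with n - 2 degrees of freedom, a minimal R sketch on the used-car data:

    age   <- c(2, 6, 3, 4, 10)
    price <- c(6.5, 3.7, 6.1, 4.5, 2.0)
    fit   <- lm(price ~ age)

    s     <- summary(fit)$sigma                  # the estimate of sigma
    SE_b1 <- s / sqrt(sum((age - mean(age))^2))  # SE of the slope
    tstat <- coef(fit)["age"] / SE_b1            # test statistic for H0: beta1 = 0
    2 * pt(-abs(tstat), df = length(age) - 2)    # two-sided P-value
    summary(fit)$coefficients                    # same SE, t, and P-value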

Confidence Intervals vs. Prediction Intervals

Of course, we can also give a confidence interval for β_1.

95% CI for β_1: b_1 ± t* SE_{b_1} = 31.206 ± (2.00)(2.895) ≈ (25.4, 37.0), using t* for n - 2 = 58 degrees of freedom.

Predictor     Coef   SE Coef      T      P
Constant    253.19     62.67   4.04  0.000
ACT         31.206     2.895  10.78  0.000

S = 104.8   R-Sq = 66.7%   R-Sq(adj) = 66.1%

Recall that our goal was to make predictions of y from x. As you would probably expect, such predictions will also be described using confidence intervals. Actually there are two kinds of predictions:

1. Confidence intervals for the mean response (the average y among all individuals with a given x value).
2. Prediction intervals are confidence intervals for a single future observation of y (the y value of one new individual with a given x value). These are wider, since individuals vary about the mean.

Notice that for predictions (confidence intervals and prediction intervals), the standard errors depend on x* (the x value for which you want a prediction made), so it is not possible for Minitab to tell you what the SE is until you decide to make a prediction. We will use output like that below when we want confidence intervals for predictions. You should know how to interpret them and remember that they are simply examples of t confidence intervals with a messy standard error.

Predicted Values for New Observations
New Obs     Fit  SE Fit        95.0% CI            95.0% PI
      1  1033.3    17.6  ( 998.2, 1068.5)   ( 820.6, 1246.1)

Values of Predictors for New Observations
New Obs   ACT
      1  25.0
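In R, both kinds of intervals come from predict(). The raw data are not in the notes, so this sketch simulates a stand-in sample with roughly the reported fit (simulated values, for illustration only):

    set.seed(2)
    ACT <- round(runif(60, 5, 32))                 # simulated stand-in sample
    SAT <- 253 + 31.2 * ACT + rnorm(60, sd = 105)
    fit <- lm(SAT ~ ACT)

    new <- data.frame(ACT = 25)
    predict(fit, new, interval = "confidence")   # CI for the mean SAT when ACT = 25
    predict(fit, new, interval = "prediction")   # PI for one student's SAT: much wider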

Analysis of Variance for Regression

Minitab also produces an ANOVA table for regression. It is arranged like the ANOVA tables from the ANOVA tests.

Analysis of Variance
Source           DF       SS       MS       F      P
Regression        1  1276586  1276586  116.16  0.000
Residual Error   58   637387    10989
Total            59  1913973

The variation is being split up into two pieces: one explained by the line (labeled "Regression"), and one not (labeled "Residual Error"). Let's see if we can figure out what is going on here. Let's look at the variation in the values of y_i from their mean value ȳ. Notice that

    y_i - ȳ = (y_i - ŷ_i) + (ŷ_i - ȳ)

SS stands for sum of squares. The values under SS are pretty much what we would expect (remember that ŷ_i denotes the prediction corresponding to x_i):

    SS(Regression) = Σ (ŷ_i - ȳ)^2
    SS(Residual Error) = Σ (y_i - ŷ_i)^2
    SS(Total) = Σ (y_i - ȳ)^2

The degrees of freedom are 1 for the regression line and n - 2 (the rest) for the residuals. MS stands for mean square, and is computed from SS: MS = SS/DF. Finally, F = MSR/MSE.

If F is large: the line explains much more variation than it leaves unexplained, which is evidence against H_0.
If F is small: the line explains little of the variation, which is consistent with H_0.

The null and alternative hypotheses for this test are

    H_0: β_1 = 0. (The slope of the regression line is 0.)
    H_a: β_1 ≠ 0. (The slope of the regression line is not 0.)

We have already seen a test for this (the t test for slope). It turns out that the P-values for the two tests are always the same. In fact, t^2 = F. You can verify this in our example by comparing the Minitab output.

The Interpretation of R^2

Finally, we can see why Minitab reports R^2 rather than R, and why it is reported as a percent. R^2 is the square of the correlation coefficient (r or R). But it is also true that R^2 = SSR/SST, so R^2 gives the percentage of the variation (as measured by the sums of squares) that is explained by the regression line.
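Both identities are easy to verify numerically from the output above:

    # t^2 = F for the slope test
    tstat <- 10.78              # t for the slope, from the regression output
    tstat^2                     # 116.2, matching F = 116.16 up to rounding

    # R^2 = SSR / SST
    SSR <- 1276586; SST <- 1913973
    SSR / SST                   # 0.667, matching R-Sq = 66.7%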

Example: Skin Thickness and Body Density

There are many reasons why one would like to know the fat content of a human body. The most accurate way to estimate this is by determining the body density (weight per unit volume). Since fat is less dense than other body tissue, a lower density indicates a higher relative fat content. Body density is difficult to measure directly (the standard method requires weighing the subject underwater), so scientists have looked for other measurements that can accurately predict body density. One such measurement we will call skinfold thickness; it is actually the logarithm of the sum of four skinfold thicknesses measured at different points on the body.

To test how well skinfold thickness predicts body density, 92 subjects were measured for skinfold thickness and body density.

[Plots omitted: a scatter plot of density (1.03 to 1.09) against skthick (1.0 to 2.0), plus four regression diagnostics: Residuals vs Fitted, Normal Q-Q plot, Scale-Location plot, and Cook's distance plot, with observations 9, 23, 42, 61, and 70 flagged.]

Residuals:
      Min        1Q    Median        3Q       Max
-0.018967 -0.005092  0.000498  0.004949  0.023679

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.16300    0.00656   177.3   <2e-16 ***
skthick     -0.06312    0.00414   -15.2   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.00854 on 90 degrees of freedom
Multiple R-Squared: 0.72,  Adjusted R-squared: 0.717
F-statistic: 232 on 1 and 90 DF,  p-value: <2e-16

Analysis of Variance Table

Response: density
           Df   Sum Sq  Mean Sq  F value  Pr(>F)
skthick     1  0.01691  0.01691      232  <2e-16 ***
Residuals  90  0.00656  0.00007
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
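The output above is from R. Here is a hedged sketch of commands that would reproduce this kind of analysis; since the 92 raw measurements are not included in the notes, the data below are simulated with roughly the reported relationship:

    set.seed(3)
    skthick <- runif(92, 1.0, 2.0)                               # simulated stand-in data
    density <- 1.163 - 0.0631 * skthick + rnorm(92, sd = 0.0085)
    fit <- lm(density ~ skthick)

    summary(fit)            # Coefficients table, residual standard error, R-squared, F
    anova(fit)              # the Analysis of Variance table
    par(mfrow = c(2, 2))
    plot(fit, which = 1:4)  # Residuals vs Fitted, Normal Q-Q, Scale-Location, Cook's distance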