Math 143 Correlation and Regression

Review: We are looking at methods to investigate two or more variables at once.

bivariate: ______
multivariate: ______

The statistical procedures used depend upon the kind of variables (categorical or quantitative):

Chi-square deals with ______
Correlation/Regression deals with ______
ANOVA deals with ______

Actually, each of these methods can be extended to deal with multivariate situations. Sometimes additional descriptions are used to distinguish the simpler bivariate versions from their related multivariate versions:

Bivariate: Chi-square for two-way tables; Simple (Linear) Regression; One-way ANOVA
Multivariate: Chi-square for three-way tables, etc.; Multiple (Linear) Regression; Two-way ANOVA

We will focus our attention on the bivariate cases, but will talk a little about the multivariate cases. We use regression and/or correlation when we have two quantitative variables and we want to see if there is an association between these two variables. Two variables (reflecting two measurements for the same individual) are associated if ______.

Here's the plan:
1. Start with graphical displays.
2. Move on to numerical summaries.
3. Then look for patterns and deviations from those patterns, and use a mathematical model to describe regular patterns.
4. Finally, use statistical inference to draw conclusions about the relationship between the two quantitative variables in the population from their relationship in a sample.
Scatter Plots

Scatter plots are a graphical display of the relationship between two quantitative variables:
- one dot per individual
- explanatory variable (x) on the horizontal axis, response variable (y) on the vertical axis

Example. Five used cars were selected randomly. Their ages and prices were recorded as follows:

    Observation   Age (yrs)   Price ($1000)

Plot the used car data on a scatter plot. From the plot, how would you describe the relationship between age and price of a used car?

As usual, when looking at scatter plots, we are looking to see the overall pattern and any striking deviations from that pattern (e.g., outliers). The pattern in a scatter plot can often be summarized in terms of:
- Form
- Strength
- Direction

Positive association: ______
Negative association: ______

If you have more than one group, points may be plotted in different colors, or with different letters, to see if group affects the relationship between the two continuous variables.
The Correlation Coefficient: r

Our goal is to come up with a number that measures ______.

The correlation coefficient is defined as

    r = 1/(n − 1) · Σ [ (xᵢ − x̄)/s_x ] · [ (yᵢ − ȳ)/s_y ]

That is, it is the sum of products of z-scores, scaled by the size of the data set (n − 1). Let's see how to use this to compute the correlation coefficient from data. Then we will figure out how this number describes the strength and direction of a linear association between two variables.

Golf Scores

[Scatter plot: round 2 scores vs. round 1 scores for 12 players.]

The standard deviation was 7.83 for round 1 and 7.84 for round 2. We can use this to calculate the z-scores for each value and then the correlation coefficient:

    round1  round2  z1  z2  z1*z2
    ------  ------  --  --  -----
                          sum = 7.56

    mean(round1) = mean(round2) = 88
    sd(round1) = 7.83, sd(round2) = 7.84

The correlation coefficient is r = (1/11)(7.56) ≈ 0.687, since there are 12 pairs of data values and 11 = 12 − 1.
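The z-score formula for r above can be computed directly. Below is a minimal sketch in Python; the golf scores themselves do not appear in the notes, so the two rounds of data here are made-up values used only to exercise the function.

```python
from statistics import mean, stdev

def correlation(x, y):
    """Correlation coefficient: sum of products of z-scores, divided by n - 1."""
    n = len(x)
    xbar, ybar = mean(x), mean(y)
    sx, sy = stdev(x), stdev(y)  # sample standard deviations (divisor n - 1)
    return sum(((xi - xbar) / sx) * ((yi - ybar) / sy)
               for xi, yi in zip(x, y)) / (n - 1)

# Hypothetical golf scores (not the data from the notes):
round1 = [89, 90, 87, 95, 86, 81, 102, 105, 83, 88, 91, 79]
round2 = [94, 85, 89, 89, 81, 76, 107, 89, 87, 91, 88, 80]
print(round(correlation(round1, round2), 3))
```

Because each factor is a z-score, swapping x and y, or changing the units of either variable, leaves r unchanged.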
Here is the Minitab regression output for this example:

    The regression equation is
    round2 = ______ + ______ round1

    S = ______   R-Sq = 47.2%   R-Sq(adj) = 41.9%

    Analysis of Variance
    Source          DF   SS   MS   F   P
    Regression
    Error
    Total

Notice that r is not given directly, but r² is given (it's called R-Sq). There is also a lot of other stuff in the output. We will learn what those other numbers mean a little later.

Another Example

[Scatter plot: score2 vs. score1 (fake data) for 10 individuals.]

The standard deviation of score1 is 9.89 and of score2 is ______. We can use this to calculate the z-scores and then the correlation coefficient as before:

    score1  score2  z1  z2  z1*z2
    ------  ------  --  --  -----
                          sum = 7.69

The correlation coefficient is r = (1/9)(7.69) ≈ 0.854, since there are 10 pairs of data values and 9 = 10 − 1.
Here is the Minitab regression output for this example:

    The regression equation is
    score2 = ______ + ______ score1

    S = ______   R-Sq = 73.0%   R-Sq(adj) = 69.6%

    Analysis of Variance
    Source          DF   SS   MS   F   P
    Regression
    Error
    Total

So how does this correlation coefficient work? Let's think first about when it will be positive and when it will be negative. The correlation coefficient will be positive if we have lots of positive products and few negative products in our sum. Positive products occur when both z-scores are positive or both are negative. So a positive correlation coefficient indicates ______.

Similarly, a negative correlation coefficient indicates ______.

On the other hand, the correlation coefficient will be near zero when ______.

Properties of r: ______
Regression Lines

Correlation measures the direction and strength of a linear relationship. If we want to go beyond that, and to draw a line to graphically show the relationship more specifically, we need linear regression. A regression line describes how a response variable y changes as an explanatory variable x changes. We often use a regression line to predict the value of y for a given value of x.

Note: when we did correlation, we did not need to be careful about explanatory and response variables, but for regression we must always identify one variable as the independent (explanatory) variable and the other as the dependent (response) variable.

Review of Lines: Slope

Slope can be computed from any two points on a line. Definition: slope = ______. We get the same number for slope no matter which two points we pick.

Example. (1, 3) and (3, 8) are on a line. What is the slope of the line?

Example. (1, 3) is on a line that has slope 2. If the x-coordinate of another point on the line is 3, what is the y-coordinate? How do we determine y if we know x?

All lines can be described by an equation of the form

    y = (slope)x + (intercept)
    y = mx + b
    y = a + bx
    ŷ = a + bx

and we can determine the equation above if we know either
- slope and one point, or
- two points.
Least Squares

The full name of the regression line is the least-squares regression line of y on x. The regression line is chosen to make the sum of the squared errors in predicting y values from x values according to the line as small as possible. These errors are usually called residuals:

    residual = observed y − predicted y = y − ŷ

We will have more to say about residuals shortly. It is an interesting mathematical problem to determine which line minimizes the sum of squared residuals; it turns out to be a straightforward application of calculus. The amazing thing is that it is actually quite easy to determine the regression line from data. We will describe the regression line by giving one point and the slope. From that we can get the equation.

The point ______ is always on the regression line. The slope (b) of the regression line is ______.

Example (used car data). r = .9707, x̄ = 5, s_x = 3.16, ȳ = 4.56, s_y = ______. Find an equation for the regression line.

Making Predictions. Often we use regression lines to make predictions. Use the regression line with the equation price = ______ + ______ (age) to predict the prices of cars that are 4 years old, 10 years old, and 15 years old.

    price(4) = ______    price(10) = ______    price(15) = ______

To sketch a line: make two predictions, plot them, and connect them. (You will be more accurate if you make predictions that are farther apart.) Go back and add the regression line to our car data scatter plot.
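The recipe above — slope b₁ = r·s_y/s_x, with the line forced through (x̄, ȳ) — can be sketched in a few lines of Python. The notes leave s_y blank, so the value 2.875 below is an assumed stand-in, and the sign of r is taken as negative here since price falls with age; only the formula itself comes from the notes.

```python
def regression_line(r, xbar, sx, ybar, sy):
    """Least-squares line from summary statistics.

    Slope b1 = r * sy / sx; the line passes through (xbar, ybar),
    so the intercept is b0 = ybar - b1 * xbar.
    """
    b1 = r * sy / sx
    b0 = ybar - b1 * xbar
    return b0, b1

# Used-car summary stats from the notes; s_y = 2.875 is assumed (blank in
# the notes) and r is given a negative sign (price decreases with age).
b0, b1 = regression_line(-0.9707, 5, 3.16, 4.56, 2.875)
for age in (4, 10, 15):
    print(age, round(b0 + b1 * age, 2))
```

With these assumed numbers the 15-year prediction comes out negative, a concrete illustration of the extrapolation caution discussed later in these notes.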
How well does the regression line fit?

A regression line can be made from any set of data, but some data sets are not very well described by the regression line. We want to develop some tools to help us measure the fit. The regression line is a mathematical model: it describes the relationship between x and y, but not perfectly. The vertical distances between data points and the regression line are the errors of the model. (Remember: residual = observed − predicted.) The errors are also called residuals because they represent the leftover (or residual) information that the model fails to predict. They help us assess how well the line describes the data.

Example. Find the residuals in the used car example. What do they sum to?

    x (age)   observed y (price)   predicted y   residual = observed − predicted

Plot these residuals on the vertical axis and the x values on the horizontal axis. If observations were made in a known order, one can also plot residuals vs. the time order of the observations. Once we have a plot of residuals, we look for patterns. If the fit is good, we should see ______ when we look at residual plots.

Sketches of residual plots and what they indicate: ______
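A useful fact to verify numerically: residuals from a least-squares line always sum to (essentially) zero. The sketch below fits a line by least squares and checks this; the five (age, price) pairs are made-up values, since the notes' data table did not survive transcription.

```python
from statistics import mean

def least_squares(x, y):
    """Fit y = b0 + b1*x by minimizing the sum of squared residuals."""
    xbar, ybar = mean(x), mean(y)
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    b0 = ybar - b1 * xbar
    return b0, b1

x = [1, 3, 5, 7, 9]            # hypothetical ages (yrs)
y = [8.0, 6.5, 5.0, 2.5, 1.0]  # hypothetical prices ($1000)
b0, b1 = least_squares(x, y)
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
print(round(sum(residuals), 10))  # sums to 0 up to floating-point rounding
```

This is why a residual plot is useful: the residuals carry no overall trend by construction, so any pattern left in them signals a lack of fit.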
General Cautions about Correlation and Regression

1. Correlation only measures linear association. So we should always ______.
2. Extrapolation (predicting outside the range of x's in the data) is dangerous.
3. Correlations and regressions are not resistant (they can be greatly affected by outliers). Influential observations: ______
4. Association does not imply causation.

The Question of Causation

The last caution above deserves some special attention. Often we would like to be able to say that a change in one variable causes a change in another variable. For example, we might like to say that the amount of some drug taken causes an amount of weight to be lost. But under what circumstances can we say this with reasonable certainty? When must we use caution?

Some possible explanations for an observed association (broken lines show an association; arrows show cause-and-effect links; we observe x and y, but z might be an unobserved, lurking, variable):

Causation: ______
Common response: ______
Confounding: ______

So, when can we say that something causes something else?

1. The best situation is ______. But this is usually only possible in the lab. When you're working with people, you can't do this. What you can do is try to control for lurking variables (make your groups as similar as possible).
2. When we can't do an experiment, we look for:
   - a strong association
   - a consistent association
   - the alleged cause precedes the effect in time
   - the alleged cause is plausible
Inference for Regression

Simple linear regression is a set of procedures for dealing with data that has two quantitative values for each individual in the data set. We will call these x and y. The goal will be to use x to predict y.

Example. A tax consultant studied the current relation between the selling price and assessed valuation of one-family dwellings in a large tax district. He obtained a random sample of recent sales of one-family dwellings and, after plotting the data, found a regression line of (measuring in thousands of dollars):

    (selling price) = ______ + ______ (assessed valuation)

At the same time, a second tax consultant obtained a second random sample of recent sales of one-family dwellings and, after plotting the data, found a regression line of:

    (selling price) = ______ + ______ (assessed valuation)

Both consultants attempted to model the true linear relationship they believed to exist between price and valuation. The regression lines they found were sample estimates of the true (but unknown) population relationship between selling price (y) and assessed valuation (x). But each one came up with a different result. The fact that different samples lead to different regression lines tells us that unless we know how accurate we can expect our regression line estimate to be, it really isn't of much use to us. So we need to learn about inference for regression.

The model for simple linear regression is

    yᵢ = β₀ + β₁xᵢ + εᵢ,    εᵢ ~ N(0, σ)

That is, the y value (yᵢ) for a given x value (xᵢ) is determined by the equation of a line (β₀ + β₁xᵢ) plus an error term (εᵢ). One way of thinking about the error term is that the line is predicting, for each value of x, the ______. There is still variation among different individuals with the same value for x. The error term measures how far yᵢ is from the predicted mean according to the line.
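The model can be made concrete by simulating from it: pick values for β₀, β₁, and σ (the ones below are made up for illustration), and generate several y's at each x. The three y's sharing an x differ only through the error term ε, exactly as the model describes.

```python
import random

# Simulate y_i = beta0 + beta1*x_i + eps_i with eps_i ~ N(0, sigma).
# Parameter values here are arbitrary choices for illustration.
beta0, beta1, sigma = 10.0, 2.0, 3.0

random.seed(1)  # reproducible draws
data = [(x, beta0 + beta1 * x + random.gauss(0, sigma))
        for x in range(1, 16)   # 15 distinct x values
        for _ in range(3)]      # three individuals at each x
```

Fitting a regression line to a dataset generated this way recovers estimates b₀, b₁, and s that are close to, but not equal to, the true β₀, β₁, and σ — which is precisely why inference (standard errors and t distributions) is needed.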
These errors are assumed to be normally distributed with a standard deviation σ that does not depend on x (or on anything else). To summarize, the assumptions for the linear regression model are:

1. ______
2. ______
Example. We want to predict SAT scores from ACT scores. We sample scores from a number of students who have taken both tests.

[Scatter plot: SAT score vs. ACT score for the sampled students.]

The simple regression model has 3 parameters: β₀, β₁, and σ. Given a sample from the population, we estimate these parameters with b₀, b₁, and s. b₀ and b₁ come from the regression line:

    b₁ = slope = r · (s_y / s_x)

    b₀ = intercept. We can solve for b₀ using the fact that the point (x̄, ȳ) is on the regression line.

    s = sqrt( Σ (residual)² / (n − 2) ),  where residual = observed − predicted = yᵢ − ŷᵢ.

In practice, the values of b₀, b₁, s, and r are calculated by software. You should be able to identify each in the output below (r² is given rather than r):

    The regression equation is
    SAT = ______ + ______ ACT

    Predictor   Coef   SE Coef   T   P
    Constant
    ACT

    S = ______   R-Sq = 66.7%   R-Sq(adj) = 66.1%

Since the model is based on normal distributions and we don't know σ:

1. Regression is going to be sensitive to outliers. Outliers with especially large or small values of the independent variable are especially influential. Minitab will even help you try to identify potential problems:

    Unusual Observations
    Obs   ACT   SAT   Fit   SE Fit   Residual   St Resid

    R denotes an observation with a large standardized residual.
    X denotes an observation whose X value gives it large influence.
2. We can check whether the model is reasonable by looking at our residuals:
   (a) Histograms and normal quantile plots indicate overall normality. We are looking for a roughly bell-shaped histogram or a roughly linear normal quantile plot.
   (b) Plots of residuals vs. x, or residuals vs. order, or residuals vs. fit (note: fit = ______) indicate whether the standard deviation appears to remain constant throughout. We are looking to NOT see any clear pattern in these plots. A pattern would indicate something other than randomness is influencing the residuals.

3. We can do inference for β₀, β₁, etc. using the t distributions; we just need to know the corresponding SE and degrees of freedom. (Here x* denotes the x value at which a prediction is made.)

    parameter                      SE                                                    df
    β₀                             SE_b0 = s · sqrt( 1/n + x̄² / Σ(xᵢ − x̄)² )            n − 2
    β₁                             SE_b1 = s / sqrt( Σ(xᵢ − x̄)² )                        n − 2
    μ̂ (prediction of mean)        SE_μ̂ = s · sqrt( 1/n + (x* − x̄)² / Σ(xᵢ − x̄)² )      n − 2
    ŷ (individual prediction)     SE_ŷ = s · sqrt( 1 + 1/n + (x* − x̄)² / Σ(xᵢ − x̄)² )  n − 2

We won't ever compute these SEs by hand, but notice that they are made up of pieces that look familiar (square roots, n in the denominator, squares of differences from the mean — all the usual stuff). Furthermore, just by looking at the formulas, we can learn something about the behavior of the confidence intervals and hypothesis tests involved. SE_b0 and SE_b1 are easy to identify in the computer output (see the bottom of the previous page). The values under the headings T and P are the t statistic and P-value for the two-sided hypothesis tests with the null hypotheses H₀: β₀ = 0 and H₀: β₁ = 0, respectively.

(a) Inference for β₀ (H₀: β₀ = 0). This is usually not the most interesting thing to know. Remember the intercept tells the (mean) y value associated with an x value of ______. For many situations, this is not even a meaningful value.

(b) Inference for β₁ (H₀: β₁ = 0). This is much more interesting, for two reasons. First, the slope is often a very interesting parameter to know because ______. Second, this is a measure of how useful the model is for making predictions.
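The slope test can be sketched end to end: fit the line, compute the residual standard error s, then SE_b1 = s/sqrt(Σ(xᵢ − x̄)²), then t = b₁/SE_b1. The four data points below are made up purely to exercise the computation.

```python
from math import sqrt
from statistics import mean

def slope_inference(x, y):
    """Return b1, its standard error, and the t statistic for H0: beta1 = 0."""
    n, xbar, ybar = len(x), mean(x), mean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    s = sqrt(sse / (n - 2))      # residual standard error, df = n - 2
    se_b1 = s / sqrt(sxx)        # SE of the slope
    return b1, se_b1, b1 / se_b1

# Hypothetical data (not from the notes):
b1, se_b1, t = slope_inference([1, 2, 3, 4], [1, 2, 2, 4])
print(b1, se_b1, t)
```

The resulting t is compared to a t distribution with n − 2 degrees of freedom, matching the T and P columns of the Minitab output.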
Confidence Intervals vs. Prediction Intervals

Of course, we can also give a confidence interval for β₁.

95% CI for β₁: ______

    Predictor   Coef   SE Coef   T   P
    Constant
    ACT

    S = ______   R-Sq = 66.7%   R-Sq(adj) = 66.1%

Recall that our goal was to make predictions of y from x. As you would probably expect, such predictions will also be described using confidence intervals. Actually, there are two kinds of predictions:

1. Confidence intervals for the mean response ______
2. Prediction intervals are confidence intervals for ______

Notice that for predictions (confidence intervals and prediction intervals), the standard errors depend on x (the x value for which you want a prediction made), so it is not possible for Minitab to tell you what the SE is until you decide to make a prediction. We will use output like that below when we want confidence intervals for predictions. You should know how to interpret them and remember that they are simply examples of t confidence intervals with a messy standard error.

    Predicted Values for New Observations
    New Obs   Fit   SE Fit   95.0% CI           95.0% PI
                             ( 998.2, ______ )  ( 820.6, ______ )

    Values of Predictors for New Observations
    New Obs   ACT
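The two standard-error formulas from the table on the previous page make the CI/PI difference concrete: the PI's SE has an extra "1 +" under the square root (individual scatter on top of uncertainty about the mean), so a prediction interval is always wider, and both intervals widen as x* moves away from x̄. A minimal sketch, with x* and s supplied by the caller:

```python
from math import sqrt
from statistics import mean

def prediction_ses(x, s, xstar):
    """SE for estimating the mean response (CI) and for predicting an
    individual response (PI) at x = xstar; s is the residual std. error."""
    n, xbar = len(x), mean(x)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    se_mean = s * sqrt(1 / n + (xstar - xbar) ** 2 / sxx)        # for the CI
    se_indiv = s * sqrt(1 + 1 / n + (xstar - xbar) ** 2 / sxx)   # for the PI
    return se_mean, se_indiv

# Hypothetical x data and residual standard error:
x = [1, 2, 3, 4, 5]
print(prediction_ses(x, 1.0, 3.0))   # at the mean of x: narrowest intervals
print(prediction_ses(x, 1.0, 10.0))  # far from the mean: both SEs grow
```

Multiplying each SE by t* (df = n − 2) around the fitted value ŷ gives the CI and PI, matching the Fit / 95.0% CI / 95.0% PI columns of the Minitab output.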
Analysis of Variance for Regression

Minitab also produces an ANOVA table for regression. It is arranged like the ANOVA tables from the ANOVA tests.

    Analysis of Variance
    Source           DF   SS   MS   F   P
    Regression
    Residual Error
    Total

The variation is being split up into two pieces: one explained by the line (labeled "Regression"), and one not (labeled "Residual Error"). Let's see if we can figure out what is going on here. Let's look at the variation in the values of yᵢ from their mean value ȳ. Notice that

    yᵢ − ȳ = (yᵢ − ŷᵢ) + (ŷᵢ − ȳ)

SS stands for ______. The values under SS are pretty much what we would expect (remember that ŷᵢ denotes the prediction corresponding to xᵢ):

    SS(Regression)     = Σ (ŷᵢ − ȳ)²
    SS(Residual Error) = Σ (yᵢ − ŷᵢ)²
    SS(Total)          = Σ (yᵢ − ȳ)²

The degrees of freedom are 1 for the regression line and n − 2 (the rest) for the residuals. MS stands for ______, and is computed from SS: MS = SS/DF. Finally, F = MSR/MSE.

If F is large: ______
If F is small: ______

The null and alternative hypotheses for this test are

    H₀: β₁ = 0   (the slope of the regression line is 0)
    Hₐ: β₁ ≠ 0   (the slope of the regression line is not 0)

We have already seen a test for this (the t test for slope). It turns out that the P-values for the two tests are always the same. In fact, t² = F. You can verify this in our example by comparing the Minitab output.

The Interpretation of R²

Finally, we can see why Minitab reports R² rather than R, and why it is reported as a percent. R² is the square of the correlation coefficient (r or R). But it is also true that R² = SSR/SST, so R² gives the percentage of the variation (as measured by the sums of squares) that is explained by the regression line.
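The decomposition above can be checked numerically: SST splits exactly into SSR + SSE, F = (SSR/1)/(SSE/(n − 2)), and R² = SSR/SST. The sketch below uses a small made-up dataset to verify these identities.

```python
from statistics import mean

def anova_table(x, y):
    """Return SSR, SSE, SST, the F statistic, and R^2 for a simple regression."""
    n, xbar, ybar = len(x), mean(x), mean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    yhat = [b0 + b1 * xi for xi in x]
    ssr = sum((yh - ybar) ** 2 for yh in yhat)           # explained by the line
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat)) # residual error
    sst = sum((yi - ybar) ** 2 for yi in y)              # total variation
    f = (ssr / 1) / (sse / (n - 2))                      # MSR / MSE
    return ssr, sse, sst, f, ssr / sst                   # last entry is R^2

# Hypothetical data (not from the notes):
ssr, sse, sst, f, r2 = anova_table([1, 2, 3, 4], [1, 2, 2, 4])
print(ssr + sse, sst)  # the two totals agree
```

Running the slope t test on the same data and squaring the t statistic reproduces this F, illustrating the t² = F relationship stated above.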
Example: Skin Thickness and Body Density

There are many reasons why one would like to know the fat content of a human body. The most accurate way to estimate this is by determining the body density (weight per unit volume). Since fat is less dense than other body tissue, a lower density indicates a higher relative fat content. Body density is difficult to measure directly (the standard method requires weighing the subject underwater), so scientists have looked for other measurements that can accurately predict body density. One such measurement we will call skinfold thickness; it is actually the logarithm of the sum of four skinfold thicknesses measured at different points on the body. To test how well skinfold thickness predicts body density, 92 subjects were measured for skinfold thickness and body density. A scatter plot appears below.

[Scatter plot: density vs. skthick, with diagnostic plots: residuals vs. fitted values, normal Q-Q plot, scale-location plot, and Cook's distance plot.]

    Residuals:
        Min      1Q  Median      3Q     Max

    Coefficients:
                 Estimate  Std. Error  t value  Pr(>|t|)
    (Intercept)                                  <2e-16 ***
    skthick                                      <2e-16 ***
    ---
    Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05

    Residual standard error: ______ on 90 degrees of freedom
    Multiple R-Squared: 0.72,  Adjusted R-squared: ______
    F-statistic: 232 on 1 and 90 DF,  p-value: <2e-16

    Analysis of Variance Table
    Response: density
               Df  Sum Sq  Mean Sq  F value   Pr(>F)
    skthick                                   <2e-16 ***
    Residuals
    ---
    Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05
SPSS Explore procedure One useful function in SPSS is the Explore procedure, which will produce histograms, boxplots, stem-and-leaf plots and extensive descriptive statistics. To run the Explore procedure,
More informationDESCRIPTIVE STATISTICS. The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses.
DESCRIPTIVE STATISTICS The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses. DESCRIPTIVE VS. INFERENTIAL STATISTICS Descriptive To organize,
More informationSimple Linear Regression Inference
Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not a OLS requirement. Therefore we have: Interpretation
More informationThe Dummy s Guide to Data Analysis Using SPSS
The Dummy s Guide to Data Analysis Using SPSS Mathematics 57 Scripps College Amy Gamble April, 2001 Amy Gamble 4/30/01 All Rights Rerserved TABLE OF CONTENTS PAGE Helpful Hints for All Tests...1 Tests
More informationThe importance of graphing the data: Anscombe s regression examples
The importance of graphing the data: Anscombe s regression examples Bruce Weaver Northern Health Research Conference Nipissing University, North Bay May 30-31, 2008 B. Weaver, NHRC 2008 1 The Objective
More informationCHAPTER 13 SIMPLE LINEAR REGRESSION. Opening Example. Simple Regression. Linear Regression
Opening Example CHAPTER 13 SIMPLE LINEAR REGREION SIMPLE LINEAR REGREION! Simple Regression! Linear Regression Simple Regression Definition A regression model is a mathematical equation that descries the
More informationGLM I An Introduction to Generalized Linear Models
GLM I An Introduction to Generalized Linear Models CAS Ratemaking and Product Management Seminar March 2009 Presented by: Tanya D. Havlicek, Actuarial Assistant 0 ANTITRUST Notice The Casualty Actuarial
More informationSession 7 Bivariate Data and Analysis
Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares
More informationClass 19: Two Way Tables, Conditional Distributions, Chi-Square (Text: Sections 2.5; 9.1)
Spring 204 Class 9: Two Way Tables, Conditional Distributions, Chi-Square (Text: Sections 2.5; 9.) Big Picture: More than Two Samples In Chapter 7: We looked at quantitative variables and compared the
More informationPremaster Statistics Tutorial 4 Full solutions
Premaster Statistics Tutorial 4 Full solutions Regression analysis Q1 (based on Doane & Seward, 4/E, 12.7) a. Interpret the slope of the fitted regression = 125,000 + 150. b. What is the prediction for
More informationInference for two Population Means
Inference for two Population Means Bret Hanlon and Bret Larget Department of Statistics University of Wisconsin Madison October 27 November 1, 2011 Two Population Means 1 / 65 Case Study Case Study Example
More informationCorrelation and Simple Linear Regression
Correlation and Simple Linear Regression We are often interested in studying the relationship among variables to determine whether they are associated with one another. When we think that changes in a
More informationDATA INTERPRETATION AND STATISTICS
PholC60 September 001 DATA INTERPRETATION AND STATISTICS Books A easy and systematic introductory text is Essentials of Medical Statistics by Betty Kirkwood, published by Blackwell at about 14. DESCRIPTIVE
More information5. Linear Regression
5. Linear Regression Outline.................................................................... 2 Simple linear regression 3 Linear model............................................................. 4
More informationAn analysis method for a quantitative outcome and two categorical explanatory variables.
Chapter 11 Two-Way ANOVA An analysis method for a quantitative outcome and two categorical explanatory variables. If an experiment has a quantitative outcome and two categorical explanatory variables that
More informationTRINITY COLLEGE. Faculty of Engineering, Mathematics and Science. School of Computer Science & Statistics
UNIVERSITY OF DUBLIN TRINITY COLLEGE Faculty of Engineering, Mathematics and Science School of Computer Science & Statistics BA (Mod) Enter Course Title Trinity Term 2013 Junior/Senior Sophister ST7002
More informationPart 2: Analysis of Relationship Between Two Variables
Part 2: Analysis of Relationship Between Two Variables Linear Regression Linear correlation Significance Tests Multiple regression Linear Regression Y = a X + b Dependent Variable Independent Variable
More information11. Analysis of Case-control Studies Logistic Regression
Research methods II 113 11. Analysis of Case-control Studies Logistic Regression This chapter builds upon and further develops the concepts and strategies described in Ch.6 of Mother and Child Health:
More information1 Basic ANOVA concepts
Math 143 ANOVA 1 Analysis of Variance (ANOVA) Recall, when we wanted to compare two population means, we used the 2-sample t procedures. Now let s expand this to compare k 3 population means. As with the
More informationBasic Statistics and Data Analysis for Health Researchers from Foreign Countries
Basic Statistics and Data Analysis for Health Researchers from Foreign Countries Volkert Siersma siersma@sund.ku.dk The Research Unit for General Practice in Copenhagen Dias 1 Content Quantifying association
More informationNonlinear Regression Functions. SW Ch 8 1/54/
Nonlinear Regression Functions SW Ch 8 1/54/ The TestScore STR relation looks linear (maybe) SW Ch 8 2/54/ But the TestScore Income relation looks nonlinear... SW Ch 8 3/54/ Nonlinear Regression General
More informationMultiple Regression: What Is It?
Multiple Regression Multiple Regression: What Is It? Multiple regression is a collection of techniques in which there are multiple predictors of varying kinds and a single outcome We are interested in
More informationComparing Nested Models
Comparing Nested Models ST 430/514 Two models are nested if one model contains all the terms of the other, and at least one additional term. The larger model is the complete (or full) model, and the smaller
More informationBusiness Statistics. Successful completion of Introductory and/or Intermediate Algebra courses is recommended before taking Business Statistics.
Business Course Text Bowerman, Bruce L., Richard T. O'Connell, J. B. Orris, and Dawn C. Porter. Essentials of Business, 2nd edition, McGraw-Hill/Irwin, 2008, ISBN: 978-0-07-331988-9. Required Computing
More informationRelationships Between Two Variables: Scatterplots and Correlation
Relationships Between Two Variables: Scatterplots and Correlation Example: Consider the population of cars manufactured in the U.S. What is the relationship (1) between engine size and horsepower? (2)
More informationIAPRI Quantitative Analysis Capacity Building Series. Multiple regression analysis & interpreting results
IAPRI Quantitative Analysis Capacity Building Series Multiple regression analysis & interpreting results How important is R-squared? R-squared Published in Agricultural Economics 0.45 Best article of the
More informationIntroduction to Analysis of Variance (ANOVA) Limitations of the t-test
Introduction to Analysis of Variance (ANOVA) The Structural Model, The Summary Table, and the One- Way ANOVA Limitations of the t-test Although the t-test is commonly used, it has limitations Can only
More informationSimple Predictive Analytics Curtis Seare
Using Excel to Solve Business Problems: Simple Predictive Analytics Curtis Seare Copyright: Vault Analytics July 2010 Contents Section I: Background Information Why use Predictive Analytics? How to use
More informationII. DISTRIBUTIONS distribution normal distribution. standard scores
Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,
More informationOne-Way Analysis of Variance
One-Way Analysis of Variance Note: Much of the math here is tedious but straightforward. We ll skim over it in class but you should be sure to ask questions if you don t understand it. I. Overview A. We
More informationChapter 5 Analysis of variance SPSS Analysis of variance
Chapter 5 Analysis of variance SPSS Analysis of variance Data file used: gss.sav How to get there: Analyze Compare Means One-way ANOVA To test the null hypothesis that several population means are equal,
More informationA Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution
A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 4: September
More informationRegression step-by-step using Microsoft Excel
Step 1: Regression step-by-step using Microsoft Excel Notes prepared by Pamela Peterson Drake, James Madison University Type the data into the spreadsheet The example used throughout this How to is a regression
More informationOne-Way ANOVA using SPSS 11.0. SPSS ANOVA procedures found in the Compare Means analyses. Specifically, we demonstrate
1 One-Way ANOVA using SPSS 11.0 This section covers steps for testing the difference between three or more group means using the SPSS ANOVA procedures found in the Compare Means analyses. Specifically,
More information1) Write the following as an algebraic expression using x as the variable: Triple a number subtracted from the number
1) Write the following as an algebraic expression using x as the variable: Triple a number subtracted from the number A. 3(x - x) B. x 3 x C. 3x - x D. x - 3x 2) Write the following as an algebraic expression
More informationOnce saved, if the file was zipped you will need to unzip it. For the files that I will be posting you need to change the preferences.
1 Commands in JMP and Statcrunch Below are a set of commands in JMP and Statcrunch which facilitate a basic statistical analysis. The first part concerns commands in JMP, the second part is for analysis
More informationDescriptive Statistics
Descriptive Statistics Primer Descriptive statistics Central tendency Variation Relative position Relationships Calculating descriptive statistics Descriptive Statistics Purpose to describe or summarize
More informationData Mining Techniques Chapter 5: The Lure of Statistics: Data Mining Using Familiar Tools
Data Mining Techniques Chapter 5: The Lure of Statistics: Data Mining Using Familiar Tools Occam s razor.......................................................... 2 A look at data I.........................................................
More informationCourse Objective This course is designed to give you a basic understanding of how to run regressions in SPSS.
SPSS Regressions Social Science Research Lab American University, Washington, D.C. Web. www.american.edu/provost/ctrl/pclabs.cfm Tel. x3862 Email. SSRL@American.edu Course Objective This course is designed
More informationAnswer: C. The strength of a correlation does not change if units change by a linear transformation such as: Fahrenheit = 32 + (5/9) * Centigrade
Statistics Quiz Correlation and Regression -- ANSWERS 1. Temperature and air pollution are known to be correlated. We collect data from two laboratories, in Boston and Montreal. Boston makes their measurements
More informationExample: Boats and Manatees
Figure 9-6 Example: Boats and Manatees Slide 1 Given the sample data in Table 9-1, find the value of the linear correlation coefficient r, then refer to Table A-6 to determine whether there is a significant
More informationDescriptive statistics; Correlation and regression
Descriptive statistics; and regression Patrick Breheny September 16 Patrick Breheny STA 580: Biostatistics I 1/59 Tables and figures Descriptive statistics Histograms Numerical summaries Percentiles Human
More informationDEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9
DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9 Analysis of covariance and multiple regression So far in this course,
More informationMEASURES OF VARIATION
NORMAL DISTRIBTIONS MEASURES OF VARIATION In statistics, it is important to measure the spread of data. A simple way to measure spread is to find the range. But statisticians want to know if the data are
More informationElementary Statistics Sample Exam #3
Elementary Statistics Sample Exam #3 Instructions. No books or telephones. Only the supplied calculators are allowed. The exam is worth 100 points. 1. A chi square goodness of fit test is considered to
More informationAnalysis of Variance ANOVA
Analysis of Variance ANOVA Overview We ve used the t -test to compare the means from two independent groups. Now we ve come to the final topic of the course: how to compare means from more than two populations.
More informationAdditional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm
Mgt 540 Research Methods Data Analysis 1 Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm http://web.utk.edu/~dap/random/order/start.htm
More informationStat 411/511 THE RANDOMIZATION TEST. Charlotte Wickham. stat511.cwick.co.nz. Oct 16 2015
Stat 411/511 THE RANDOMIZATION TEST Oct 16 2015 Charlotte Wickham stat511.cwick.co.nz Today Review randomization model Conduct randomization test What about CIs? Using a t-distribution as an approximation
More information2 Sample t-test (unequal sample sizes and unequal variances)
Variations of the t-test: Sample tail Sample t-test (unequal sample sizes and unequal variances) Like the last example, below we have ceramic sherd thickness measurements (in cm) of two samples representing
More informationInternational Statistical Institute, 56th Session, 2007: Phil Everson
Teaching Regression using American Football Scores Everson, Phil Swarthmore College Department of Mathematics and Statistics 5 College Avenue Swarthmore, PA198, USA E-mail: peverso1@swarthmore.edu 1. Introduction
More informationIntroduction to Quantitative Methods
Introduction to Quantitative Methods October 15, 2009 Contents 1 Definition of Key Terms 2 2 Descriptive Statistics 3 2.1 Frequency Tables......................... 4 2.2 Measures of Central Tendencies.................
More information