5. Linear Regression
Outline

- Simple linear regression: the linear model, small residuals, minimizing $\sum \hat\epsilon_i^2$, properties of residuals, regression in R, R output for the Davis data
- How good is the fit? Residual standard error, $R^2$, analysis of variance, $r$
- Multiple linear regression: two independent variables, statistical error, estimates and residuals, computing estimates, properties of residuals, $R^2$ and $\bar R^2$
- Ozone example: the data and R output
- Standardized coefficients: using hinge spread, using standard deviations, interpretation
- Summary
Outline

We have seen that linear regression has its limitations. However, it is worth studying linear regression because:
- Sometimes data (nearly) satisfy the assumptions.
- Sometimes the assumptions can be (nearly) satisfied by transforming the data.
- There are many useful extensions of linear regression: weighted regression, robust regression, nonparametric regression, and generalized linear models.

How does linear regression work? We start with one independent variable.

Simple linear regression

Linear model

Linear statistical model: $Y = \alpha + \beta X + \epsilon$. Here $\alpha$ is the intercept of the line and $\beta$ is its slope: a one-unit increase in $X$ gives a $\beta$-unit increase in $Y$. (See figure on blackboard.)

$\epsilon$ is called a statistical error. It accounts for the fact that the statistical model does not give an exact fit to the data. Statistical errors can have a fixed and a random component.
- Fixed component: arises when the true relation is not linear (also called lack-of-fit error, or bias); we assume this component is negligible.
- Random component: due to measurement errors in $Y$, variables that are not included in the model, and random variation.

Linear model

Data: $(X_1, Y_1), \dots, (X_n, Y_n)$. The model then gives $Y_i = \alpha + \beta X_i + \epsilon_i$, where $\epsilon_i$ is the statistical error for the $i$th case. Thus the observed value $Y_i$ equals $\alpha + \beta X_i$, except that $\epsilon_i$, an unknown random quantity, is added on. The statistical errors $\epsilon_i$ cannot be observed. Why?

We assume:
- $E(\epsilon_i) = 0$ for all $i = 1, \dots, n$
- $\mathrm{Var}(\epsilon_i) = \sigma^2$ for all $i = 1, \dots, n$
- $\mathrm{Cov}(\epsilon_i, \epsilon_j) = 0$ for all $i \neq j$
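A small simulation makes the model and its assumptions concrete. The following R sketch is not from the slides; the names and parameter values are invented for illustration:

    # Minimal simulation sketch (not from the slides; values are arbitrary):
    # generate data satisfying Y = alpha + beta*X + epsilon.
    set.seed(1)                            # for reproducibility
    n     <- 100
    alpha <- 2                             # true intercept
    beta  <- 0.5                           # true slope
    sigma <- 1                             # true error standard deviation
    x   <- runif(n, 0, 10)                 # one independent variable
    eps <- rnorm(n, mean = 0, sd = sigma)  # mean 0, constant variance, uncorrelated
    y   <- alpha + beta * x + eps          # observed responses
    plot(x, y)                             # roughly linear scatter
    abline(alpha, beta)                    # the (in practice unknown) population line

In real data only x and y would be observable; alpha, beta, sigma and eps play the role of the unobservable population quantities and statistical errors.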
Linear model

The population parameters $\alpha$, $\beta$ and $\sigma$ are unknown. We use lower-case Greek letters for population parameters. We compute estimates of the population parameters: $\hat\alpha$, $\hat\beta$ and $\hat\sigma$.

$\hat Y_i = \hat\alpha + \hat\beta X_i$ is called the fitted value. (See figure on blackboard.) $\hat\epsilon_i = Y_i - \hat Y_i = Y_i - (\hat\alpha + \hat\beta X_i)$ is called the residual. The residuals are observable, and can be used to check assumptions on the statistical errors $\epsilon_i$. Points above the line have positive residuals, and points below the line have negative residuals. A line that fits the data well has small residuals.

Small residuals

We want the residuals to be small in magnitude, because large negative residuals are as bad as large positive residuals. So we cannot simply require $\sum \hat\epsilon_i = 0$. In fact, any line through the point of means $(\bar X, \bar Y)$ satisfies $\sum \hat\epsilon_i = 0$ (derivation on board). Two immediate solutions:
- Require $\sum |\hat\epsilon_i|$ to be small.
- Require $\sum \hat\epsilon_i^2$ to be small.
We consider the second option because working with squares is mathematically easier than working with absolute values (for example, it is easier to take derivatives). However, the first option is more resistant to outliers. Eyeball regression line (see overhead).
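As a quick numerical check (my own illustration, reusing x and y from the simulation sketch above), any line through $(\bar X, \bar Y)$ indeed has residuals summing to zero, which is why the sum of residuals alone cannot pick out a best line:

    # Any line through (xbar, ybar) has residuals that sum to zero,
    # so sum(residuals) cannot distinguish good fits from bad ones.
    xbar <- mean(x); ybar <- mean(y)
    b <- 0.123                    # an arbitrary (deliberately bad) slope
    a <- ybar - b * xbar          # intercept forcing the line through (xbar, ybar)
    res <- y - (a + b * x)        # residuals from this non-least-squares line
    sum(res)                      # essentially zero, up to rounding error
    sum(abs(res)); sum(res^2)     # the two candidate criteria do react to the bad slope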
Minimize $\sum \hat\epsilon_i^2$

SSE stands for Sum of Squared Errors. We want to find the pair $(\hat\alpha, \hat\beta)$ that minimizes
$\mathrm{SSE}(\hat\alpha, \hat\beta) := \sum_i (Y_i - \hat\alpha - \hat\beta X_i)^2.$
Thus, we set the partial derivatives of $\mathrm{SSE}(\hat\alpha, \hat\beta)$ with respect to $\hat\alpha$ and $\hat\beta$ equal to zero:
$\frac{\partial \mathrm{SSE}(\hat\alpha, \hat\beta)}{\partial \hat\alpha} = \sum_i (-1)(2)(Y_i - \hat\alpha - \hat\beta X_i) = 0 \;\Rightarrow\; \sum_i (Y_i - \hat\alpha - \hat\beta X_i) = 0,$
$\frac{\partial \mathrm{SSE}(\hat\alpha, \hat\beta)}{\partial \hat\beta} = \sum_i (-X_i)(2)(Y_i - \hat\alpha - \hat\beta X_i) = 0 \;\Rightarrow\; \sum_i X_i (Y_i - \hat\alpha - \hat\beta X_i) = 0.$
We now have two normal equations in the two unknowns $\hat\alpha$ and $\hat\beta$. The solution is (derivation on board, p. 18 of script):
$\hat\beta = \frac{\sum_i (X_i - \bar X)(Y_i - \bar Y)}{\sum_i (X_i - \bar X)^2}, \qquad \hat\alpha = \bar Y - \hat\beta \bar X.$

Properties of residuals

- $\sum_i \hat\epsilon_i = 0$, since the regression line goes through the point $(\bar X, \bar Y)$.
- $\sum_i X_i \hat\epsilon_i = 0$ and $\sum_i \hat Y_i \hat\epsilon_i = 0$: the residuals are uncorrelated with the independent variable $X_i$ and with the fitted values $\hat Y_i$.
- Least squares estimates are uniquely defined as long as the values of the independent variable are not all identical; if they are all identical, the denominator $\sum_i (X_i - \bar X)^2 = 0$ (see figure on board).

Regression in R

model <- lm(y ~ x)
summary(model)

- Coefficients: model$coef or coef(model) (alias: coefficients)
- Fitted mean values: model$fitted or fitted(model) (alias: fitted.values)
- Residuals: model$resid or resid(model) (alias: residuals)
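The closed-form solution can be checked directly against lm(). A minimal sketch of my own, reusing the simulated x and y from above:

    # Hand-computed least squares estimates versus lm().
    beta_hat  <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
    alpha_hat <- mean(y) - beta_hat * mean(x)
    model <- lm(y ~ x)
    c(alpha_hat, beta_hat)        # estimates from the closed-form solution
    coef(model)                   # the same values from lm()
    sum(resid(model))             # ~0: first normal equation
    sum(x * resid(model))         # ~0: second normal equation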
R output - Davis data

> model <- lm(weight ~ repwt)
> summary(model)

Call:
lm(formula = weight ~ repwt)

Residuals:
   Min     1Q Median     3Q    Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)
repwt                                     <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: on 99 degrees of freedom
Multiple R-Squared: , Adjusted R-squared:
F-statistic: 1025 on 1 and 99 DF, p-value: < 2.2e-16

(Most numeric values in this output were lost in transcription.)

How good is the fit?

Residual standard error

The residual standard error is
$\hat\sigma = \sqrt{\mathrm{SSE}/(n-2)} = \sqrt{\sum_i \hat\epsilon_i^2 / (n-2)}.$
Here $n-2$ is the degrees of freedom (we lose two degrees of freedom because we estimate the two parameters $\alpha$ and $\beta$). For the Davis data, $\hat\sigma \approx 2$. Interpretation: on average, using the least squares regression line to predict weight from reported weight results in an error of about 2 kg. If the residuals are approximately normal, then about 2/3 of them are in the range $\pm 2$ and about 95% are in the range $\pm 4$.

$R^2$

We compare our fit to a null model $Y = \alpha' + \epsilon'$, in which we don't use the independent variable $X$. We define the fitted value $\hat Y_i' = \hat\alpha'$ and the residual $\hat\epsilon_i' = Y_i - \hat Y_i'$. We find $\hat\alpha'$ by minimizing $\sum_i (\hat\epsilon_i')^2 = \sum_i (Y_i - \hat\alpha')^2$. This gives $\hat\alpha' = \bar Y$. Note that $\sum_i (Y_i - \hat Y_i)^2 = \sum_i \hat\epsilon_i^2 \le \sum_i (\hat\epsilon_i')^2 = \sum_i (Y_i - \bar Y)^2$ (why?).
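The reported residual standard error is just $\sqrt{\mathrm{SSE}/(n-2)}$, which is easy to verify. A small check of my own, on the simulated model from the earlier sketches rather than the Davis data:

    # Residual standard error by hand versus summary().
    sse <- sum(resid(model)^2)
    sqrt(sse / (n - 2))           # hand-computed residual standard error
    summary(model)$sigma          # the same value reported by summary()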
$R^2$

$\mathrm{TSS} = \sum_i (\hat\epsilon_i')^2 = \sum_i (Y_i - \bar Y)^2$ is the total sum of squares: the sum of squared errors in the model that does not use the independent variable. $\mathrm{SSE} = \sum_i \hat\epsilon_i^2 = \sum_i (Y_i - \hat Y_i)^2$ is the sum of squared errors in the linear model. The regression sum of squares, $\mathrm{RegSS} = \mathrm{TSS} - \mathrm{SSE}$, gives the reduction in squared error due to the linear regression.

$R^2 = \mathrm{RegSS}/\mathrm{TSS} = 1 - \mathrm{SSE}/\mathrm{TSS}$ is the proportional reduction in squared error due to the linear regression. Thus, $R^2$ is the proportion of the variation in $Y$ that is explained by the linear regression. $R^2$ has no units: it doesn't change when the scale is changed. Good values of $R^2$ vary widely in different fields of application.

Analysis of variance

$\sum_i (Y_i - \hat Y_i)(\hat Y_i - \bar Y) = 0$ (will be shown later geometrically), and $\mathrm{RegSS} = \sum_i (\hat Y_i - \bar Y)^2$ (derivation on board). Hence $\mathrm{TSS} = \mathrm{SSE} + \mathrm{RegSS}$:
$\sum_i (Y_i - \bar Y)^2 = \sum_i (Y_i - \hat Y_i)^2 + \sum_i (\hat Y_i - \bar Y)^2.$
This decomposition is called analysis of variance.

r

The correlation coefficient is $r = \pm\sqrt{R^2}$ (take the positive root if $\hat\beta > 0$ and the negative root if $\hat\beta < 0$). $r$ gives the strength and direction of the relationship. An alternative formula is
$r = \frac{\sum_i (X_i - \bar X)(Y_i - \bar Y)}{\sqrt{\sum_i (X_i - \bar X)^2 \sum_i (Y_i - \bar Y)^2}}.$
Using this formula, we can write $\hat\beta = r \, \mathrm{SD}_Y / \mathrm{SD}_X$ (derivation on board). In the eyeball regression, the steep line had slope $\mathrm{SD}_Y / \mathrm{SD}_X$, and the other line had the correct slope $r \, \mathrm{SD}_Y / \mathrm{SD}_X$. $r$ is symmetric in $X$ and $Y$, and it has no units: it doesn't change when the scale is changed.
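These identities can all be confirmed numerically. A sketch of my own, continuing with the simulated model from above:

    # Check TSS = SSE + RegSS and the relations involving r.
    tss   <- sum((y - mean(y))^2)
    sse   <- sum((y - fitted(model))^2)
    regss <- sum((fitted(model) - mean(y))^2)
    all.equal(tss, sse + regss)             # ANOVA decomposition
    r <- cor(x, y)
    c(r^2, regss / tss)                     # R^2 computed two ways
    c(coef(model)["x"], r * sd(y) / sd(x))  # beta-hat = r * SD_Y / SD_X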
Multiple linear regression

2 independent variables

$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \epsilon$ (see p. 9 of script). This describes a plane in the 3-dimensional space $\{X_1, X_2, Y\}$ (see figure):
- $\alpha$ is the intercept;
- $\beta_1$ is the increase in $Y$ associated with a one-unit increase in $X_1$ when $X_2$ is held constant;
- $\beta_2$ is the increase in $Y$ for a one-unit increase in $X_2$ when $X_1$ is held constant.

Statistical error

Data: $(X_{11}, X_{12}, Y_1), \dots, (X_{n1}, X_{n2}, Y_n)$. The model gives $Y_i = \alpha + \beta_1 X_{i1} + \beta_2 X_{i2} + \epsilon_i$, where $\epsilon_i$ is the statistical error for the $i$th case. Thus the observed value $Y_i$ equals $\alpha + \beta_1 X_{i1} + \beta_2 X_{i2}$, except that $\epsilon_i$, an unknown random quantity, is added on. We make the same assumptions about $\epsilon$ as before:
- $E(\epsilon_i) = 0$ for all $i = 1, \dots, n$
- $\mathrm{Var}(\epsilon_i) = \sigma^2$ for all $i = 1, \dots, n$
- $\mathrm{Cov}(\epsilon_i, \epsilon_j) = 0$ for all $i \neq j$
Compare to the corresponding assumptions in the script.

Estimates and residuals

The population parameters $\alpha$, $\beta_1$, $\beta_2$ and $\sigma$ are unknown. We compute estimates of the population parameters: $\hat\alpha$, $\hat\beta_1$, $\hat\beta_2$ and $\hat\sigma$. $\hat Y_i = \hat\alpha + \hat\beta_1 X_{i1} + \hat\beta_2 X_{i2}$ is called the fitted value, and $\hat\epsilon_i = Y_i - \hat Y_i = Y_i - (\hat\alpha + \hat\beta_1 X_{i1} + \hat\beta_2 X_{i2})$ is called the residual. The residuals are observable, and can be used to check assumptions on the statistical errors $\epsilon_i$. Points above the plane have positive residuals, and points below the plane have negative residuals. A plane that fits the data well has small residuals.
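To make the two-predictor case concrete, here is a short simulation in the same spirit as before (my own sketch; the names x1, x2, y2, model2 and the parameter values are invented):

    # Simulate from a plane Y = alpha + beta1*X1 + beta2*X2 + epsilon and fit it.
    set.seed(2)
    x1 <- runif(n, 0, 10)
    x2 <- runif(n, 0, 10)
    y2 <- 1 + 2 * x1 - 0.5 * x2 + rnorm(n)  # true plane plus random error
    model2 <- lm(y2 ~ x1 + x2)
    coef(model2)                  # estimates of alpha, beta1, beta2
    head(resid(model2))           # residuals: vertical distances to the fitted plane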
Computing estimates

The triple $(\hat\alpha, \hat\beta_1, \hat\beta_2)$ minimizes
$\mathrm{SSE}(\alpha, \beta_1, \beta_2) = \sum_i \hat\epsilon_i^2 = \sum_i (Y_i - \alpha - \beta_1 X_{i1} - \beta_2 X_{i2})^2.$
We can again take partial derivatives and set these equal to zero. This gives three equations in the three unknowns $\alpha$, $\beta_1$ and $\beta_2$. Solving these normal equations gives the regression coefficients $\hat\alpha$, $\hat\beta_1$ and $\hat\beta_2$. Least squares estimates are unique unless one of the independent variables is invariant, or the independent variables are perfectly collinear. The same procedure works for $p$ independent variables $X_1, \dots, X_p$; however, it is then easier to use matrix notation (see board and section 1.3 of script). In R: model <- lm(y ~ x1 + x2)

Properties of residuals

- $\sum_i \hat\epsilon_i = 0$.
- The residuals $\hat\epsilon_i$ are uncorrelated with the fitted values $\hat Y_i$ and with each of the independent variables $X_1, \dots, X_p$.
- The standard error of the residuals, $\hat\sigma = \sqrt{\sum_i \hat\epsilon_i^2 / (n-p-1)}$, gives the average size of the residuals. Here $n-p-1$ is the degrees of freedom (we lose $p+1$ degrees of freedom because we estimate the $p+1$ parameters $\alpha, \beta_1, \dots, \beta_p$).

$R^2$ and $\bar R^2$

$\mathrm{TSS} = \sum_i (Y_i - \bar Y)^2$. $\mathrm{SSE} = \sum_i (Y_i - \hat Y_i)^2 = \sum_i \hat\epsilon_i^2$. $\mathrm{RegSS} = \mathrm{TSS} - \mathrm{SSE} = \sum_i (\hat Y_i - \bar Y)^2$. $R^2 = \mathrm{RegSS}/\mathrm{TSS} = 1 - \mathrm{SSE}/\mathrm{TSS}$ is the proportion of variation in $Y$ that is captured by its linear regression on the $X$'s. $R^2$ can never decrease when we add an extra variable to the model. Why? The adjusted version, based on corrected sums of squares,
$\bar R^2 = 1 - \frac{\mathrm{SSE}/(n-p-1)}{\mathrm{TSS}/(n-1)},$
penalizes $R^2$ when there are extra variables in the model. $R^2$ and $\bar R^2$ differ very little if the sample size is large.
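In matrix notation the normal equations read $(X^\top X)\hat\beta = X^\top Y$, and $\bar R^2$ is a one-liner. A sketch of my own, reusing model2 from the simulation above:

    # Solve the normal equations directly and recompute adjusted R^2.
    X <- cbind(1, x1, x2)             # design matrix with an intercept column
    b <- solve(t(X) %*% X, t(X) %*% y2)
    drop(b)                           # alpha-hat, beta1-hat, beta2-hat
    coef(model2)                      # the same values from lm()
    p   <- 2                          # number of independent variables
    sse <- sum(resid(model2)^2)
    tss <- sum((y2 - mean(y2))^2)
    1 - (sse / (n - p - 1)) / (tss / (n - 1))  # adjusted R^2 by hand
    summary(model2)$adj.r.squared     # the same value from summary()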
Ozone example

Data from Sandberg, Basso, Okin (1978):
- SF = summer quarter maximum hourly average ozone reading in parts per million in San Francisco
- SJ = same, but in San Jose
- YEAR = year of the ozone measurement
- RAIN = average winter precipitation in centimeters in the San Francisco Bay area for the preceding two winters

Research question: How does SF depend on YEAR and RAIN? Think about the assumptions: which one may be violated?

Ozone data

(Table of YEAR, RAIN, SF and SJ values; the numeric entries were lost in transcription.)
R output

> model <- lm(sf ~ year + rain)
> summary(model)

Call:
lm(formula = sf ~ year + rain)

Residuals:
   Min     1Q Median     3Q    Max

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)                              e-05 ***
year                                     e-05 ***
rain                                          **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: on 10 degrees of freedom
Multiple R-Squared: , Adjusted R-squared:
F-statistic: on 2 and 10 DF, p-value: 6.286e

(Most numeric values in this output were lost in transcription.)

Standardized coefficients

We often want to compare the coefficients of different independent variables. When the independent variables are measured in the same units, this is straightforward. If the independent variables are not commensurable, we can perform a limited comparison by rescaling the regression coefficients in relation to a measure of variation:
- using hinge spread
- using standard deviations

Using hinge spread

Hinge spread = interquartile range (IQR). Let $\mathrm{IQR}_1, \dots, \mathrm{IQR}_p$ be the IQRs of $X_1, \dots, X_p$. We start with
$Y_i = \hat\alpha + \hat\beta_1 X_{i1} + \dots + \hat\beta_p X_{ip} + \hat\epsilon_i.$
This can be rewritten as
$Y_i = \hat\alpha + (\hat\beta_1 \mathrm{IQR}_1) \frac{X_{i1}}{\mathrm{IQR}_1} + \dots + (\hat\beta_p \mathrm{IQR}_p) \frac{X_{ip}}{\mathrm{IQR}_p} + \hat\epsilon_i.$
Let $Z_{ij} = X_{ij} / \mathrm{IQR}_j$ for $j = 1, \dots, p$ and $i = 1, \dots, n$, and let $\hat\beta_j^* = \hat\beta_j \, \mathrm{IQR}_j$ for $j = 1, \dots, p$. Then we get
$Y_i = \hat\alpha + \hat\beta_1^* Z_{i1} + \dots + \hat\beta_p^* Z_{ip} + \hat\epsilon_i.$
$\hat\beta_j^* = \hat\beta_j \, \mathrm{IQR}_j$ is called the standardized regression coefficient.
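A sketch of my own, computing hinge-spread standardized coefficients for the simulated two-predictor model from earlier (not the actual ozone data, whose values were lost above):

    # Standardize coefficients by the hinge spread (IQR).
    iqr1 <- IQR(x1); iqr2 <- IQR(x2)
    b <- coef(model2)[c("x1", "x2")]
    b * c(iqr1, iqr2)                 # standardized coefficients beta_j * IQR_j
    # Equivalently, refit on the rescaled predictors Z_j = X_j / IQR_j:
    coef(lm(y2 ~ I(x1 / iqr1) + I(x2 / iqr2)))[-1]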
Interpretation

Increasing $Z_j$ by 1 while holding the other $Z_l$ ($l \neq j$) constant is associated, on average, with an increase of $\hat\beta_j^*$ in $Y$. Increasing $Z_j$ by 1 means that $X_j$ is increased by one IQR of $X_j$. So increasing $X_j$ by one IQR of $X_j$ while holding the other $X_l$ ($l \neq j$) constant is associated, on average, with an increase of $\hat\beta_j^*$ in $Y$.

Ozone example:

Variable  Coeff.  Hinge spread  Stand. coeff.
Year
Rain

(The numeric entries of this table were lost in transcription.)

Using st.dev.

Let $S_Y$ be the standard deviation of $Y$, and let $S_1, \dots, S_p$ be the standard deviations of $X_1, \dots, X_p$. We start with
$Y_i = \hat\alpha + \hat\beta_1 X_{i1} + \dots + \hat\beta_p X_{ip} + \hat\epsilon_i.$
This can be rewritten as (derivation on board):
$\frac{Y_i - \bar Y}{S_Y} = \left(\hat\beta_1 \frac{S_1}{S_Y}\right) \frac{X_{i1} - \bar X_1}{S_1} + \dots + \left(\hat\beta_p \frac{S_p}{S_Y}\right) \frac{X_{ip} - \bar X_p}{S_p} + \frac{\hat\epsilon_i}{S_Y}.$
Let $Z_{iY} = (Y_i - \bar Y)/S_Y$ and $Z_{ij} = (X_{ij} - \bar X_j)/S_j$ for $j = 1, \dots, p$, and let $\hat\beta_j^* = \hat\beta_j S_j / S_Y$ and $\hat\epsilon_i' = \hat\epsilon_i / S_Y$. Then we get
$Z_{iY} = \hat\beta_1^* Z_{i1} + \dots + \hat\beta_p^* Z_{ip} + \hat\epsilon_i'.$
$\hat\beta_j^* = \hat\beta_j S_j / S_Y$ is called the standardized regression coefficient.

Interpretation

Increasing $Z_j$ by 1 while holding the other $Z_l$ ($l \neq j$) constant is associated, on average, with an increase of $\hat\beta_j^*$ in $Z_Y$. Increasing $Z_j$ by 1 means that $X_j$ is increased by one SD of $X_j$; increasing $Z_Y$ by 1 means that $Y$ is increased by one SD of $Y$. So increasing $X_j$ by one SD of $X_j$ while holding the other $X_l$ ($l \neq j$) constant is associated, on average, with an increase of $\hat\beta_j^*$ SDs of $Y$ in $Y$.
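The same check works for SD-based standardization; again a sketch of my own on the simulated model:

    # Standardize coefficients by standard deviations: beta_j * S_j / S_Y.
    b <- coef(model2)[c("x1", "x2")]
    b * c(sd(x1), sd(x2)) / sd(y2)    # standardized coefficients
    # Equivalently, refit on z-scores (scale() centers and divides by the SD):
    coef(lm(scale(y2) ~ scale(x1) + scale(x2)))[-1]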
Ozone example

Variable  Coeff.  St.dev(variable)  St.dev(Y)  Stand. coeff.
Year
Rain

(The numeric entries of this table were lost in transcription.)

Both methods (using the hinge spread or standard deviations) allow only a very limited comparison. They both assume that predictors with a large spread are more important, and that need not be the case.

Summary

- Linear statistical model: $Y = \alpha + \beta_1 X_1 + \dots + \beta_p X_p + \epsilon$.
- We assume that the statistical errors $\epsilon$ have mean zero, constant standard deviation $\sigma$, and are uncorrelated.
- The population parameters $\alpha, \beta_1, \dots, \beta_p$ and $\sigma$ cannot be observed. The statistical errors $\epsilon$ cannot be observed either.
- We define the fitted value $\hat Y_i = \hat\alpha + \hat\beta_1 X_{i1} + \dots + \hat\beta_p X_{ip}$ and the residual $\hat\epsilon_i = Y_i - \hat Y_i$. We can use the residuals to check the assumptions about the statistical errors.
- We compute estimates $\hat\alpha, \hat\beta_1, \dots, \hat\beta_p$ for $\alpha, \beta_1, \dots, \beta_p$ by minimizing the residual sum of squares $\mathrm{SSE} = \sum_i \hat\epsilon_i^2 = \sum_i (Y_i - (\hat\alpha + \hat\beta_1 X_{i1} + \dots + \hat\beta_p X_{ip}))^2$.
- Interpretation of the coefficients?

Summary

To measure how good the fit is, we can use:
- the residual standard error $\hat\sigma = \sqrt{\mathrm{SSE}/(n-p-1)}$
- the multiple correlation coefficient $R^2$
- the adjusted multiple correlation coefficient $\bar R^2$
- the correlation coefficient $r$
- analysis of variance (ANOVA): $\mathrm{TSS} = \mathrm{SSE} + \mathrm{RegSS}$
- standardized regression coefficients
