# Module 5: Multiple Regression Analysis

Using Statistical Data to Make Decisions: Multiple Regression Analysis

Tom Ilvento, University of Delaware, College of Agriculture and Natural Resources, Food and Resource Economics

In the last module we looked at the regression model with a single independent variable helping to explain a single dependent variable. We called this a bivariate or simple regression because there is only a single independent variable. While this is a good place to start, the world is often more complex than a single explanation or driver for our model. Multiple regression allows us to have many independent variables in a model and examine how each one uniquely helps to explain or predict a single dependent variable.

This module will introduce multiple regression as an extension of bivariate regression. The output of the regression model will be very similar, except there will be several coefficients to examine. In addition, the interpretation of the regression coefficients changes in multiple regression: we now look at the effect of an independent variable on the dependent variable while controlling for the other independent variables in the model. The notion of statistical control is a very important feature of regression, and it makes regression a powerful tool when used properly.

## Basics of Multiple Regression

In review, we said that regression fits a linear function to a set of data. It requires a single dependent variable that we are trying to model, explain, or understand. The dependent variable must be a quantitative variable (preferably measured on a continuous level). Regression estimates a set of coefficients that represent the effect of a single independent variable, or a set of independent variables, on the dependent variable. The independent variables can be measured on a qualitative level (as in categorical variables represented by dummy variables), an ordinal level, or a continuous level.
**Key Objectives**

- Understand how the regression model changes with the introduction of several independent variables, including how to interpret the coefficients
- Understand the assumptions underlying regression
- Understand how to use regression to represent a nonlinear function or relationship
- Understand how dummy variables are interpreted in multiple regression

**In this Module We Will:**

- Run regression using Excel with many independent variables
- Look at and interpret the assumptions in regression
- Estimate a nonlinear function using regression, and a trend analysis with seasons represented by dummy variables

In multiple regression we have more than one independent variable in the model. This allows us to fit a more sophisticated model with several variables that help explain a dependent variable. For example, catalog sales may be a function of more than just a person's salary. Factors such as a person's age, whether they own their home, or the number of catalogs sent to the person may also help explain sales. Multiple regression allows us to include additional variables in the model and estimate their effects on the dependent variable as well.

In multiple regression we still estimate a linear equation which can be used for prediction. For a case with three independent variables we estimate:

$$\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_{i1} + \hat{\beta}_2 X_{i2} + \hat{\beta}_3 X_{i3}$$

Once we add additional variables into the regression model, several things change, while much remains the same. First let's look at what remains the same. The output of regression will look relatively the same. We will generate an R Square for our model, interpreted as the proportion of the variability in Y accounted for by all the independent variables in the model. The model will include a single measure of the standard error, which reflects an assumption of constant variance of Y for all levels of the independent variables in the model. Much of the output in multiple regression remains the same: R Square; the ANOVA table, with the decomposition of the sums of squares and the F-test; and the table of the coefficients.

The ANOVA table will look similar in multiple regression. The Total Sum of Squares,

$$TSS = \sum_{i=1}^{n} (Y_i - \bar{Y})^2$$

will be decomposed into a part explained by our model (Sum of Squares Regression) and a part unexplained (Sum of Squares Error or Residual). These will be divided by their respective degrees of freedom.
The SSR will be divided by its degrees of freedom, which is the number of independent variables in the model (denoted as k). Once we divide the sums of squares by their degrees of freedom, they are noted as Mean Squared Regression (MSR) and Mean Squared Error (MSE). The F-test in multiple regression tests whether at least one of the independent variables significantly contributes to explaining the dependent variable. The F-test is still the ratio of two variances - MSR divided by MSE - but the test is of a null hypothesis that all of the coefficients in the model are zero. In other words, an F-test with a significance level less than .05 indicates that at least one of the variables in the model helps to explain the dependent variable.
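As a quick sketch of how these quantities fit together, the following few lines use made-up sums of squares (the values are hypothetical and not from any model in this module):

```python
# ANOVA quantities for a multiple regression, with hypothetical values.
n, k = 25, 5             # sample size and number of independent variables
tss = 1000.0             # Total Sum of Squares (hypothetical)
sse = 200.0              # Sum of Squares Error (hypothetical)

ssr = tss - sse          # Sum of Squares Regression
msr = ssr / k            # Mean Squared Regression
mse = sse / (n - k - 1)  # Mean Squared Error
f_stat = msr / mse       # overall F-test statistic
r_square = ssr / tss     # proportion of variability in Y explained

print(r_square, round(f_stat, 1))  # → 0.8 15.2
```

A large F (here 15.2) relative to the F distribution with k and n - k - 1 degrees of freedom is what produces the small p-value reported by Excel.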

The remainder of the output will also look similar, except there will be an estimated regression coefficient (slope coefficient) for each independent variable in the model, along with a standard error, t-test, p-value, and confidence interval.

The most important change in multiple regression is that the coefficients for the independent variables have a different interpretation. This change reflects that the multivariate relationship of an independent variable with the dependent variable won't necessarily be the same as the bivariate relationship. If we regress Y on X1 and generate an estimated slope coefficient, b1, we interpret this coefficient as the effect of X1 on Y. However, if there is another independent variable in the model, the coefficient is likely to change (represented by b1*). The reason for this change is that in multiple regression, coefficients are estimated holding the other variables in the model constant. The slope coefficient for X1 is now the change in Y for a unit change in X1, holding all other independent variables in the model constant. We take into account the other independent variables when estimating the impact of X1 by incorporating the covariance of X1 with all the other independent variables in the model.

The way that multiple regression solves for the coefficients involves matrix algebra and solving simultaneous equations, topics beyond the scope of this module. Excel and other software packages will do this for us, and we need not worry about the actual equations. We do need to have a handle on why the coefficients are different, and on their interpretation. The reason the coefficients change is that the independent variables are often correlated with each other, and this correlation may also be shared with the dependent variable. For example, one's age and salary are often related, and both are related to catalog sales.
Multiple regression takes into account the covariance of the independent variables when estimating the slope coefficients, attempting to estimate the unique effect of each variable, independent of the other variables in the model. The biggest change with multiple regression is that we estimate several slope coefficients at the same time. The interpretation of each coefficient is the effect of a unit change in the independent variable on the dependent variable, holding constant all other independent variables in the model. Compared to the bivariate regression, controlling for the other independent variables may:

- Increase the strength of the relationship between an independent variable (X1) and the dependent variable (Y)
- Decrease the strength of the relationship
- Reverse the sign (e.g., from positive to negative)
- Leave it relatively unchanged

In the special case where X1 is uncorrelated with the other independent variables in the model (i.e., it is independent of the other Xs in the model), the bivariate regression estimate b1 will equal the multivariate regression estimate b1*. Otherwise we should expect some change in the multiple regression estimate.

The ability to estimate the effect of an independent variable (X1) independent of the other independent variables in the model is a very powerful and compelling feature of regression. It allows us to use statistical control as opposed to control via an experimental design. If we cannot use a design to randomly assign subjects to different treatments, the next best alternative is to estimate effects while controlling for other variables in the model. For example, if we are interested in understanding the relationship of education to income, we can't randomly assign different levels of education to men and women, different age groups, or different racial groups, and then observe the resulting income. We have to observe education and income as they are distributed in the sample. Regression allows us to estimate the effect of education while controlling for other factors such as gender, age, or race. This allows us to make a better estimate of the effect of education, separate from the other independent variables in the model. Hence its popularity in business applications, the social sciences, medicine, and nutrition.

However, statistical control in regression comes at a price. We can never be certain that we are controlling for all the relevant variables in the model - we may be leaving something out that is unmeasured or not included in the data collection. Leaving an important variable out of the model is called a specification error and may lead to biased results.
If there is high correlation between X1 and the other independent variables, we may have a problem estimating our coefficients. This is called collinearity - when X1 is highly correlated with one other independent variable - or multicollinearity - when X1 is highly correlated with a set of independent variables. If there is too much collinearity, we can't estimate the effect of X1 very well, and our estimate will be unstable and poor. Extreme collinearity means the regression can't be estimated at all! More will be said about this under the assumptions of the regression model.

Regression analysis does come with certain requirements and assumptions that must hold in order to effectively run the models and to make statistical inferences. However, regression analysis is fairly robust - small departures from some of the assumptions do not lead to serious consequences.
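How a coefficient changes once a correlated variable enters the model can be seen in a small sketch. The data below are invented, and the `ols` helper is a bare-bones least-squares solver written for illustration - it does (in miniature) what Excel does behind the scenes:

```python
def ols(ys, xcols):
    """Least-squares coefficients [intercept, b1, b2, ...] found by
    solving the normal equations (X'X)b = X'y with Gaussian elimination."""
    rows = [[1.0] + [col[i] for col in xcols] for i in range(len(ys))]
    p = len(rows[0])
    a = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    v = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    for i in range(p):                      # forward elimination with pivoting
        piv = max(range(i, p), key=lambda r: abs(a[r][i]))
        a[i], a[piv] = a[piv], a[i]
        v[i], v[piv] = v[piv], v[i]
        for r in range(i + 1, p):
            f = a[r][i] / a[i][i]
            for c in range(i, p):
                a[r][c] -= f * a[i][c]
            v[r] -= f * v[i]
    b = [0.0] * p
    for i in reversed(range(p)):            # back substitution
        b[i] = (v[i] - sum(a[i][c] * b[c] for c in range(i + 1, p))) / a[i][i]
    return b

# Invented data: Y depends on both X1 and X2, and X1 and X2 are correlated.
x1 = [1, 2, 3, 4, 5, 6]
x2 = [1, 1, 2, 2, 3, 3]
y = [2 * u + 3 * w for u, w in zip(x1, x2)]   # true effects: 2 and 3

b_bivariate = ols(y, [x1])[1]       # slope of Y on X1 alone: about 3.37
b_multiple = ols(y, [x1, x2])[1]    # slope of X1 controlling for X2: 2.0
```

The bivariate slope absorbs some of X2's effect because X1 and X2 move together; once X2 is in the model, the coefficient for X1 drops back to its true unique effect.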

## Assumptions Underlying Regression

Regression, like most statistical techniques, has a set of underlying assumptions that are expected to be in place if we are to have confidence in estimating a model. Some of the assumptions are required to make an estimate, even if the only goal is to describe a set of data. Other assumptions are required if we are going to make an inference from a sample to a population.

**Requirements of Regression**

- Y is measured as a continuous level variable - not a dichotomy or ordinal measurement
- The independent variables can be continuous, dichotomies, or ordinal
- The independent variables are not highly correlated with each other
- The number of independent variables is at most 1 less than the sample size, n (preferably n is far greater than the number of independent variables)
- We have the same number of observations for each variable - a missing value for any variable in the regression removes that case from the analysis

**Assumptions About the Error Term**

We noted in Module 4 that the error term in regression provides an estimate of the standard error of the model and helps in making inferences, including testing the regression coefficients. To properly use regression analysis, there are a number of criteria about the error term in the model that we must be able to reasonably assume are true. If these assumptions are not reasonable in our model, the results may be biased or may no longer have minimum variance. Assumptions about the error term in regression are very important for statistical inference - making statements from a sample to a larger population. The following are some of these assumptions.

**The mean of the probability distribution of the error term is zero.**
This is true by design of our Least Squares estimator, but it also reflects the notion that we don't expect the error terms to be mostly positive or mostly negative (over- or under-estimating the regression line), but centered around the regression line.

**The probability distribution of the error has constant variance, σ².** This implies that we assume a constant variance for Y across all levels of the independent variables. This is called homoscedasticity, and it enables us to pool information from all the data to make a single estimate of the variance. Data that do not show constant error variance exhibit heteroscedasticity, which must be corrected by a data transformation or Weighted Least Squares.

**The probability distribution of the error term is a normal distribution.** This assumption follows from the statistical theory of the sampling distribution of the regression coefficient and is a reasonable assumption as our sample size gets larger and larger. It enables us to make an inference from a sample to a population, much like we did for the mean.

**The error terms are independent of each other and of the independent variables in the model.** This means the error terms are uncorrelated with each other and with any of the independent variables in the model. Correlated error terms sometimes occur in time series data, where the problem is known as auto-correlation. Correlation among the error terms, or between the error terms and the independent variables, usually implies that our model is mis-specified. Another way to view this problem is that there is still pattern left to explain in the data - by including a lagged variable in the time series case, or a nonlinear form in the case of correlation with an independent variable.

## A Multiple Regression Example

The following example uses a data set of apartment building sales in a city in Minnesota. The value of the apartment building (PRICE) is seen as a function of:

1. The number of apartments in the building (NUMBER)
2. The age of the apartment building (AGE)
3. The lot size that the building is on (LOTSIZE)
4. The number of parking spaces (PARK)
5. The total area in square feet (AREA)

The local real estate commission collected a random sample of 25 apartment buildings to estimate a model of value. The sample size is relatively small, but we will still be able to estimate a multiple regression with five independent variables.

The descriptive statistics for the variables in the model are given in Table 1. The average sale price of the apartments was \$290,574, but there is considerable variation in the value of apartments. The coefficient of variation is 73%, indicating substantial variation. We want to see if the high variability in PRICE is a function of the independent variables. The other variables also show a lot of variability, and in most cases the mean is larger than the median, indicating outliers and skew in the data. The exception is AGE, where one low value pulls the mean below the median.

Table 1. Descriptive Statistics for the MN Apartment Sales Data (mean, standard error, median, mode, standard deviation, sample variance, kurtosis, skewness, range, minimum, maximum, sum, and count for PRICE, NUMBER, AGE, LOTSIZE, PARK, and AREA; numeric values not shown)

Table 2. The Correlation Matrix for the MN Apartment Sales Data (PRICE, NUMBER, AGE, LOTSIZE, PARK, AREA; numeric values not shown)

The correlation matrix in Table 2 provides some insight into which of the independent variables are related to PRICE. The highest correlations with PRICE are for AREA (.968) and NUMBER (.923). Both correlations are very high and positive - as the total area increases, or the number of apartments increases, so does the price. LOTSIZE is also positively correlated with PRICE, at .743. On the other hand, the number of parking spaces is only correlated with PRICE at .225, and the age of the building, while negatively correlated as might be expected, shows only a weak correlation.

A useful strategy in any analysis is to start simple - i.e., with bivariate relationships - and then move to more sophisticated multivariate models. This helps you see how relationships can change once we control for other variables in the model.

We will focus on two bivariate relationships before estimating a multiple regression model: PRICE and AREA, and PRICE and AGE. We will look at the scatter plot for each one and the bivariate regression. Figure 1 shows the scatter plot for PRICE and AREA. The relationship is strong, positive, and linear. The plot also shows the regression line and R Square for a bivariate regression. The model says that every square foot of space is roughly worth \$20 toward the price of the apartment building. The fit of the model, with only one variable, is very good, with 94 percent of the variability of PRICE explained by knowing the area of the apartments. Clearly, area is an important variable for understanding the price of the apartment building.

Figure 1. Scatter Plot of PRICE Versus AREA for MN Apartments Data, with Fitted Regression Line and R Square

Let's look at one other bivariate relationship - between PRICE and AGE. The correlation is -.114, indicating that as the age of the building increases, the price decreases. However, the relationship is very weak, and the scatter plot in Figure 2 confirms this.

Figure 2. Scatter Plot of PRICE Versus AGE for MN Apartments Data

If we look at the hypothesis test for the coefficient for AGE (full Excel output not shown), we find the coefficient, standard error, t-statistic, and p-value for the regression of PRICE on AGE. The model estimates that for every year of the building's age, the price decreases by \$935. However, the t-statistic is only -.553, with a p-value of .586. While the model shows a negative relationship, based on a hypothesis test for a sample I can't be sure the real value is any different from zero. The R Square of .01 confirms that there is little going on in the data. The steps of a formal hypothesis test are given below.

- Null Hypothesis: H0: β_AGE = 0
- Alternative: Ha: β_AGE ≠ 0 (two-tailed test)
- Test Statistic: t* = -.55
- p-value: p = .59
- Conclusion: Cannot reject H0: β_AGE = 0

In most cases we don't carry out the formal steps of a hypothesis test for a regression coefficient - we simply look for a t-value greater than 2 in absolute value, or a p-value less than .05, in order to conclude the coefficient is different from zero.
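The t-statistic is simply the coefficient divided by its standard error. A short sketch: the standard error of 1,691 below is a hypothetical value chosen to reproduce the t-statistic reported in the text (it is not shown in the module's output), and the p-value uses a normal approximation rather than the exact t distribution:

```python
from math import erfc, sqrt

coef = -935.0    # estimated slope for AGE, from the module
se = 1691.0      # standard error (hypothetical; not shown in the text)

t_stat = coef / se
# Two-tailed p-value via a normal approximation to the t distribution
p_value = erfc(abs(t_stat) / sqrt(2))

print(round(t_stat, 2))  # → -0.55
```

With so small a t-statistic, the p-value comes out near .58, matching the conclusion that we cannot reject the null hypothesis.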

Now let's shift to a multiple regression and see what changes (see Table 3). We will include five independent variables in the model - NUMBER, AGE, LOTSIZE, PARK, and AREA. The R Square for the model is now .98, and the Adjusted R Square is close, at .975. One way to tell if the fit has really improved over a simpler model (say, the model with only AREA) is to see what percent of the variability yet to be explained is accounted for by the new model. The model with AREA alone had an R Square of .9372. The new model has an R Square of .98. The increase is only .0428, but this is 68 percent of the remaining .0628 left to explain. This is a significant improvement in the fit.

The overall F-test for the model (based on a null hypothesis that all the coefficients for the independent variables are zero) has a p-value of .00. This means that something useful is going on in our model, and at least one of the independent variables has a coefficient different from zero. We can now turn to the individual coefficients for our variables to see the direction of the relationship (the sign of the coefficient), the magnitude of the relationship (the size of the coefficient), and which ones show a significant difference from zero (the p-value).

Table 3. Regression Output of PRICE on Five Independent Variables (Multiple R, R Square, Adjusted R Square, Standard Error, ANOVA table, and coefficients with standard errors, t-statistics, and p-values for the Intercept, NUMBER, AGE, LOTSIZE, PARKING, and AREA; numeric values not shown; n = 25)
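The improvement-in-fit calculation can be written out directly. The .98 and the .0428 increase are from the text; together they imply an R Square of .9372 for the AREA-only model:

```python
r2_simple = 0.9372   # R Square with AREA alone (implied by the text)
r2_full = 0.98       # R Square with all five independent variables

increase = r2_full - r2_simple   # about .0428
remaining = 1 - r2_simple        # about .0628 left to explain
share = increase / remaining     # share of the remaining variability explained

print(round(share, 2))  # → 0.68
```

This "percent of what was left" view is often more informative than the raw increase in R Square, which shrinks as the simple model's fit gets better.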

As might be expected, only AGE shows a negative relationship with PRICE - as the apartment building gets older, the PRICE decreases. The other signs are positive, as might be expected; more of each of the independent variables should increase the price. If I focus on the p-values and use a criterion of .05 as a significance level, I can see that the coefficients for LOTSIZE and PARK are not significantly different from zero. In other words, I do not have information from this sample to safely conclude that the coefficients for these variables are any different from zero. I should note that each test is based on holding constant the other independent variables in the model.

The predictive equation from our model is

Ŷ = \$92,788 + \$4,140(NUMBER) - \$853(AGE) + \$1(LOTSIZE) + \$2,696(PARK) + \$16(AREA)

Solving the regression equation for Y given a set of values for the independent variables allows us to make a prediction from our model. To find the predicted value of an apartment building with 20 apartments, 50 years old, 2,000 sq ft of lot size, 20 parking spaces, and 22,000 sq ft of area, our model says the value is:

Predicted Value = \$92,788 + \$4,140(20) - \$853(50) + \$1(2,000) + \$2,696(20) + \$16(22,000) = \$540,858

Next, let's focus on the coefficients for AREA and AGE and compare them to the bivariate relationships. The coefficient for AGE in the multiple regression model is -853, compared with -935 in the simple regression. However, now our model says that we have evidence that the coefficient for AGE is significantly different from zero (p-value = .01). Once we controlled for the other variables in the model, we can safely say that the age of the apartment building has an influence on its value, and that this relationship is negative. Controlling for the other variables in the model enabled us to make a better estimate of the effect of age on the value of an apartment building.
The coefficient for AREA has also decreased in the multiple regression. Once we control for the other variables in the model (NUMBER, AGE, LOTSIZE, and PARK), a unit change in AREA results in a 15.4 unit change in PRICE (down from 20.4 in the bivariate regression). We still have strong evidence that AREA is positively related to PRICE, but now our model says the effect is smaller once we control for the other variables in the model.
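The predicted value above is just the regression equation evaluated at the chosen inputs, which takes only a few lines (coefficients rounded as in the text):

```python
# Estimated coefficients from the module's predictive equation
intercept = 92_788
coefs = {"NUMBER": 4140, "AGE": -853, "LOTSIZE": 1, "PARK": 2696, "AREA": 16}

# The building we want to value
building = {"NUMBER": 20, "AGE": 50, "LOTSIZE": 2_000, "PARK": 20, "AREA": 22_000}

predicted = intercept + sum(coefs[v] * building[v] for v in coefs)
print(predicted)  # → 540858, matching the $540,858 in the text
```

Any combination of inputs within the range of the sample data can be plugged in the same way; extrapolating far outside that range is riskier.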

## Representing Nonlinear Relationships with Regression

We noted earlier that regression is used to estimate linear relationships and that the regression model is linear in the parameters. However, it can be used to represent nonlinear relationships in the data. There are many nonlinear functions that can be estimated with regression. The following are but two examples, using a polynomial function and a log function.

**Polynomial Regression.** A polynomial expression is an equation, linear in the parameters, that includes an independent variable raised to a power. The equation takes the following general form (where p reflects the order):

$$Y = a + b_1 X + b_2 X^2 + b_3 X^3 + \cdots + b_p X^p$$

A second order model has X and X²; a third order model has X, X², and X³; and so forth. Figures 3 and 4 show a second and third order polynomial, respectively. The second order relationship shows a single point where the relationship changes direction. In the example in Figure 3, the function increases to a point and then the negative squared term begins to turn the relationship downward. Using calculus, we can solve for this turning point. In Figure 4 we have a third order polynomial, so there are two such points.

Figure 3. Second Order Polynomial Relationship

All lower order terms must be in the equation in order for the polynomial function to be represented, but the statistical test should focus on the highest order term in the model. In essence, the test is whether the highest order term in the model is significantly different from zero. This tells us if the term is needed in the model, and thus whether the polynomial fits the data.
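The turning point of a second order polynomial comes from setting the derivative to zero. A sketch with hypothetical coefficients (the actual coefficients behind Figure 3 are not reproduced here):

```python
# For y = b0 + b1*x + b2*x**2, dy/dx = b1 + 2*b2*x = 0 at x = -b1 / (2*b2)
b1, b2 = 2.0, -0.25   # hypothetical: positive linear term, negative squared term

turning_point = -b1 / (2 * b2)
print(turning_point)  # → 4.0
```

A negative b2 means the curve opens downward, so this point is a maximum - the shape described for Figure 3.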

Figure 4. Third Order Polynomial Relationship

A polynomial relationship with one of the independent variables in the model can also be fitted while including additional variables in the model. For example, the relationship between age and income can be thought of as a polynomial - as age increases, income does as well, but at some point the relationship diminishes and may even turn down as people reach retirement age. We could fit a model with AGE and AGE² while also including other variables that influence income, such as gender and education.

Let's look at an example of a polynomial relationship. The following example uses sales and inventories in U.S. manufacturing from 1954 to 1999. The data values are in millions of dollars. The graph of the relationship between inventory and sales is given in Figure 5. The relationship looks linear, and the fitted line and R Square show a very good fit (R² = .976). However, there does appear to be a slight curve to the relationship. If you look closely, the points at the ends tend to be above the regression line while the points in the middle tend to be below it. This is an indication of a nonlinear relationship.

Figure 5. Scatter Plot of U.S. Manufacturing Sales Versus Inventory, 1954 to 1999, with a Linear Trendline (slope = 0.683, R² = .976)

One way to see if there is a curve (besides the scatter plot) is to plot the residuals from the linear regression. The best plot is the standardized residuals (with a mean of zero and a standard deviation of 1) against the predicted values of Y. This plot should look like a random scatter of points, showing no tendencies or pattern. Figure 6 shows the residual plot (Excel can help create this), and there does appear to be a pattern: values in the middle tend to have negative residuals and values at the upper end tend to have positive residuals. This is a sign of a nonlinear relationship.

We can determine if a higher order polynomial relationship exists by:

- examining a scatter plot to see if we can observe a curve
- looking for a pattern in a plot of the standardized residuals versus the predicted Y
- running a regression model with the higher order term and testing to see if it is significantly different from zero

Figure 6. Scatter Plot of Standardized Residuals and Predicted Y, Showing a Non-Random Pattern

To test whether there is a nonlinear relationship, I calculated a squared inventory variable and included it in the model along with inventory. The regression output is given in Table 4. We will focus on two key things - the R Square and the significance test for the squared inventory term. The R Square for the model increased from .976 to .991, a near perfect fit. This represents 63 percent of what was left to explain from the first order model, a significant increase. The coefficient for INVENT SQ is very small, but this is not unusual for a squared term in the model. We are squaring very large numbers and the coefficient reflects this. The key issue is whether the coefficient for INVENT SQ is significantly different from zero. This requires a t-test, or a check on the p-value.
The formal hypothesis test is:

- Null Hypothesis: H0: β_ISQ = 0
- Alternative: Ha: β_ISQ ≠ 0 (two-tailed test)
- Test Statistic: t* (from the regression output)
- p-value: p = .000, or p < .001
- Conclusion: Reject H0: β_ISQ = 0

Table 4. Regression Output for Polynomial Regression of SALES on INVENT and INVENT SQ (Multiple R, R Square, Adjusted R Square, Standard Error, ANOVA table, and coefficients with standard errors, t-statistics, and p-values for the Intercept, INVENT, and INVENT SQ; numeric values not shown; n = 46)

Based on our test we can conclude that the INVENT SQ term is significantly different from zero and thus is important in the model. This validates our use of the second order polynomial model to improve our fit, and the result is in agreement with the increase in R Square. One final point should be emphasized. Once we include a higher order term in the model and find it significant, we rarely focus on the test for the lower order term(s). The lower order terms must be included in the model to make sense of the polynomial relationship.

**Logarithm Transformation - The Multiplicative or Constant Elasticity Model.** One transformation for a multiplicative or constant elasticity model is the log transformation. The multiplicative model has the form:

$$Y = a X_1^{b_1} X_2^{b_2} \cdots X_k^{b_k}$$

Taking the natural log of each variable in this type of function results in a linear transformation of the form:

$$\ln(Y) = \ln(a) + b_1 \ln(X_1) + b_2 \ln(X_2) + \cdots + b_k \ln(X_k)$$

Taking logs of the variables can transform a nonlinear relationship into one that is linear in the parameters - but the data are transformed and are now in log units.

This is also called the constant elasticity model because we interpret the coefficients as the percentage change in Y for a 1 percent change in X. The definition of an elasticity is the percentage change in Y with a one percent change in an explanatory variable, denoted as X. In most cases

the elasticity is dependent on the level of X and will change over the range of X. However, in this type of model the elasticity is constant and does not change over the range of X - it is constant at the level of the estimated coefficient. You would only use this model if you believe the elasticity is a constant.

We will look at one example of a multiplicative model. The data for this example are operating costs at 10 branches of an insurance company (COSTS) as a function of the number of policies for home insurance (HOME) and automobile insurance (AUTO). We start with the belief that operating costs increase by a constant percentage in relation to percentage increases in the respective policies. This type of model requires us to transform all the variables by taking the natural logarithm - the function =LN(value) in Excel. In order to take the log of a value, it must be greater than zero (it can't be zero or negative). These constraints are not issues in this data.

Once I convert the data, I use the new variables in a regression using Excel. The output is given in Table 5.

Table 5. Regression Output for Multiplicative Model for Insurance Costs (Multiple R, R Square, Adjusted R Square, Standard Error, ANOVA table, and coefficients with standard errors, t-statistics, and p-values for the Intercept, LNHOME, and LNAUTO; numeric values not shown; n = 10)

The overall fit of the model is very good, with an R Square of .998. However, we only have 10 observations, and R Square is sensitive to the number of observations relative to the number of independent variables. The overall F-test is highly significant, as are both of the independent variables in the model. All the p-values are < .001, so we can safely conclude that the

We can safely conclude that the coefficients are significantly different from 0. The interpretation of the coefficients is now slightly different, to reflect the constant elasticity. In this case we say that when home owner policies increase by 1 percent, operating costs increase by .583 percent, and a one percent increase in auto policies leads to a .409 percent increase in operating costs.

It is difficult to validate the logarithm transformation for the multiplicative model. While the fit is very good, a similar model of the original data also yielded an excellent fit (R Square = .997, data not shown). In the end it comes down to whether you believe the assumption of constant elasticities truly fits your situation. That is as much an issue of your intuition and experience (or, more formally, of economic theory) as of comparing models. Be careful making model comparisons when you transform the data - the transformed data cannot always be directly compared to the original form of the data. In fact, once you transform the data, you cannot directly compare the logarithmic model to a model of untransformed data - it becomes a comparison of apples and oranges. However, you can take the anti-log of a predicted value from your model to transform the result back to real terms.

Example of a Multiple Regression with Trend and Dummy Variables. Data over time often show cyclical or seasonal trends. These trends can be modeled by the use of dummy variables. For example, four quarters can be represented by three dummy variables, coded zero and one to reflect which quarter is being represented. The following is an example of a multiple regression using a linear trend and seasonal dummy variables. Special emphasis in this example will be given to interpreting the dummy variables when another variable (TREND) is in the model.
The example shows quarterly data for Coca-Cola sales (given in millions of dollars) from 1986 through 2001. This example was taken directly from Data Analysis for Managers, written by Albright, Winston, and Zappe (Brooks/Cole, 2004). A plot of the data (Figure 7) shows a strong linear trend, but there are regular fluctuations that reflect seasonal variation. A closer look shows that sales of Coca-Cola are higher in the second and third quarters, when the weather is hotter. The linear trend is represented on the graph, and R Square for the trend model is relatively high (.879). However, we are going to see if we can improve the model by including seasonal dummy variables.

Figure 7. Coca-Cola Sales Showing a Linear Trend and Seasonal Variations (quarterly sales in millions of dollars, 1986 to 2001, with the fitted trend line). Seasonal variations in data over time can be modeled using dummy variables to represent the quarters.

Since there are four quarters in each year (k = 4), we need to represent quarters with three dummy variables (k - 1 = 3). It is somewhat arbitrary which quarter we leave out of the model to serve as the reference category. Since I calculated the dummy variables in order, it was easiest to leave the fourth quarter out of the model. As a result, Q4 is the reference category. The regression analysis is given in Table 6 below.

Table 6. Regression of Coca-Cola Sales on Trend and Quarters (n = 61; regression statistics, ANOVA table, and coefficient estimates with standard errors, t statistics, and p-values for the Intercept, Trend, Q1, Q2, and Q3).

We can note that R Square for the model increased to .941, reflecting that the inclusion of the quarter dummy variables has improved the fit of the model over the simple trend

analysis. If we look at the coefficients we can see that there is still a significant trend in the model (p < .001). All of the quarter dummy variables are significant at the .05 level, indicating that sales in Q1, Q2, and Q3 are all significantly different, on average, from Q4 (the reference category). Specifically, we can see that Q1 has lower sales than Q4 because the coefficient for Q1 is negative; sales in Q1 tend to fall below Q4 sales by the size of that coefficient (in millions of dollars), holding the time period constant. In contrast, sales in Q2 and Q3 are significantly higher than in Q4. If we think about it, these results make sense: sales of soft drinks should be related to the time of year, with higher average sales in the warmer quarters.

As you can see from this model, the interpretation of dummy variables is much the same in a multiple regression as it is in simple regression, except that the regression estimates of the coefficients take into account the other independent variables in the model. With dummy variables, we need to interpret the coefficients in relation to the reference category. Two keys in assessing the impact of including dummy variables in a multiple regression are whether the coefficients are significantly different from zero, and whether their inclusion in the model increases R Square in a meaningful way. The tests of the coefficients tell us if the mean of one category is significantly different from the reference category, and the overall F-test or the increase in R Square tells us if the overall variable (in this case quarters, represented by three dummy variables) is important in the model. In this example, including quarters in the model through dummy variables increased R Square from .879 to .941. This is an increase in R Square of .062, which represents over 51 percent of the amount left to explain after the first model (.512 = .062/[1 - .879]).
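The dummy coding and the incremental R Square arithmetic above can be sketched in a few lines (the R Square values are the ones reported in the text; the sales data themselves are not reproduced here):

```python
# Sketch: coding k = 4 quarters as k - 1 = 3 dummy variables, with Q4 as
# the reference category, plus the incremental R Square arithmetic.

def quarter_dummies(quarter):
    """Return (Q1, Q2, Q3) dummies; Q4 is the reference (all zeros)."""
    return (int(quarter == 1), int(quarter == 2), int(quarter == 3))

# One year of quarterly observations
rows = [quarter_dummies(q) for q in (1, 2, 3, 4)]
print(rows)   # [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]

# Share of the previously unexplained variance picked up by the dummies
r2_trend, r2_full = 0.879, 0.941           # from the two fitted models
share = (r2_full - r2_trend) / (1 - r2_trend)
print(round(share, 3))   # 0.512, i.e. about 51 percent
```

The same coding scheme generalizes to any categorical variable: a variable with k categories enters the model as k - 1 zero/one columns, and each coefficient is read against the omitted category.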
Clearly, adding a seasonal component into our model through dummy variables has helped explain an additional source of variability in sales. This will aid in making forecasts into the future.
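A forecast from this kind of model plugs a future time period and its quarter dummies into the fitted equation. The sketch below uses hypothetical coefficient values (the actual Table 6 estimates are not reproduced here), chosen only to match the pattern described in the text: a positive trend, Q1 below the Q4 reference, and Q2 and Q3 above it.

```python
# Sketch of forecasting with a trend-plus-seasonal-dummies model. The
# coefficients below are hypothetical placeholders, not the fitted values;
# the structure matches the model in the text:
#   sales = b0 + b_trend * t + b1*Q1 + b2*Q2 + b3*Q3  (millions of dollars)
b0, b_trend = 900.0, 25.0                     # hypothetical intercept, trend
b_q = {1: -80.0, 2: 60.0, 3: 45.0, 4: 0.0}    # Q4 is the reference category

def forecast(t, quarter):
    """Predicted sales (millions of dollars) for time period t."""
    return b0 + b_trend * t + b_q[quarter]

# Forecast the four quarters following the last observation (t = 61)
for t, q in [(62, 2), (63, 3), (64, 4), (65, 1)]:
    print(t, q, forecast(t, q))
```

Note how the seasonal pattern repeats on top of the rising trend: the Q1 forecast dips below the surrounding quarters even though the trend keeps climbing.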

CONCLUSIONS

Multiple regression involves having more than one independent variable in the model. This allows us to estimate more sophisticated models, with many explanatory variables influencing a single dependent variable. Multiple regression also allows us to estimate the relationship of each independent variable to the dependent variable while controlling for the effects of the other independent variables in the model. This is a very powerful feature of regression because we can estimate the unique effect of each variable. Multiple regression still uses the property of Least Squares to estimate the regression function, but it does this while taking into account the covariances among the independent variables, through the use of matrix algebra. Multiple regression requires that the dependent variable be measured on a continuous level, but it allows great flexibility for the independent variables: they can be measured as continuous, ordinal, or categorical (represented by dummy variables). There are other requirements in multiple regression - an equal number of observations for each variable in the model, limits on the number of independent variables in the model, independent variables that are not too highly correlated with each other, and assumptions about the error terms. Overall, regression is fairly robust, and violations of some of the assumptions about the error terms may cause only minor problems or can be managed by transformations of the data or by modeling strategies. These issues are advanced topics in regression.
