# MULTIPLE LINEAR REGRESSION ANALYSIS USING MICROSOFT EXCEL

by Michael L. Orlov, Chemistry Department, Oregon State University (1996)



## INTRODUCTION

In modern science, regression analysis is a necessary part of virtually any data reduction process. Popular spreadsheet programs, such as Quattro Pro, Microsoft Excel, and Lotus 1-2-3, provide comprehensive statistical packages which include a regression tool among many others. Usually the regression module is explained clearly enough in the on-line help and spreadsheet documentation (i.e., the items in the regression input dialog box). However, the description of the output is minimal and is often a mystery for a user who is unfamiliar with the underlying statistical concepts.

The objective of this short handout is to give a more detailed description of the regression tool and to touch upon related statistical topics in a hopefully readable manner. It is designed for science undergraduate and graduate students inexperienced in statistical matters. The regression output in Microsoft Excel is fairly standard and is chosen as the basis for illustrations and examples (Quattro Pro and Lotus 1-2-3 use an almost identical format).

## CLASSIFICATION OF REGRESSION MODELS

In a regression analysis we study the relationship, called the regression function, between one variable y, called the dependent variable, and several others x_i, called the independent variables. The regression function also involves a set of unknown parameters b_i. If a regression function is linear in the parameters (but not necessarily in the independent variables!) we term it a linear regression model; otherwise, the model is called non-linear. Linear regression models with more than one independent variable are referred to as multiple linear models, as opposed to simple linear models with one independent variable.

The following notation is used in this work:

- y - dependent variable (predicted by a regression model)
- y* - dependent variable (experimental value)
- p - number of independent variables (number of coefficients)
- x_i (i = 1, 2, ..., p) - ith independent variable from the total set of p variables
- b_i (i = 1, 2, ..., p) - ith coefficient, corresponding to x_i
- b_0 - intercept (or constant)
- k = p + 1 - total number of parameters, including the intercept (constant)
- n - number of observations (experimental data points)
- i = 1, 2, ..., p - independent-variable index
- j = 1, 2, ..., n - data-point index

Now let us illustrate the classification of regression models with mathematical expressions.

Multiple linear model. General formula:

y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_p*x_p    (1)

or

y = b_0 + Σ_i b_i*x_i,  i = 1, 2, ..., p    (1a)

Polynomial (the model is linear in the parameters, but not in the independent variables):

y = b_0 + b_1*x + b_2*x^2 + b_3*x^3 + ... + b_p*x^p,

which is just a specific case of (1) with x_1 = x, x_2 = x^2, x_3 = x^3, ..., x_p = x^p.

Simple linear model:

y = b_0 + b_1*x_1

It is obvious that the simple linear model is just a specific case of the multiple one, with k = 2 (p = 1).

Non-linear model:

y = A(1 - e^(-Bx)), where A and B are parameters.

In further discussion we restrict ourselves to multiple linear regression analysis.

## MAIN OBJECTIVES OF MULTIPLE LINEAR REGRESSION ANALYSIS

Our primary goal is to determine the best set of parameters b_i, such that the model predicts the experimental values of the dependent variable as accurately as possible (i.e., the calculated values y_j should be close to the experimental values y_j*). We also wish to judge whether the model itself is adequate to fit the observed experimental data (i.e., whether we chose the correct mathematical form of it). Finally, we need to check whether all terms in our model are significant (i.e., whether the improvement in goodness of fit due to the addition of a certain term to the model is bigger than the noise in the experimental data).

## DESCRIPTION OF REGRESSION INPUT AND OUTPUT

The standard regression output of spreadsheet programs provides the information needed to reach the objectives raised in the previous section. We now explain how to do that and touch upon related statistical terms and definitions. The following numerical example will be used throughout the handout to illustrate the discussion:

Table 1. Original experimental data

| Data point # (j) | y* | z |
|---|---|---|

*(The numerical values of this table were not preserved in this copy.)*

We choose y* to be the dependent experimental observable and z to be the independent one. Suppose we have, say, theoretical reasons to believe that the relationship between the two is:

y* = b_0 + b_1*z + b_2*z^2 + b_3*z^3    (1b)
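Conceptually, fitting the cubic model (1b) is a linear least-squares problem in the parameters b_0..b_3. The sketch below shows one way this can be done in pure Python via the normal equations; the data points are hypothetical (Table 1's values are not reproduced above), and the function name is my own.

```python
# Sketch: least-squares fit of y = b0 + b1*z + b2*z^2 + b3*z^3
# via the normal equations (X^T X) b = X^T y, solved by Gaussian
# elimination. Data below are hypothetical placeholders.

def fit_poly(z, y, degree):
    """Return least-squares coefficients [b0, b1, ..., b_degree]."""
    n, k = len(z), degree + 1
    # Design matrix: one row per observation, columns 1, z, z^2, ...
    X = [[zj ** i for i in range(k)] for zj in z]
    # Normal equations A b = c, with A = X^T X and c = X^T y
    A = [[sum(X[j][r] * X[j][c] for j in range(n)) for c in range(k)]
         for r in range(k)]
    c = [sum(X[j][r] * y[j] for j in range(n)) for r in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for cc in range(col, k):
                A[r][cc] -= f * A[col][cc]
            c[r] -= f * c[col]
    # Back substitution on the upper-triangular system
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (c[r] - sum(A[r][cc] * b[cc]
                           for cc in range(r + 1, k))) / A[r][r]
    return b

# Hypothetical data generated from y = 2 + z - 0.5*z^2 + 0.1*z^3
z = [0, 1, 2, 3, 4, 5, 6]
y = [2 + zj - 0.5 * zj**2 + 0.1 * zj**3 for zj in z]
b = fit_poly(z, y, 3)
print([round(bi, 6) for bi in b])   # [2.0, 1.0, -0.5, 0.1]
```

Excel's regression tool solves the same minimization internally; the point here is only that "linear in the parameters" makes the cubic fit an ordinary linear least-squares problem.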

Fig. 1. Regression input dialog box

### Residual output

Example. In Microsoft Excel the residual output has the following format:

Table 3. Residual output*

| Observation (j) | Predicted Y (y_j) | Residuals (r_j) | Standard Residuals (r′_j) |
|---|---|---|---|

\* Corresponding notation used in this handout is given in parentheses. *(The numerical values of this table were not preserved in this copy.)*

A residual (or error, or deviation) is the difference between the observed value y_j* of the dependent variable for the jth experimental data point (x_1j, x_2j, ..., x_pj, y_j*) and the

corresponding value y_j given by the regression function y_j = b_0 + b_1*x_1j + b_2*x_2j + ... + b_p*x_pj (y_j = b_0 + b_1*x_1j + b_2*x_2j + b_3*x_3j in our example):

r_j = y_j* - y_j    (2)

The parameters b (b_0, b_1, b_2, ..., b_p) are part of the ANOVA output (discussed later). If there is an obvious correlation between the residuals and the independent variable x (say, the residuals systematically increase with increasing x), it means that the chosen model is not adequate to fit the experiment (e.g., we may need to add an extra term x_4 = z^4 to our model (1b)). A plot of the residuals is very helpful in detecting such a correlation. This plot will be included in the regression output if the box "Residual Plots" was checked in the regression input dialog window (Fig. 1).

Example. [Figure: "X Variable 1 Residual Plot" - residuals r_j plotted against x_1; the plotted points were not preserved in this copy.]

However, the fact that the residuals look random and show no obvious correlation with the variable x does not by itself mean that the model is adequate. More tests are needed.

A standard (or standardized) residual is a residual scaled with respect to the standard error (deviation) S_y in the dependent variable:

r′_j = r_j / S_y    (2a)

The quantity S_y is part of the Regression statistics output (discussed later). Standardized residuals are used for some statistical tests, which are usually not needed for models in the physical sciences.
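Formulas (2) and (2a) amount to a per-observation subtraction and a rescaling. A minimal sketch, with hypothetical observed values, predictions, and standard error:

```python
# Residuals (2) and standardized residuals (2a).
# y_obs are experimental values y*, y_pred the model predictions;
# s_y is the standard error S_y. All numbers are hypothetical.
y_obs  = [2.1, 3.9, 6.2, 7.8]
y_pred = [2.0, 4.0, 6.0, 8.0]
s_y = 0.2

residuals = [yo - yp for yo, yp in zip(y_obs, y_pred)]   # r_j = y_j* - y_j
std_residuals = [r / s_y for r in residuals]             # r'_j = r_j / S_y
print([round(r, 6) for r in residuals])                  # [0.1, -0.1, 0.2, -0.2]
```

Plotting `residuals` against the independent variable is the programmatic analogue of Excel's "Residual Plots" option.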

### ANOVA output

There are two tables in the ANOVA (Analysis of Variance) output.

Example

Table 4. ANOVA output (part I)*

| | df | SS | MS | F | Significance F |
|---|---|---|---|---|---|
| Regression | 3 (df_R) | (SS_R) | (MS_R) | (F_R) | 6.12E-09 (P_R) |
| Residual (error) | 3 (df_E) | 1.70 (SS_E) | 0.57 (MS_E) | | |
| Total | 6 (df_T) | (SS_T) | N/A (MS_T) | | |

Table 4a. ANOVA output (part II)*

| | Coefficients (b_i) | Standard Error (se(b_i)) | t Stat (t_i) | P-value (P_i) | Lower 95% (b_L,(1-Pi)) | Upper 95% (b_U,(1-Pi)) |
|---|---|---|---|---|---|---|
| Intercept (b_0) | | | | | | |
| X Variable 1 (x_1) | | | | | | |
| X Variable 2 (x_2) | | | | | | |
| X Variable 3 (x_3) | | | | | | |

\* Corresponding notation used in this handout is given in parentheses; N/A means not available in the Microsoft Excel regression output. *(Most numerical values of these tables were not preserved in this copy.)*

Coefficients. The regression program determines the best set of parameters b (b_0, b_1, b_2, ..., b_p) in the model y_j = b_0 + b_1*x_1j + b_2*x_2j + ... + b_p*x_pj by minimizing the error sum of squares SS_E (discussed later). The coefficients are listed in the second ANOVA table (see Table 4a). These coefficients allow the program to calculate the predicted values of the dependent variable y (y_1, y_2, ..., y_n), which were used above in formula (2) and are part of the Residual output (Table 3).

Sum of squares. In general, the sum of squares of some arbitrary variable q is determined as:

SS_q = Σ_j (q_j - q_avg)^2,  j = 1, 2, ..., n    (3)

where q_j is the jth observation out of n total observations of the quantity q, and q_avg is the average value of q over the n observations: q_avg = (Σ_j q_j)/n.

In the ANOVA regression output one will find three types of sums of squares (see Table 4):

1) Total sum of squares SS_T:

SS_T = Σ_j (y_j* - y*_avg)^2,  where y*_avg = (Σ_j y_j*)/n    (3a)

It is obvious that SS_T is the sum of squared deviations of the experimental values of the dependent variable y* from its average value. SS_T can be interpreted as the sum of squared deviations of y* from the simplest possible model (y is constant and does not depend on any variable x):

y = b_0, with b_0 = y*_avg    (4)

SS_T has two contributors: the residual (error) sum of squares SS_E and the regression sum of squares SS_R:

SS_T = SS_E + SS_R    (5)

2) Residual (or error) sum of squares SS_E:

SS_E = Σ_j (r_j - r_avg)^2    (6)

Since in the underlying theory the expected value r_avg of the residuals is assumed to be zero, expression (6) simplifies to:

SS_E = Σ_j (r_j)^2    (6a)

The significance of this quantity is that by minimizing SS_E the spreadsheet regression tool determines the best set of parameters b = (b_0, b_1, b_2, ..., b_p) for a given regression model. SS_E can also be viewed as the contribution to the total sum of squares SS_T due to random scattering of y* about the predicted line. This is the reason for calling the quantity the "due to error" (residual) sum of squares.

3) Regression sum of squares SS_R:

SS_R = Σ_j (y_j - y*_avg)^2    (7)

SS_R is the sum of squared deviations of the values of the dependent variable y predicted by the regression model from its average experimental value y*_avg. It accounts for the addition of the p variables (x_1, x_2, ..., x_p) to the simplest possible model (4) (in which the variable y is just a constant and does not depend on the variables x), i.e. y = b_0 vs. y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_p*x_p. Since this is a transformation from the non-regression model (4) to the true regression model (1), SS_R is called the "due to regression" sum of squares.
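The decomposition (3a), (6a), (7) and the identity (5) can be checked numerically. The sketch below uses a simple linear fit with hypothetical data; the identity SS_T = SS_E + SS_R holds exactly (up to rounding) for any least-squares fit that includes an intercept.

```python
# Sums of squares (3a), (6a), (7) and the identity SS_T = SS_E + SS_R (5),
# demonstrated on a simple linear least-squares fit. Data are hypothetical.
x = [1, 2, 3, 4, 5]
y_obs = [1.0, 2.2, 2.9, 4.1, 5.0]
n = len(x)
xm = sum(x) / n
ym = sum(y_obs) / n                       # y*_avg

# Closed-form simple-linear-regression coefficients
b1 = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y_obs))
      / sum((xi - xm) ** 2 for xi in x))
b0 = ym - b1 * xm
y_pred = [b0 + b1 * xi for xi in x]

ss_t = sum((yo - ym) ** 2 for yo in y_obs)                    # (3a)
ss_e = sum((yo - yp) ** 2 for yo, yp in zip(y_obs, y_pred))   # (6a)
ss_r = sum((yp - ym) ** 2 for yp in y_pred)                   # (7)
print(round(ss_t, 6), round(ss_e, 6), round(ss_r, 6))
assert abs(ss_t - (ss_e + ss_r)) < 1e-9                       # (5)
```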

The definition of SS_R in the form (7) is not always given in the literature. One can find different expressions in books on statistics [1, 2]:

SS_R = Σ_i b_i Σ_j (x_ij - x_i,avg) y_j*,  where x_i,avg = (Σ_j x_ij)/n    (7a)

or

SS_R = Σ_i b_i Σ_j x_ij y_j* - (Σ_j y_j*)^2 / n    (7b)

(with i = 1, 2, ..., p and j = 1, 2, ..., n). Relationships (7a-b) give the same numerical result; however, it is difficult to see the physical meaning of SS_R from them.

### Mean square (variance) and degrees of freedom

The general expression for the mean square of an arbitrary quantity q is:

MS_q = SS_q / df    (8)

SS_q is defined by (3), and df is the number of degrees of freedom associated with the quantity SS_q. MS is also often referred to as the variance. The number of degrees of freedom can be viewed as the difference between the number of observations n and the number of constraints (fixed parameters associated with the corresponding sum of squares SS_q).

1) Total mean square MS_T (total variance):

MS_T = SS_T / (n - 1)    (9)

SS_T is associated with the model (4), which has only one constraint (the parameter b_0); therefore the number of degrees of freedom in this case is:

df_T = n - 1    (10)

2) Residual (error) mean square MS_E (error variance):

MS_E = SS_E / (n - k)    (11)

SS_E is associated with the random error around the regression model (1), which has k = p + 1 parameters (one per each of the p variables, plus the intercept). This means there are k constraints, and the number of degrees of freedom is:

df_E = n - k    (12)
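For the handout's worked example (n = 7 observations, p = 3 variables, so k = 4), expressions (8)-(12) reproduce the df and MS values shown in Table 4:

```python
# Degrees of freedom (10), (12) and the error mean square (11)
# for the handout's example: n = 7, p = 3, k = p + 1 = 4.
n, p = 7, 3
k = p + 1
df_t = n - 1          # (10) -> 6, as in Table 4
df_e = n - k          # (12) -> 3, as in Table 4
ss_e = 1.70           # SS_E from Table 4
ms_e = ss_e / df_e    # (11) -> ~0.57, matching MS_E in Table 4
print(df_t, df_e, round(ms_e, 2))
```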

3) Regression mean square MS_R (regression variance):

MS_R = SS_R / (k - 1)    (13)

The number of degrees of freedom in this case can be viewed as the difference between the total number of degrees of freedom df_T (10) and the number of degrees of freedom for the residuals df_E (12):

df_R = df_T - df_E = (n - 1) - (n - k)

df_R = k - 1 = p    (14)

### Tests of significance and F-numbers

The F-number is a quantity which can be used to test for the statistical difference between two variances. For example, if we have two random variables q and v, the corresponding F-number is:

F_qv = MS_q / MS_v    (15)

The variances MS_q and MS_v are defined by an expression of type (8). In order to tell whether two variances are statistically different, we determine the corresponding probability P from the F-distribution function:

P = P(F_qv, df_q, df_v)    (16)

The quantities df_q and df_v (the degrees of freedom for the numerator and denominator) are parameters of this function. Tabulated numerical values of P for the F-distribution can be found in various texts on statistics, or simply determined in a spreadsheet directly by using the corresponding statistical function (e.g., in Microsoft Excel one would use FDIST(F_qv, df_q, df_v) to return the numerical value of P). An interested reader can find the analytical form of P = P(F_qv, df_q, df_v) in the literature (e.g., [1, p. 383]).

The probability P given by (16) is the probability that the variances MS_q and MS_v are statistically indistinguishable. Conversely, 1 - P is the probability that they are different, and is often called the confidence level. Conventionally, a reasonable confidence level is 0.95 or higher. If it turns out that 1 - P < 0.95, we say that MS_q and MS_v are statistically the same. If 1 - P > 0.95, we say that, with at least 0.95 (or 95%) confidence, MS_q and MS_v are different. The higher the confidence level, the more reliable our conclusion. The procedure just described is called the F-test.
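Excel's FDIST is of course the practical route. Purely to illustrate what (16) means, the sketch below approximates the upper-tail probability P(X > F) of the F-distribution by numerically integrating its density (the function name and step count are my own, not anything from the handout):

```python
# A sketch of what FDIST(F, df1, df2) computes: the upper-tail
# probability P(X > F) for the F-distribution, approximated here by
# Simpson-rule integration of the density in pure Python.
import math

def f_dist_upper_tail(f, d1, d2, steps=100_000):
    """Approximate P(X > f) for X ~ F(d1, d2); steps must be even."""
    const = (math.gamma((d1 + d2) / 2)
             / (math.gamma(d1 / 2) * math.gamma(d2 / 2))
             * (d1 / d2) ** (d1 / 2))

    def pdf(x):
        if x <= 0:
            return 0.0
        return const * x ** (d1 / 2 - 1) * (1 + d1 * x / d2) ** (-(d1 + d2) / 2)

    # Simpson's rule over [0, f] gives the CDF; the tail is 1 - CDF.
    h = f / steps
    s = pdf(0) + pdf(f)
    for i in range(1, steps):
        s += pdf(i * h) * (4 if i % 2 else 2)
    return 1 - s * h / 3

# Sanity check: with equal degrees of freedom, X and 1/X have the
# same distribution, so P(X > 1) should be 0.5.
print(round(f_dist_upper_tail(1.0, 3, 3), 3))   # 0.5
```

For real work one would rely on FDIST (or a statistics library) rather than this approximation; the sketch only makes explicit that "Significance F" in Table 4 is a tail area under the F density.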
There are several F-tests related to regression analysis. We will discuss the three most common ones. They deal with the significance of parameters in the regression model. The first

and the last of them are performed by the spreadsheet regression tool automatically, whereas the second one is not.

1) Significance test of all coefficients in the regression model

In this case we ask ourselves: "With what level of confidence can we state that AT LEAST ONE of the coefficients b (b_1, b_2, ..., b_p) in the regression model is significantly different from zero?" The first step is to calculate the F-number for the whole regression (part of the regression output; see Table 4):

F_R = MS_R / MS_E    (17)

The second step is to determine the numerical value of the corresponding probability P_R (also part of the regression output; see Table 4):

P_R = FDIST(F_R, df_R, df_E)    (18)

Taking into account expressions (12) and (14), we obtain:

P_R = FDIST(F_R, k - 1, n - k)    (18a)

Finally, we can determine the confidence level 1 - P_R. At this level of confidence, the variance due to regression, MS_R, is statistically different from the variance due to error, MS_E. In its turn, this means that the addition of the p variables (x_1, x_2, ..., x_p) to the simplest model (4) (in which the dependent variable y is just a constant) is a statistically significant improvement of the fit. Thus, at a confidence level of not less than 1 - P_R we can say: at least ONE of the coefficients in the model is significant. F_R can also be used to compare two models describing the same experimental data: the higher F_R, the more adequate the corresponding model.

Example. In our illustrative exercise we have P_R = 6.12E-09 (Table 4); the corresponding level of confidence is 1 - P_R ≈ 0.999999994. Therefore, with confidence close to 100%, we can say that at least one of the coefficients b_1, b_2 and b_3 is significant for the model y = b_0 + b_1*x_1 + b_2*x_2 + b_3*x_3, where x_1 = z, x_2 = z^2 and x_3 = z^3.

NOTE: From this test, however, we cannot be sure that ALL of the coefficients b_1, b_2 and b_3 are non-zero.
If 1 - P_R is not large enough (usually, less than 0.95), we conclude that ALL the coefficients in the regression model are zero (in other words, the hypothesis that the variable y is just a constant is better than the hypothesis that it is a function of the variables x (x_1, x_2, ..., x_p)).

2) Significance test of a subset of coefficients in the regression model

Now we want to decide: "With what level of confidence can we be sure that at least ONE of the coefficients in a selected subset of all the coefficients is significant?" Let us test a subset consisting of the last m coefficients in a model with a total of p coefficients (b_1, b_2, ..., b_p). Here we need to consider two models:

y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_p*x_p  (unrestricted)    (19)

and

y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_(p-m)*x_(p-m)  (restricted)    (20)

These models are called unrestricted (19) and restricted (20), respectively. We need to perform two separate least-squares regression analyses, one for each model. From the regression output (see Table 4) for each model we obtain the corresponding error sums of squares, SS_E (unrestricted) and SS_E′ (restricted), as well as the variance MS_E for the unrestricted model. The next step is to calculate the F-number for testing the subset of m variables "by hand" (it is not part of the Microsoft Excel ANOVA output for an obvious reason: you must decide how many variables to include in the subset):

F_m = {(SS_E′ - SS_E) / m} / MS_E    (21)

F_m can be viewed as an indicator of whether the reduction in the error variance due to the addition of the subset of m variables to the restricted model (20), i.e. (SS_E′ - SS_E)/m, is statistically significant with respect to the overall error variance MS_E for the unrestricted model (19). This is equivalent to testing the hypothesis that at least one of the coefficients in the subset is not zero. In the final step, we determine the probability P_m (also "by hand"):

P_m = FDIST(F_m, m, n - k)    (22)

At the confidence level 1 - P_m, at least ONE of the m coefficients in the subset is significant. If 1 - P_m is not large enough (less than 0.95), we state that ALL m coefficients in the subset are insignificant.
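Expression (21) is simple enough to evaluate by hand or in a short script. In the sketch below, SS_E = 1.70 and MS_E = 0.57 are taken from Table 4, while the restricted-model sum of squares is a hypothetical placeholder (the handout's value is not preserved in this copy):

```python
# By-hand subset F-test, formula (21).
m = 2                 # size of the tested subset of coefficients
ss_e_unres = 1.70     # SS_E, unrestricted model (Table 4)
ms_e_unres = 0.57     # MS_E, unrestricted model (Table 4)
ss_e_res = 25.0       # SS_E', restricted model -- hypothetical value

f_m = ((ss_e_res - ss_e_unres) / m) / ms_e_unres   # (21)
print(round(f_m, 2))   # 20.44
# The probability then follows from FDIST(f_m, m, n - k), as in (22).
```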

Example. The regression output for the unrestricted model (y = b_0 + b_1*x_1 + b_2*x_2 + b_3*x_3, where x_1 = z, x_2 = z^2 and x_3 = z^3) is presented in Table 4. Say we want to test whether the quadratic and the cubic terms are significant. In this case the restricted model is:

y = b_0 + b_1*x_1, where x_1 = z  (restricted model)    (23)

The subset of parameters consists of two parameters, so m = 2. By analogy with the input table for the unrestricted model (Table 2), we prepare one for the restricted model:

Table 5. Regression input for the restricted model

| Data point # (j) | Dependent var. y* | Independent var. x_1 (= z) |
|---|---|---|

*(The numerical values of this table were not preserved in this copy.)*

We perform an additional regression using this input table and, as part of the ANOVA, obtain:

Table 6. Regression ANOVA output for the restricted model

| | df | SS | MS | F | Significance F |
|---|---|---|---|---|---|
| Regression | | | | | |
| Residual (error) | | | | | |
| Total | | | | | |

*(The numerical values of this table were not preserved in this copy.)*

From Table 4 and Table 6 we have:

- SS_E = 1.70 (error sum of squares; unrestricted model)
- MS_E = 0.57 (error mean square; unrestricted model)
- df_E = (n - k) = 3 (degrees of freedom; unrestricted model)
- SS_E′ (error sum of squares; restricted model; value not preserved in this copy)

Now we are able to calculate F_m=2:

F_m=2 = {(SS_E′ - 1.70) / 2} / 0.57

Using the Microsoft Excel function for the F-distribution, we determine the probability P_m=2:

P_m=2 = FDIST(F_m=2, 2, 3) = 3.50E-07

Finally, we calculate the level of confidence 1 - P_m=2:

1 - P_m=2 = 1 - 3.50E-07 ≈ 0.99999965

The confidence level is high (more than 99.9999%). We conclude that at least one of the parameters (b_2 or b_3) in the subset is non-zero. However, we cannot be sure that both the quadratic and cubic terms are significant.

3) Significance test of an individual coefficient in the regression model

Here the question to answer is: "With what level of confidence can we state that the ith coefficient b_i in the model is significant?" The corresponding F-number is:

F_i = b_i^2 / [se(b_i)]^2    (24)

se(b_i) is the standard error of the individual coefficient b_i and is part of the ANOVA output (see Table 4a). The corresponding probability

P_i = FDIST(F_i, 1, n - k)    (25)

leads us to the confidence level 1 - P_i at which we can state that the coefficient b_i is significant. If this level is lower than the desired one, we say that the coefficient b_i is insignificant. F_i is not part of the spreadsheet regression output, but can be calculated by hand if needed. However, there is another statistic for testing individual parameters which is part of the ANOVA output (see Table 4a):

t_i = b_i / se(b_i)    (26)

The t_i-number is the square root of F_i (expression (24)). It has a Student's t-distribution (see [1, p. 381] for the analytical form of the distribution). The corresponding probability is numerically the same as that given by (25). There is a statistical function in Microsoft Excel which allows one to determine P_i (part of the ANOVA output; see Table 4a):

P_i = TDIST(t_i, n - k, 2)    (27)
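Relations (24) and (26) imply t_i^2 = F_i, which is easy to check numerically; the coefficient and standard error below are hypothetical placeholders:

```python
# Individual-coefficient statistics (24) and (26), and the
# relation t_i^2 = F_i. Numbers are hypothetical placeholders.
b_i, se_i = 1.80, 0.35
t_i = b_i / se_i            # (26) -- Excel's "t Stat"
f_i = b_i ** 2 / se_i ** 2  # (24)
assert abs(t_i ** 2 - f_i) < 1e-12
print(round(t_i, 3))
```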

The parameters of the function (27) are: the t-number, the number of degrees of freedom df (df_E = n - k), and the form of the test (TL = 2). If TL = 1, the result for a one-tailed distribution is returned; if TL = 2, the two-tailed distribution result is returned. An interested reader can find more information about this issue in ref. [1].

Example. In our illustration, P_0 and P_3 (see Table 4a; the numerical values were not preserved in this copy) correspond to fairly low confidence levels 1 - P_0 and 1 - P_3. This suggests that the parameters b_0 and b_3 are not significant. The confidence levels for b_1 and b_2 are high, which means that they are significant.

In conclusion of this F-test discussion, it should be noted that if we remove even one insignificant variable from the model, we need to test the model once again, since coefficients which were significant may in certain cases become insignificant after the removal, and vice versa. It is good practice to use a reasonable combination of all three tests in order to reach the most reliable conclusions.

### Confidence interval

In the previous section we obtained confidence levels from given F-numbers or t-numbers. We can also go in the opposite direction: given a desired minimal confidence level 1 - P (e.g. 0.95), calculate the related F- or t-number. Microsoft Excel provides two statistical functions for that purpose:

F_(1-P) = FINV(P, df_q, df_v)    (28)

t_(1-P) = TINV(P, df)    (29)

df_q, df_v - degrees of freedom of the numerator and denominator, respectively (see (15))
df - degrees of freedom associated with a given t-test (varies from test to test)

NOTE: in expression (29), P is the probability associated with the so-called two-tailed Student's distribution. A one-tailed distribution has a different probability P′; the relationship between the two is:

P′ = P/2    (30)

Values of F-numbers and t-numbers for various probabilities and degrees of freedom are tabulated and can be found in any text on statistics [1, 2, 3, 4]. Usually the one-tailed Student's distribution is presented.

Knowing the t-number for a coefficient b_i, we can calculate the numerical interval which contains the coefficient b_i with the desired probability 1 - P_i:

b_L,(1-Pi) = b_i - se(b_i)*t_(1-Pi)  (lower limit)    (31)

b_U,(1-Pi) = b_i + se(b_i)*t_(1-Pi)  (upper limit)    (31a)

t_(1-Pi) = TINV(P_i, n - k)    (32)

The standard errors of the individual parameters, se(b_i), are part of the ANOVA output (Table 4a). The interval [b_L,(1-Pi); b_U,(1-Pi)] is called the confidence interval of the parameter b_i at the 1 - P_i confidence level. The upper and lower limits of this interval at 95% confidence are listed in the ANOVA output by default (Table 4a, columns "Lower 95%" and "Upper 95%"). If, in addition to this default, a confidence interval at a confidence other than 95% is desired, the box "Confidence level" should be checked and the value of the alternative confidence entered in the corresponding window of the regression input dialog box (see Fig. 1).

Example. For the unrestricted model (1b), the lower and upper 95% limits for the intercept (see Table 4a; the numerical values were not preserved in this copy) bracket zero. The fact that with 95% probability zero falls in this interval is consistent with our conclusion about the insignificance of b_0 made in the course of F-testing of the individual parameters (see the Example at the end of the previous section). The confidence intervals at the 95% level for b_1 and b_2 do not include zero; this also agrees with the F-test of the individual parameters. In fact, checking whether zero falls in a confidence interval can be viewed as a different way to perform the F-test (t-test) of individual parameters, and must not be used as an additional, independent proof of conclusions made in such a test.

### Regression statistics output

The information contained in the Regression statistics output characterizes the goodness of the model as a whole. Note that the quantities listed in this output can be expressed in terms of the regression F-number F_R (Table 4), which we have already used for the significance test of all coefficients.
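Formulas (31)-(31a) are plain arithmetic once the t-number is known. In the sketch below, the coefficient and its standard error are hypothetical; the t-value 3.182 is the standard two-tailed 95% critical value for 3 degrees of freedom, i.e. what TINV(0.05, 3) would return:

```python
# 95% confidence interval for a coefficient, formulas (31)-(31a).
# b_i and se_i are hypothetical placeholders; t_95 is the two-tailed
# 95% critical t-value for df = n - k = 3 (Excel: TINV(0.05, 3)).
b_i, se_i = 1.80, 0.35
t_95 = 3.182

lower = b_i - se_i * t_95   # (31)
upper = b_i + se_i * t_95   # (31a)
print(round(lower, 3), round(upper, 3))   # 0.686 2.914
```

Since zero lies outside this (hypothetical) interval, the coefficient would be judged significant at the 95% level, mirroring the zero-in-interval check described above.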

Example. For our unrestricted model (1b) the output is:

Table 7. Regression statistics output*

| Statistic | Value |
|---|---|
| Multiple R | |
| R Square (R^2) | |
| Adjusted R Square (R^2_adj) | |
| Standard Error (S_y) | |
| Observations (n) | 7 |

\* Corresponding notation used in this handout is given in parentheses. *(Except for n, the numerical values were not preserved in this copy.)*

Standard error (S_y):

S_y = (MS_E)^0.5    (33)

MS_E is the error variance discussed before (see expression (11)). The quantity S_y is an estimate of the standard error (deviation) of the experimental values of the dependent variable y* with respect to those predicted by the regression model. It is used in statistics for different purposes; one of its applications we saw in the discussion of the Residual output (standardized residuals; see expression (2a)).

Coefficient of determination R^2 (or R Square):

R^2 = SS_R / SS_T = 1 - SS_E / SS_T    (34)

SS_R, SS_E and SS_T are the regression, residual (error) and total sums of squares defined by (7), (6a) and (3a), respectively. The coefficient of determination is a measure of the regression model as a whole: the closer R^2 is to one, the better the model (1) describes the data. In the case of a perfect fit, R^2 = 1.

Adjusted coefficient of determination R^2_adj (or Adjusted R Square):

R^2_adj = 1 - {SS_E / (n - k)} / {SS_T / (n - 1)}    (35)

SS_E and SS_T are the residual (error) and the total sums of squares (see expressions (6a) and (3a)). The significance of R^2_adj is basically the same as that of R^2 (the closer to one, the better). Strictly speaking, R^2_adj should be used as the indicator of the adequacy of the model, since it takes into account not only the deviations but also the numbers of degrees of freedom.
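Formulas (33)-(35) can be evaluated directly from the ANOVA sums of squares. Below, SS_E = 1.70 and n = 7 come from the handout's example, while SS_T is a hypothetical placeholder (its value is not preserved in this copy):

```python
# Regression statistics (33)-(35) from the ANOVA sums of squares.
# ss_e and n are from the handout's Table 4 / Table 7; ss_t is a
# hypothetical placeholder.
n, k = 7, 4
ss_e = 1.70
ss_t = 300.0          # hypothetical total sum of squares

s_y = (ss_e / (n - k)) ** 0.5                       # (33), via MS_E (11)
r2 = 1 - ss_e / ss_t                                # (34)
r2_adj = 1 - (ss_e / (n - k)) / (ss_t / (n - 1))    # (35)
print(round(s_y, 3), round(r2, 4), round(r2_adj, 4))   # 0.753 0.9943 0.9887
```

Note that R^2_adj is always at most R^2, and the gap widens as the model spends more degrees of freedom, which is why (35) is the stricter adequacy indicator.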

Multiple correlation coefficient R:

R = (SS_R / SS_T)^0.5    (36)

This quantity is just the square root of the coefficient of determination.

Example. The fact that R^2_adj in our illustration is fairly close to 1 (see Table 7) suggests that the overall model (1b) is adequate to fit the experimental data presented in Table 1. However, it does not mean that there are no insignificant parameters in it.

## REGRESSION OUTPUT FORMULA MAP

For reference, the following tables present a summary of the formula numbers for the individual items in the Microsoft Excel regression output. Variables in parentheses, introduced and used in this handout, do not appear in the output.

Table 8. Formula map of the Regression statistics output

| Item | Formula |
|---|---|
| Multiple R | (36) |
| R Square (R^2) | (34) |
| Adjusted R Square (R^2_adj) | (35) |
| Standard Error (S_y) | (33) |
| Observations (n) | |

Table 9. Formula map of the Residual output

| Observation (j) | Predicted Y (y_j) | Residuals (r_j) | Standard Residuals (r′_j) |
|---|---|---|---|
| 1 | (1) | (2) | (2a) |
| 2 | (1) | (2) | (2a) |

Table 10. Formula map of the ANOVA output (part I)

| | df | SS | MS | F | Significance F |
|---|---|---|---|---|---|
| Regression | (df_R) (14) | (SS_R) (7) | (MS_R) (13) | (F_R) (17) | (P_R) (18) |
| Residual (error) | (df_E) (12) | (SS_E) (6a) | (MS_E) (11) | | |
| Total | (df_T) (10) | (SS_T) (3a) | (MS_T)* (9) | | |

\* Not reported in the Microsoft Excel regression output.

Table 10a. Formula map of the ANOVA output (part II)

| | Coefficients (b_i) | Standard Error (se(b_i)) | t Stat (t_i) | P-value (P_i) | Lower 95% (b_L,(1-Pi)) | Upper 95% (b_U,(1-Pi)) |
|---|---|---|---|---|---|---|
| Intercept (b_0) | | | (26) | (25), (27) | (31) | (31a) |
| X Variable 1 (x_1) | | | (26) | (25), (27) | (31) | (31a) |
| X Variable 2 (x_2) | | | (26) | (25), (27) | (31) | (31a) |

## LITERATURE

1. Afifi, A.A., Azen, S.P. Statistical Analysis: A Computer Oriented Approach, Academic Press, New York (1979)
2. Natrella, M.G. Experimental Statistics, National Bureau of Standards, Washington, DC (1963)
3. Neter, J., Wasserman, W. Applied Linear Statistical Models, R.D. Irwin Inc., Homewood, Illinois (1974)
4. Gunst, R.F., Mason, R.L. Regression Analysis and Its Application, Marcel Dekker Inc., New York (1980)
5. Shoemaker, D.P., Garland, C.W., Nibler, J.W. Experiments in Physical Chemistry, The McGraw-Hill Companies Inc. (1996)

### Predictor Coef StDev T P Constant 970667056 616256122 1.58 0.154 X 0.00293 0.06163 0.05 0.963. S = 0.5597 R-Sq = 0.0% R-Sq(adj) = 0.

Statistical analysis using Microsoft Excel Microsoft Excel spreadsheets have become somewhat of a standard for data storage, at least for smaller data sets. This, along with the program often being packaged

### 1. What is the critical value for this 95% confidence interval? CV = z.025 = invnorm(0.025) = 1.96

1 Final Review 2 Review 2.1 CI 1-propZint Scenario 1 A TV manufacturer claims in its warranty brochure that in the past not more than 10 percent of its TV sets needed any repair during the first two years

### Data Analysis Tools. Tools for Summarizing Data

Data Analysis Tools This section of the notes is meant to introduce you to many of the tools that are provided by Excel under the Tools/Data Analysis menu item. If your computer does not have that tool

### Using R for Linear Regression

Using R for Linear Regression In the following handout words and symbols in bold are R functions and words and symbols in italics are entries supplied by the user; underlined words and symbols are optional

### Regression step-by-step using Microsoft Excel

Step 1: Regression step-by-step using Microsoft Excel Notes prepared by Pamela Peterson Drake, James Madison University Type the data into the spreadsheet The example used throughout this How to is a regression

### Guide to Microsoft Excel for calculations, statistics, and plotting data

Page 1/47 Guide to Microsoft Excel for calculations, statistics, and plotting data Topic Page A. Writing equations and text 2 1. Writing equations with mathematical operations 2 2. Writing equations with

### Hypothesis Testing hypothesis testing approach formulation of the test statistic

Hypothesis Testing For the next few lectures, we re going to look at various test statistics that are formulated to allow us to test hypotheses in a variety of contexts: In all cases, the hypothesis testing

: Table of Contents... 1 Overview of Model... 1 Dispersion... 2 Parameterization... 3 Sigma-Restricted Model... 3 Overparameterized Model... 4 Reference Coding... 4 Model Summary (Summary Tab)... 5 Summary

### Multiple Regression Analysis in Minitab 1

Multiple Regression Analysis in Minitab 1 Suppose we are interested in how the exercise and body mass index affect the blood pressure. A random sample of 10 males 50 years of age is selected and their

### Regression Analysis: A Complete Example

Regression Analysis: A Complete Example This section works out an example that includes all the topics we have discussed so far in this chapter. A complete example of regression analysis. PhotoDisc, Inc./Getty

### Chapter 11: Two Variable Regression Analysis

Department of Mathematics Izmir University of Economics Week 14-15 2014-2015 In this chapter, we will focus on linear models and extend our analysis to relationships between variables, the definitions

### KSTAT MINI-MANUAL. Decision Sciences 434 Kellogg Graduate School of Management

KSTAT MINI-MANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To

### EXCEL Analysis TookPak [Statistical Analysis] 1. First of all, check to make sure that the Analysis ToolPak is installed. Here is how you do it:

EXCEL Analysis TookPak [Statistical Analysis] 1 First of all, check to make sure that the Analysis ToolPak is installed. Here is how you do it: a. From the Tools menu, choose Add-Ins b. Make sure Analysis

### How To Run Statistical Tests in Excel

How To Run Statistical Tests in Excel Microsoft Excel is your best tool for storing and manipulating data, calculating basic descriptive statistics such as means and standard deviations, and conducting

### POLYNOMIAL AND MULTIPLE REGRESSION. Polynomial regression used to fit nonlinear (e.g. curvilinear) data into a least squares linear regression model.

Polynomial Regression POLYNOMIAL AND MULTIPLE REGRESSION Polynomial regression used to fit nonlinear (e.g. curvilinear) data into a least squares linear regression model. It is a form of linear regression

### Simple Linear Regression Chapter 11

Simple Linear Regression Chapter 11 Rationale Frequently decision-making situations require modeling of relationships among business variables. For instance, the amount of sale of a product may be related

### One-Way Analysis of Variance (ANOVA) Example Problem

One-Way Analysis of Variance (ANOVA) Example Problem Introduction Analysis of Variance (ANOVA) is a hypothesis-testing technique used to test the equality of two or more population (or treatment) means
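The F statistic behind a one-way ANOVA like the one this excerpt introduces can be computed directly from its definition; the Python sketch below uses made-up treatment groups purely for illustration:

```python
# One-way ANOVA F statistic for k groups, computed from the definitions
# F = MS_between / MS_within. The three groups below are invented data.

def one_way_anova_F(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares: deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

groups = [[3, 4, 5], [6, 7, 8], [9, 10, 11]]
F = one_way_anova_F(groups)
```

Comparing F against the F distribution with (k − 1, n − k) degrees of freedom then gives the hypothesis test the excerpt describes.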

### Recall this chart that showed how most of our course would be organized:

Chapter 4 One-Way ANOVA Recall this chart that showed how most of our course would be organized: Explanatory Variable(s) Response Variable Methods Categorical Categorical Contingency Tables Categorical

### Lesson 15: Linear Regression

Lesson 15 Linear Regression Lesson 15 Outline Review correlation analysis Dependent and Independent variables Least Squares Regression line Calculating the slope Calculating the Intercept Residuals and

### where b is the slope of the line and a is the intercept i.e. where the line cuts the y axis.

Least Squares Introduction We have mentioned that one should not always conclude that because two variables are correlated that one variable is causing the other to behave a certain way. However, sometimes
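The slope b and intercept a referred to above come from the standard least-squares formulas; a minimal Python sketch, with invented data, is:

```python
# Least-squares fit of y = a + b*x: b is the slope, a is the intercept
# (where the line cuts the y axis). Data values are invented for illustration.

def least_squares(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    b = sxy / sxx             # slope
    a = mean_y - b * mean_x   # intercept
    return a, b

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.0, 8.1, 9.9]
a, b = least_squares(x, y)
```

The fitted line minimizes the sum of squared vertical distances from the points to the line, which is what makes it "least squares".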

### Statistical Functions in Excel

Statistical Functions in Excel There are many statistical functions in Excel. Moreover, there are other functions, not classified as statistical functions, that are helpful in some statistical analyses.

### Figure 1. An embedded chart on a worksheet.

8. Excel Charts and Analysis ToolPak Charts, also known as graphs, have been an integral part of spreadsheets since the early days of Lotus 1-2-3. Charting features have improved significantly over the

### Below is a very brief tutorial on the basic capabilities of Excel. Refer to the Excel help files for more information.

Excel Tutorial Below is a very brief tutorial on the basic capabilities of Excel. Refer to the Excel help files for more information. Working with Data Entering and Formatting Data Before entering data

### Statistics and Data Analysis

Statistics and Data Analysis In this guide I will make use of Microsoft Excel in the examples and explanations. This should not be taken as an endorsement of Microsoft or its products. In fact, there are

### NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated by a closed-form formula.

Chapter 340 Principal Components Regression Introduction Principal components regression is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates

### Obtaining Uncertainty Measures on Parameters of a

Obtaining Uncertainty Measures on Parameters of a Polynomial Least Squares Fit with Excel's LINEST Professor of Chemical Engineering Michigan Technological University, Houghton, MI 49931 24 June 2015 In

### Sydney Roberts Predicting Age Group Swimmers 50 Freestyle Time

Sydney Roberts Predicting Age Group Swimmers 50 Freestyle Time 1 Table of Contents 1. Introduction p. 2 2. Statistical Methods Used p. 5 3. 10 and under Males p. 8 4. 11 and up Males p. 10 5. 10 and under

### Bill Burton Albert Einstein College of Medicine william.burton@einstein.yu.edu April 28, 2014 EERS: Managing the Tension Between Rigor and Resources 1

Bill Burton Albert Einstein College of Medicine william.burton@einstein.yu.edu April 28, 2014 EERS: Managing the Tension Between Rigor and Resources 1 Calculate counts, means, and standard deviations Produce

### Using Minitab for Regression Analysis: An extended example

Using Minitab for Regression Analysis: An extended example The following example uses data from another text on fertilizer application and crop yield, and is intended to show how Minitab can be used to

### SELF-TEST: SIMPLE REGRESSION

ECO 22000 McRAE SELF-TEST: SIMPLE REGRESSION Note: Those questions indicated with an (N) are unlikely to appear in this form on an in-class examination, but you should be able to describe the procedures

### Final Exam Practice Problem Answers

Final Exam Practice Problem Answers The following data set consists of data gathered from 77 popular breakfast cereals. The variables in the data set are as follows: Brand: The brand name of the cereal

### SIMPLE REGRESSION ANALYSIS

SIMPLE REGRESSION ANALYSIS Introduction. Regression analysis is used when two or more variables are thought to be systematically connected by a linear relationship. In simple regression, we have only two

### Data analysis. Data analysis in Excel using Windows 7/Office 2010

Data analysis Data analysis in Excel using Windows 7/Office 2010 Open the Data tab in Excel. If Data Analysis is not visible along the top toolbar, then do the following: o Right click anywhere on the toolbar

### Randomized Block Analysis of Variance

Chapter 565 Randomized Block Analysis of Variance Introduction This module analyzes a randomized block analysis of variance with up to two treatment factors and their interaction. It provides tables of

### Chapter 7. One-way ANOVA

Chapter 7 One-way ANOVA One-way ANOVA examines equality of population means for a quantitative outcome and a single categorical explanatory variable with any number of levels. The t-test of Chapter 6 looks

### Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures

Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures Jamie DeCoster Department of Psychology University of Alabama 348 Gordon Palmer Hall Box 870348 Tuscaloosa, AL 35487-0348 Phone:

### Assignment objectives:

Assignment objectives: Regression Pivot table Exercise #1- Simple Linear Regression Often the relationship between two variables, Y and X, can be adequately represented by a simple linear equation of the

### SAS Software to Fit the Generalized Linear Model

SAS Software to Fit the Generalized Linear Model Gordon Johnston, SAS Institute Inc., Cary, NC Abstract In recent years, the class of generalized linear models has gained popularity as a statistical modeling

### t Tests in Excel The Excel Statistical Master By Mark Harmon Copyright 2011 Mark Harmon

t-tests in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. mark@excelmasterseries.com www.excelmasterseries.com

### Factorial Analysis of Variance

Chapter 560 Factorial Analysis of Variance Introduction A common task in research is to compare the average response across levels of one or more factor variables. Examples of factor variables are income

### Traditional Conjoint Analysis with Excel

Chapter 8 Traditional Conjoint Analysis with Excel A traditional conjoint analysis may be thought of as a multiple regression problem. The respondent's ratings for the product concepts are observations on the

### Getting Correct Results from PROC REG

Getting Correct Results from PROC REG Nathaniel Derby, Statis Pro Data Analytics, Seattle, WA ABSTRACT PROC REG, SAS's implementation of linear regression, is often used to fit a line without checking

### The importance of graphing the data: Anscombe's regression examples

The importance of graphing the data: Anscombe's regression examples Bruce Weaver Northern Health Research Conference Nipissing University, North Bay May 30-31, 2008 B. Weaver, NHRC 2008 1 The Objective

### 1.5 Oneway Analysis of Variance

Statistics: Rosie Cornish. 200. 1.5 Oneway Analysis of Variance 1 Introduction Oneway analysis of variance (ANOVA) is used to compare several means. This method is often used in scientific or medical experiments

### Full Factorial Design of Experiments

Full Factorial Design of Experiments 0 Module Objectives Module Objectives By the end of this module, the participant will: Generate a full factorial design Look for factor interactions Develop coded orthogonal

### EXCEL Tutorial: How to use EXCEL for Graphs and Calculations.

EXCEL Tutorial: How to use EXCEL for Graphs and Calculations. Excel is a powerful tool and can make your life easier if you are proficient in using it. You will need to use Excel to complete most of your

### 3 The spreadsheet execution model and its consequences

Paper SP06 On the use of spreadsheets in statistical analysis Martin Gregory, Merck Serono, Darmstadt, Germany 1 Abstract While most of us use spreadsheets in our everyday work, usually for keeping track

### One-Way Analysis of Variance

One-Way Analysis of Variance Note: Much of the math here is tedious but straightforward. We'll skim over it in class but you should be sure to ask questions if you don't understand it. I. Overview A. We

Bowerman, O'Connell, Aitken Schermer, & Adcock, Business Statistics in Practice, Canadian edition Online Learning Centre Technology Step-by-Step - Excel Microsoft Excel is a spreadsheet software application

### Technology Step-by-Step Using StatCrunch

Technology Step-by-Step Using StatCrunch Section 1.3 Simple Random Sampling 1. Select Data, highlight Simulate Data, then highlight Discrete Uniform. 2. Fill in the following window with the appropriate

### 1.1. Simple Regression in Excel (Excel 2010).

1.1. Simple Regression in Excel (Excel 2010). To get the Data Analysis tool, first click on File > Options > Add-Ins > Go > Select Analysis ToolPak & Analysis ToolPak VBA. Data Analysis is now available under

### Using Microsoft Excel for Probability and Statistics

Introduction Using Microsoft Excel for Probability and Statistics Despite having been set up with the business user in mind, Microsoft Excel is rather poor at handling precisely those aspects of statistics which

### Simple Regression Theory II 2010 Samuel L. Baker

SIMPLE REGRESSION THEORY II 1 Simple Regression Theory II 2010 Samuel L. Baker Assessing how good the regression equation is likely to be Assignment 1A gets into drawing inferences about how close the

### Wooldridge, Introductory Econometrics, 4th ed. Multiple regression analysis:

Wooldridge, Introductory Econometrics, 4th ed. Chapter 4: Inference Multiple regression analysis: We have discussed the conditions under which OLS estimators are unbiased, and derived the variances of

### An analysis method for a quantitative outcome and two categorical explanatory variables.

Chapter 11 Two-Way ANOVA An analysis method for a quantitative outcome and two categorical explanatory variables. If an experiment has a quantitative outcome and two categorical explanatory variables that

### Directions for using SPSS

Directions for using SPSS Table of Contents Connecting and Working with Files 1. Accessing SPSS... 2 2. Transferring Files to N:\drive or your computer... 3 3. Importing Data from Another File Format...

### Multiple Linear Regression

Multiple Linear Regression A regression with two or more explanatory variables is called a multiple regression. Rather than modeling the mean response as a straight line, as in simple regression, it is
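A multiple regression of the kind this excerpt describes can be sketched by solving the normal equations (XᵀX)b = Xᵀy directly. The Python below is an illustrative sketch: the data are invented and generated exactly from y = 1 + 2·x1 + 3·x2, so the fit recovers those coefficients:

```python
# Multiple regression with two explanatory variables, fitted by solving the
# normal equations (X'X) b = X'y with a small Gauss-Jordan elimination.

def solve(A, b):
    # Solve the linear system A v = b (A square) with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit(X, y):
    # X: rows of [1, x1, x2]; returns [intercept, b1, b2].
    p, n = len(X[0]), len(X)
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)] for i in range(p)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    return solve(XtX, Xty)

X = [[1, x1, x2] for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]]
y = [1 + 2 * x1 + 3 * x2 for _, x1, x2 in X]
coef = fit(X, y)
```

In practice one would use a numerically stabler decomposition (as spreadsheet regression tools do internally), but the normal equations make the "plane instead of line" idea concrete.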

### Using Excel for inferential statistics

FACT SHEET Using Excel for inferential statistics Introduction When you collect data, you expect a certain amount of variation, just caused by chance. A wide variety of statistical tests can be applied

### E205 Final: Version B

Name: Class: Date: E205 Final: Version B Multiple Choice Identify the choice that best completes the statement or answers the question. 1. The owner of a local nightclub has recently surveyed a random

### CHAPTER 13 SIMPLE LINEAR REGRESSION. Opening Example. Simple Regression. Linear Regression

Opening Example CHAPTER 13 SIMPLE LINEAR REGRESSION SIMPLE LINEAR REGRESSION! Simple Regression! Linear Regression Simple Regression Definition A regression model is a mathematical equation that describes the

### Multiple Linear Regression in Data Mining

Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple

### Analysis of Variance. MINITAB User s Guide 2 3-1

3 Analysis of Variance Analysis of Variance Overview, 3-2 One-Way Analysis of Variance, 3-5 Two-Way Analysis of Variance, 3-11 Analysis of Means, 3-13 Overview of Balanced ANOVA and GLM, 3-18 Balanced

### Chapter 7: Simple linear regression Learning Objectives

Chapter 7: Simple linear regression Learning Objectives Reading: Section 7.1 of OpenIntro Statistics Video: Correlation vs. causation, YouTube (2:19) Video: Intro to Linear Regression, YouTube (5:18) -

### Simple Methods and Procedures Used in Forecasting

Simple Methods and Procedures Used in Forecasting The project was prepared by Sven Gingelmaier and Michael Richter under the direction of Maria Jadamus-Hacura. What Is Forecasting? Prediction of future events

### Regression in ANOVA. James H. Steiger. Department of Psychology and Human Development Vanderbilt University

Regression in ANOVA James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) 1 / 30 Regression in ANOVA 1 Introduction 2 Basic Linear

### In Chapter 2, we used linear regression to describe linear relationships. The setting for this is a

Math 143 Inference on Regression 1 Review of Linear Regression In Chapter 2, we used linear regression to describe linear relationships. The setting for this is a bivariate data set (i.e., a list of cases/subjects

### Simple Linear Regression Inference

Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not an OLS requirement. Therefore we have: Interpretation

### Regression III: Dummy Variable Regression

Regression III: Dummy Variable Regression Tom Ilvento FREC 408 Linear Regression Assumptions about the error term Mean of Probability Distribution of the Error term is zero Probability Distribution of

### Simple Predictive Analytics Curtis Seare

Using Excel to Solve Business Problems: Simple Predictive Analytics Curtis Seare Copyright: Vault Analytics July 2010 Contents Section I: Background Information Why use Predictive Analytics? How to use

### Curve Fitting. Before You Begin

Curve Fitting Chapter 16: Curve Fitting Before You Begin Selecting the Active Data Plot When performing linear or nonlinear fitting while the graph window is active, you must make the desired data plot

### Chapter 4 and 5 solutions

Chapter 4 and 5 solutions 4.4. Three different washing solutions are being compared to study their effectiveness in retarding bacteria growth in five-gallon milk containers. The analysis is done in a laboratory,

### 17. SIMPLE LINEAR REGRESSION II

17. SIMPLE LINEAR REGRESSION II The Model In linear regression analysis, we assume that the relationship between X and Y is linear. This does not mean, however, that Y can be perfectly predicted from X.
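Since Y cannot be perfectly predicted from X, the residuals matter, and the coefficient of determination R² measures the fraction of variation in Y that a fitted line explains. A minimal Python sketch with invented data, evaluated for a hand-picked line y = 2x rather than a fitted one:

```python
# R-squared for a given line y = a + b*x: the share of the variation in Y
# around its mean that the line accounts for. Data invented for illustration.

def r_squared(x, y, a, b):
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))  # residual SS
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)                    # total SS
    return 1 - ss_res / ss_tot

x = [1, 2, 3, 4]
y = [2, 4, 5, 8]
r2 = r_squared(x, y, 0, 2)   # line y = 2x, chosen by hand for illustration
```

An R² near 1 means the line leaves little residual variation; an R² near 0 means it explains almost none.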

### Collinearity of independent variables. Collinearity is a condition in which some of the independent variables are highly correlated.

Collinearity of independent variables Collinearity is a condition in which some of the independent variables are highly correlated. Why is this a problem? Collinearity tends to inflate the variance of
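A quick collinearity check is the correlation between pairs of independent variables; the pure-Python Pearson r sketch below uses invented values, with x2 deliberately close to a multiple of x1 so the correlation comes out near 1:

```python
import math

# Pearson correlation between two independent variables. Values near +/-1
# signal the collinearity (and inflated-variance) problem described above.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x1 = [1, 2, 3, 4, 5]
x2 = [2.0, 4.1, 6.0, 8.2, 10.1]   # nearly an exact multiple of x1
r = pearson_r(x1, x2)
```

With more than two predictors, variance inflation factors generalize this pairwise check, since a predictor can be collinear with a combination of the others.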

### Statistics Review PSY379

Statistics Review PSY379 Basic concepts Measurement scales Populations vs. samples Continuous vs. discrete variable Independent vs. dependent variable Descriptive vs. inferential stats Common analyses

### 1. The parameters to be estimated in the simple linear regression model Y=α+βx+ε ε~n(0,σ) are: a) α, β, σ b) α, β, ε c) a, b, s d) ε, 0, σ

STA 3024 Practice Problems Exam 2 NOTE: These are just Practice Problems. This is NOT meant to look just like the test, and it is NOT the only thing that you should study. Make sure you know all the material

### Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression

Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression Objectives: To perform a hypothesis test concerning the slope of a least squares line To recognize that testing for a
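The test this excerpt describes uses t = b / SE(b), where SE(b) = s/√Sxx and s is the residual standard error; a Python sketch with invented data:

```python
import math

# t statistic for H0: slope = 0 in simple linear regression.
# t = b / SE(b), SE(b) = s / sqrt(Sxx), s = residual standard error.
# The data values are invented for illustration.

def slope_t_stat(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    s = math.sqrt(ss_res / (n - 2))     # residual standard error
    se_b = s / math.sqrt(sxx)
    return b / se_b

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.0, 8.1, 9.9]
t = slope_t_stat(x, y)
```

The resulting t is compared to the t distribution with n − 2 degrees of freedom, which is exactly the "t Stat" and p-value Excel reports for each coefficient in its regression output.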

### Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software

STATA Tutorial Professor Erdinç Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software 1.Wald Test Wald Test is used

### HOW TO USE MINITAB: DESIGN OF EXPERIMENTS. Noelle M. Richard 08/27/14

HOW TO USE MINITAB: DESIGN OF EXPERIMENTS 1 Noelle M. Richard 08/27/14 CONTENTS 1. Terminology 2. Factorial Designs When to Use? (preliminary experiments) Full Factorial Design General Full Factorial Design

### Data Mining and Data Warehousing. Henryk Maciejewski. Data Mining Predictive modelling: regression

Data Mining and Data Warehousing Henryk Maciejewski Data Mining Predictive modelling: regression Algorithms for Predictive Modelling Contents Regression Classification Auxiliary topics: Estimation of prediction

### 2. Simple Linear Regression

Research Methods II 2. Simple Linear Regression Simple linear regression is a technique in parametric statistics that is commonly used for analyzing the mean response of a variable Y which changes according

### Statistical Modelling in Stata 5: Linear Models

Statistical Modelling in Stata 5: Linear Models Mark Lunt Arthritis Research UK Centre for Excellence in Epidemiology University of Manchester 08/11/2016 Structure This Week What is a linear model? How

### Regression in SPSS. Workshop offered by the Mississippi Center for Supercomputing Research and the UM Office of Information Technology

Regression in SPSS Workshop offered by the Mississippi Center for Supercomputing Research and the UM Office of Information Technology John P. Bentley Department of Pharmacy Administration University of

### REGRESSION LINES IN STATA

REGRESSION LINES IN STATA THOMAS ELLIOTT 1. Introduction to Regression Regression analysis is about exploring linear relationships between a dependent variable and one or more independent variables. Regression

### LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING

LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in an engineering setting.

### Name: Student ID#: Serial #:

STAT 22 Business Statistics II- Term3 KING FAHD UNIVERSITY OF PETROLEUM & MINERALS Department Of Mathematics & Statistics DHAHRAN, SAUDI ARABIA STAT 22: BUSINESS STATISTICS II Third Exam July, 202 9:20

### Notation and Equations for Regression Lecture 11/4. Notation:

Notation: Notation and Equations for Regression Lecture 11/4 m: The number of predictor variables in a regression Xi: One of multiple predictor variables. The subscript i represents any number from 1 through

### Chapter 5: Basic Statistics and Hypothesis Testing

Chapter 5: Basic Statistics and Hypothesis Testing In this chapter: 1. Viewing the t-value from an OLS regression (UE 5.2.1) 2. Calculating critical t-values and applying the decision rule (UE 5.2.2) 3.

### Chapter 11: Linear Regression - Inference in Regression Analysis - Part 2

Chapter 11: Linear Regression - Inference in Regression Analysis - Part 2 Note: Whether we calculate confidence intervals or perform hypothesis tests we need the distribution of the statistic we will use.

### Using Microsoft Excel to Analyze Data from the Disk Diffusion Assay

Using Microsoft Excel to Analyze Data from the Disk Diffusion Assay Entering and Formatting Data Open Excel. Set up the spreadsheet page (Sheet 1) so that anyone who reads it will understand the page (Figure

### 1.1 Solving a Linear Equation ax + b = 0

1.1 Solving a Linear Equation ax + b = 0 To solve an equation ax + b = 0: (i) move b to the other side (subtract b from both sides), giving ax = -b; (ii) divide both sides by a, giving x = -b/a.
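The two steps above amount to x = −b/a whenever a is nonzero; as a trivial Python sketch:

```python
# Solve ax + b = 0 by moving b across and dividing by a (a must be nonzero).

def solve_linear(a, b):
    if a == 0:
        raise ValueError("a must be nonzero")
    return -b / a

x = solve_linear(2, -6)   # solves 2x - 6 = 0
```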

### The F-Test by Hand Calculator

1 The F-Test by Hand Calculator Where possible, one-way analysis of variance summaries should be obtained using a statistical computer package. Hand calculation is a tedious and error-prone process, especially

### For example, enter the following data in three COLUMNS in a new View window.

Statistics with Statview - 18 Paired t-test A paired t-test compares two groups of measurements when the data in the two groups are in some way paired between the groups (e.g., before and after on the
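The paired t statistic is computed on the within-pair differences, not on the two groups separately; a minimal Python sketch with invented before/after measurements:

```python
import math

# Paired t-test: one-sample t statistic on the pairwise differences.
# Before/after values are invented for illustration.

def paired_t(before, after):
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

before = [10, 12, 9, 11]
after = [8, 11, 7, 10]
t = paired_t(before, after)
```

Pairing removes the between-subject variability, which is why this test is usually more powerful than a two-sample t-test on the same data.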

### Multiple Regression

Multiple Regression Simple or total correlation: relationship between one dependent and one independent variable, Y versus X. Coefficient of simple determination: r²_YX. Partial correlation:

### Introduction to Regression and Data Analysis

Statlab Workshop Introduction to Regression and Data Analysis with Dan Campbell and Sherlock Campbell October 28, 2008 I. The basics A. Types of variables Your variables may take several forms, and it

### Hypothesis Testing Level I Quantitative Methods. IFT Notes for the CFA exam

Hypothesis Testing 2014 Level I Quantitative Methods IFT Notes for the CFA exam Contents 1. Introduction... 3 2. Hypothesis Testing... 3 3. Hypothesis Tests Concerning the Mean... 10 4. Hypothesis Tests

### Statistics for Management II-STAT 362-Final Review

Statistics for Management II-STAT 362-Final Review Multiple Choice Identify the letter of the choice that best completes the statement or answers the question. 1. The ability of an interval estimate to

### 4.4. Further Analysis within ANOVA

4.4. Further Analysis within ANOVA 1) Estimation of the effects Fixed effects model: α_i = µ_i − µ is estimated by a_i = (x̄_i − x̄) if H_0: µ_1 = µ_2 = ... = µ_k is rejected. Random effects model: If H_0: σ_A

### 2013 MBA Jump Start Program. Statistics Module Part 3

2013 MBA Jump Start Program Module 1: Statistics Thomas Gilbert Part 3 Hypothesis Testing (Inference) Regressions Making an Investment Decision A researcher in your firm just