MULTIPLE LINEAR REGRESSION ANALYSIS USING MICROSOFT EXCEL

by Michael L. Orlov, Chemistry Department, Oregon State University (1996)
INTRODUCTION

In modern science, regression analysis is a necessary part of virtually any data reduction process. Popular spreadsheet programs, such as Quattro Pro, Microsoft Excel, and Lotus, provide comprehensive statistical packages, which include a regression tool among many others. Usually the regression module is explained clearly enough in the on-line help and spreadsheet documentation (i.e. the items in the regression input dialog box). However, the description of the output is minimal and is often a mystery for a user unfamiliar with the underlying statistical concepts.

The objective of this short handout is to give a more detailed description of the regression tool and to touch upon related statistical topics in a readable manner. It is designed for science undergraduate and graduate students inexperienced in statistical matters. The regression output in Microsoft Excel is fairly standard and is chosen as the basis for illustrations and examples (Quattro Pro and Lotus use an almost identical format).

CLASSIFICATION OF REGRESSION MODELS

In a regression analysis we study the relationship, called the regression function, between one variable y, called the dependent variable, and several others x_i, called the independent variables. The regression function also involves a set of unknown parameters b_i. If a regression function is linear in the parameters (but not necessarily in the independent variables!), we term it a linear regression model; otherwise, the model is called non-linear. Linear regression models with more than one independent variable are referred to as multiple linear models, as opposed to simple linear models with one independent variable.
The following notation is used in this work:

    y       - dependent variable (predicted by a regression model)
    y*      - dependent variable (experimental value)
    p       - number of independent variables (number of coefficients)
    x_i (i = 1, 2, ..., p)  - ith independent variable from the total set of p variables
    b_i (i = 1, 2, ..., p)  - ith coefficient, corresponding to x_i
    b_0     - intercept (or constant)
    k = p + 1  - total number of parameters, including the intercept (constant)
    n       - number of observations (experimental data points)
    i = 1, 2, ..., p  - independent variables index
    j = 1, 2, ..., n  - data points index

Now let us illustrate the classification of regression models with mathematical expressions.

Multiple linear model. General formula:

    y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_p*x_p        (1)

or

    y = b_0 + SUM_i b_i*x_i,   i = 1, 2, ..., p        (1a)

Polynomial (the model is linear in the parameters, but not in the independent variables):

    y = b_0 + b_1*x + b_2*x^2 + b_3*x^3 + ... + b_p*x^p,

which is just a specific case of (1) with x_1 = x, x_2 = x^2, x_3 = x^3, ..., x_p = x^p.

Simple linear model:

    y = b_0 + b_1*x_1

It is obvious that the simple linear model is just a specific case of the multiple one, with k = 2 (p = 1).

Non-linear model:

    y = A*(1 - e^(-B*x)),  where A and B are parameters.

In the further discussion we restrict ourselves to multiple linear regression analysis.
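The point that a polynomial is linear in the parameters can be checked numerically: once the powers of x are treated as separate independent variables, ordinary linear least squares fits the cubic directly. The following sketch uses NumPy (assumed available) and made-up, noise-free data, so the fit recovers the generating coefficients.

```python
import numpy as np

# Made-up, noise-free data generated from y = 2 + 3x + 0.5x^2 - 0.1x^3.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = 2.0 + 3.0 * x + 0.5 * x**2 - 0.1 * x**3

# Treat x, x^2 and x^3 as three separate independent variables x1, x2, x3.
# The model y = b0 + b1*x1 + b2*x2 + b3*x3 is then linear in the parameters,
# so ordinary least squares applies despite the cubic dependence on x.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

print(b)  # recovers [2.0, 3.0, 0.5, -0.1] up to rounding
```

The same design-matrix construction is exactly what the spreadsheet input table (Table 2 below) does by hand.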
MAIN OBJECTIVES OF MULTIPLE LINEAR REGRESSION ANALYSIS

Our primary goal is to determine the best set of parameters b_i, such that the model predicts the experimental values of the dependent variable as accurately as possible (i.e. the calculated values y_j should be close to the experimental values y_j*). We also wish to judge whether the model itself is adequate to fit the observed experimental data (i.e. whether we chose the correct mathematical form of it). Finally, we need to check whether all terms in our model are significant (i.e. whether the improvement in the goodness of fit due to the addition of a certain term to the model is bigger than the noise in the experimental data).

DESCRIPTION OF REGRESSION INPUT AND OUTPUT

The standard regression output of spreadsheet programs provides the information needed to reach the objectives raised in the previous section. We now explain how to do that and touch upon related statistical terms and definitions. The following numerical example will be used throughout the handout to illustrate the discussion:

Table 1. Original experimental data
(columns: data point # j, y_j*, z_j; the numerical values did not survive in this copy)

We choose y* to be the dependent experimental observable and z to be the independent one. Suppose we have, say, theoretical reasons to believe that the relationship between the two is:

    y* = b_0 + b_1*z + b_2*z^2 + b_3*z^3
We can rewrite this expression in the form (1):

    y = b_0 + b_1*x_1 + b_2*x_2 + b_3*x_3,  where x_1 = z, x_2 = z^2 and x_3 = z^3        (1b)

In the next step we prepare the spreadsheet input table for the regression analysis:

Table 2. Regression input
(columns: data point # j, dependent variable y*, independent variables x_1 (=z), x_2 (=z^2), x_3 (=z^3); the numerical values did not survive in this copy)

In order to perform a regression analysis we choose from the Microsoft Excel menu*:

    Tools - Data Analysis - Regression

Note that the Data Analysis tool should have been previously added to Microsoft Excel during the program setup (Tools - Add-Ins - Analysis ToolPak). The pop-up input dialog box is shown in Fig. 1. The elements of this box are described in the on-line help; most of them become clear in the course of our discussion as well. The Input Y range refers to the spreadsheet cells containing the dependent variable y*, and the Input X range to those containing the independent variables x (in our example x = x_1, x_2, x_3; see Table 2). If we do not want to force our model through the origin, we leave the Constant is Zero box unchecked. The meaning of the Confidence level entry will become clear later. The Output options block allows one to choose the content and location of the regression output. The minimal output has two parts: Regression Statistics and ANOVA (ANalysis Of VAriance). Checking the appropriate boxes in the sub-blocks Residuals and Normal Probability will expand the default output information. We omit the Normal Probability output from our discussion.

* In Quattro Pro the sequence is Tools - Numeric Tools - Analysis Tools - Advanced Regression. If in the last step one chooses Regression instead of Advanced Regression, a simplified regression output is obtained.
Fig. 1. Regression input dialog box
[dialog-box figure not reproduced in this copy]

Residual output

Example. In Microsoft Excel the residual output has the following format:

Table 3. Residual output*
(columns: Observation (j), Predicted Y (y_j), Residuals (r_j), Standard Residuals (r'_j); the numerical values did not survive in this copy)

* Corresponding notation used in this handout is given in parentheses.

A residual (or error, or deviation) is the difference between the observed value y_j* of the dependent variable for the jth experimental data point (x_1j, x_2j, ..., x_pj, y_j*) and the corresponding value y_j given by the regression function y_j = b_0 + b_1*x_1j + b_2*x_2j + ... + b_p*x_pj (y_j = b_0 + b_1*x_1j + b_2*x_2j + b_3*x_3j in our example):

    r_j = y_j* - y_j        (2)

The parameters b (b_0, b_1, b_2, ..., b_p) are part of the ANOVA output (discussed later). If there is an obvious correlation between the residuals and the independent variable x (say, the residuals systematically increase with increasing x), it means that the chosen model is not adequate to fit the experiment (e.g. we may need to add an extra term x_4 = z^4 to our model (1b)). A plot of the residuals is very helpful in detecting such a correlation. This plot is included in the regression output if the box Residual Plots was checked in the regression input dialog window (Fig. 1).

Example. [X Variable 1 residual plot not reproduced in this copy.]

However, the fact that the residuals look random and show no obvious correlation with the variable x does not by itself mean that the model is adequate. More tests are needed.

A standard (or standardized) residual is a residual scaled with respect to the standard error (deviation) S_y in the dependent variable:

    r'_j = r_j / S_y        (2a)

The quantity S_y is part of the Regression Statistics output (discussed later). Standardized residuals are used for some statistical tests, which are not usually needed for models in the physical sciences.
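Expressions (2) and (2a) can be made concrete with a few lines of NumPy. The data below are invented for illustration (the handout's original data set did not survive in this copy), and S_y is computed as sqrt(SS_E/(n - k)), anticipating expressions (11) and (33) discussed later.

```python
import numpy as np

# Invented "experimental" data: a quadratic trend plus a fixed noise vector.
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
noise = np.array([0.2, -0.3, 0.1, 0.4, -0.2, -0.1, -0.1])
y_star = 1.0 + 2.0 * z + 0.3 * z**2 + noise

# Fit the cubic model (1b): y = b0 + b1*z + b2*z^2 + b3*z^3.
X = np.column_stack([np.ones_like(z), z, z**2, z**3])
b, *_ = np.linalg.lstsq(X, y_star, rcond=None)

y_pred = X @ b            # predicted Y, as in Table 3
r = y_star - y_pred       # residuals r_j = y_j* - y_j, expression (2)

# Standard error S_y = sqrt(SS_E / (n - k)); see expressions (11) and (33).
n, k = len(z), X.shape[1]
S_y = np.sqrt(np.sum(r**2) / (n - k))
r_std = r / S_y           # standardized residuals, expression (2a)
```

With an intercept in the model, the residuals sum to (numerically) zero, which is a quick sanity check on any least-squares fit.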
ANOVA output

There are two tables in the ANOVA (Analysis of Variance) output.

Example

Table 4. ANOVA output (part I)*

                      df          SS           MS           F        Significance F
    Regression        3 (df_R)    (SS_R)       (MS_R)       (F_R)    6.12E-09 (P_R)
    Residual (error)  3 (df_E)    1.70 (SS_E)  0.57 (MS_E)
    Total             6 (df_T)    (SS_T)       N/A (MS_T)

Table 4a. ANOVA output (part II)*

                        Coefficients  Standard Error  t Stat  P-value  Lower 95%     Upper 95%
                        (b_i)         (se(b_i))       (t_i)   (P_i)    (b_L,(1-Pi))  (b_U,(1-Pi))
    Intercept (b_0)
    X Variable 1 (x_1)
    X Variable 2 (x_2)
    X Variable 3 (x_3)

(Entries left blank above did not survive in this copy.)

* Corresponding notation used in this handout is given in parentheses; N/A means not available in the Microsoft Excel regression output.

Coefficients. The regression program determines the best set of parameters b (b_0, b_1, b_2, ..., b_p) in the model y_j = b_0 + b_1*x_1j + b_2*x_2j + ... + b_p*x_pj by minimizing the error sum of squares SS_E (discussed later). The coefficients are listed in the second table of the ANOVA output (see Table 4a). These coefficients allow the program to calculate the predicted values of the dependent variable y (y_1, y_2, ..., y_n), which were used above in formula (2) and are part of the Residual output (Table 3).

Sum of squares. In general, the sum of squares of some arbitrary variable q is determined as:

    SS_q = SUM_j (q_j - q_avg)^2,  j = 1, ..., n        (3)

where
    q_j   - jth observation out of n total observations of the quantity q
    q_avg - average value of q over the n observations: q_avg = (SUM_j q_j)/n
In the ANOVA regression output one will find three types of sums of squares (see Table 4):

1) Total sum of squares SS_T:

    SS_T = SUM_j (y_j* - y*_avg)^2,  where y*_avg = (SUM_j y_j*)/n        (3a)

It is obvious that SS_T is the sum of squares of deviations of the experimental values of the dependent variable y* from its average value. SS_T can be interpreted as the sum of squared deviations of y* from the simplest possible model (y is constant and does not depend on any variable x):

    y = b_0,  with b_0 = y*_avg        (4)

SS_T has two contributors: the residual (error) sum of squares SS_E and the regression sum of squares SS_R:

    SS_T = SS_E + SS_R        (5)

2) Residual (or error) sum of squares SS_E:

    SS_E = SUM_j (r_j - r_avg)^2        (6)

Since in the underlying theory the expected value of the residuals, r_avg, is assumed to be zero, expression (6) simplifies to:

    SS_E = SUM_j (r_j)^2        (6a)

The significance of this quantity is that the spreadsheet regression tool determines the best set of parameters b = (b_0, b_1, b_2, ..., b_p) for a given regression model by minimizing SS_E. SS_E can also be viewed as the contribution to the total sum of squares SS_T due to the random scattering of y* about the predicted line. This is the reason for calling the quantity the "due to error" (residual) sum of squares.

3) Regression sum of squares SS_R:

    SS_R = SUM_j (y_j - y*_avg)^2        (7)

SS_R is the sum of squares of deviations of the values of the dependent variable y predicted by the regression model from its average experimental value y*_avg. It accounts for the addition of the p variables (x_1, x_2, ..., x_p) to the simplest possible model (4) (the variable y is just a constant and does not depend on the variables x), i.e. y = b_0 vs. y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_p*x_p. Since this is a transformation from the non-regression model (4) to the true regression model (1), SS_R is called the "due to regression" sum of squares.
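The decomposition (5) is easy to verify numerically. A NumPy sketch with invented data (the data set is made up for illustration; the identity SS_T = SS_E + SS_R holds for any least-squares fit that includes an intercept):

```python
import numpy as np

# Invented data and a cubic least-squares fit, as in model (1b).
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
noise = np.array([0.2, -0.3, 0.1, 0.4, -0.2, -0.1, -0.1])
y_star = 1.0 + 2.0 * z + 0.3 * z**2 + noise
X = np.column_stack([np.ones_like(z), z, z**2, z**3])
b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
y_pred = X @ b

y_avg = y_star.mean()
SS_T = np.sum((y_star - y_avg) ** 2)   # total sum of squares, (3a)
SS_E = np.sum((y_star - y_pred) ** 2)  # residual (error) sum of squares, (6a)
SS_R = np.sum((y_pred - y_avg) ** 2)   # regression sum of squares, (7)

# Decomposition (5): SS_T = SS_E + SS_R (up to floating-point rounding).
print(SS_T, SS_E + SS_R)
```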
The definition of SS_R in the form (7) is not always given in the literature. One can find different expressions in books on statistics [1, 2]:

    SS_R = SUM_i b_i * SUM_j (x_ij - x_i,avg) * y_j*,  where x_i,avg = (SUM_j x_ij)/n        (7a)

or

    SS_R = SUM_i b_i * SUM_j x_ij * y_j* - (SUM_j y_j*)^2 / n        (7b)

(the sums over i run from 1 to p and those over j from 1 to n). Relationships (7a) and (7b) give the same numerical result; however, it is difficult to see the physical meaning of SS_R from them.

Mean square (variance) and degrees of freedom

The general expression for the mean square of an arbitrary quantity q is:

    MS_q = SS_q / df        (8)

SS_q is defined by (3) and df is the number of degrees of freedom associated with the quantity SS_q. MS is also often referred to as the variance. The number of degrees of freedom can be viewed as the difference between the number of observations n and the number of constraints (fixed parameters associated with the corresponding sum of squares SS_q).

1) Total mean square MS_T (total variance):

    MS_T = SS_T / (n - 1)        (9)

SS_T is associated with the model (4), which has only one constraint (the parameter b_0); therefore the number of degrees of freedom in this case is:

    df_T = n - 1        (10)

2) Residual (error) mean square MS_E (error variance):

    MS_E = SS_E / (n - k)        (11)

SS_E is associated with the random error around the regression model (1), which has k = p + 1 parameters (one per each of the p variables, plus the intercept). This means there are k constraints, and the number of degrees of freedom is:

    df_E = n - k        (12)
3) Regression mean square MS_R (regression variance):

    MS_R = SS_R / (k - 1)        (13)

The number of degrees of freedom in this case can be viewed as the difference between the total number of degrees of freedom df_T (10) and the number of degrees of freedom for the residuals df_E (12):

    df_R = df_T - df_E = (n - 1) - (n - k)
    df_R = k - 1 = p        (14)

Tests of significance and F-numbers

The F-number is a quantity which can be used to test for the statistical difference between two variances. For example, if we have two random variables q and v, the corresponding F-number is:

    F_qv = MS_q / MS_v        (15)

The variances MS_q and MS_v are defined by an expression of type (8). In order to tell whether the two variances are statistically different, we determine the corresponding probability P from the F-distribution function:

    P = P(F_qv, df_q, df_v)        (16)

The quantities df_q and df_v, the degrees of freedom of the numerator and denominator, are parameters of this function. Tabulated numerical values of P for the F-distribution can be found in various texts on statistics, or simply determined in a spreadsheet directly by using the corresponding statistical function (e.g. in Microsoft Excel one would use FDIST(F_qv, df_q, df_v) to return the numerical value of P). An interested reader can find the analytical form of P = P(F_qv, df_q, df_v) in the literature (e.g. [1, p. 383]).

The probability P given by (16) is the probability that the variances MS_q and MS_v are statistically indistinguishable. On the other hand, 1 - P is the probability that they are different and is often called the confidence level. Conventionally, a reasonable confidence level is 0.95 or higher. If it turns out that 1 - P < 0.95, we say that MS_q and MS_v are statistically the same. If 1 - P > 0.95, we say that at least with 0.95 (or 95%) confidence MS_q and MS_v are different. The higher the confidence level, the more reliable our conclusion. The procedure just described is called the F-test.
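The generic F-test (15)-(16) can be reproduced outside the spreadsheet. In the sketch below (invented data, SciPy assumed available), scipy.stats.f.sf plays the role of Excel's FDIST: both return the upper-tail probability of the F-distribution.

```python
import numpy as np
from scipy.stats import f

# Two invented samples whose variances we want to compare.
q = np.array([5.6, 4.2, 6.1, 3.9, 5.8, 4.5])   # visibly more scattered
v = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2])   # tightly clustered

# Sample variances are mean squares of type (8), each with n - 1 degrees
# of freedom (one constraint: the sample mean).
df_q, df_v = len(q) - 1, len(v) - 1
MS_q = np.sum((q - q.mean()) ** 2) / df_q
MS_v = np.sum((v - v.mean()) ** 2) / df_v

# F-number (15) and the probability (16) that the two variances are
# statistically indistinguishable; f.sf is the analogue of Excel's FDIST.
F_qv = MS_q / MS_v
P = f.sf(F_qv, df_q, df_v)
print(F_qv, P, 1 - P)   # 1 - P is the confidence level
```

For these particular numbers 1 - P exceeds 0.95, so the two variances would be declared statistically different at the conventional confidence level.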
There are several F-tests related to regression analysis. We will discuss the three most common ones, which deal with the significance of parameters in the regression model. The first and the last of them are performed by the spreadsheet regression tool automatically, whereas the second one is not.

1) Significance test of all coefficients in the regression model

In this case we ask ourselves: "With what level of confidence can we state that AT LEAST ONE of the coefficients b (b_1, b_2, ..., b_p) in the regression model is significantly different from zero?"

The first step is to calculate the F-number for the whole regression (part of the regression output; see Table 4):

    F_R = MS_R / MS_E        (17)

The second step is to determine the numerical value of the corresponding probability P_R (also part of the regression output; see Table 4):

    P_R = FDIST(F_R, df_R, df_E)        (18)

Taking into account expressions (12) and (14) we obtain:

    P_R = FDIST(F_R, k - 1, n - k)        (18a)

Finally we can determine the confidence level 1 - P_R. At this level of confidence, the variance due to regression, MS_R, is statistically different from the variance due to error, MS_E. In its turn this means that the addition of the p variables (x_1, x_2, ..., x_p) to the simplest model (4) (the dependent variable y is just a constant) is a statistically significant improvement of the fit. Thus, at a confidence level of not less than 1 - P_R we can say: "At least ONE of the coefficients in the model is significant." F_R can also be used to compare two models describing the same experimental data: the higher F_R, the more adequate the corresponding model.

Example. In our illustrative exercise we have P_R = 6.12E-09 (Table 4); the corresponding level of confidence 1 - P_R = 1 - 6.12E-09 is very close to 1. Therefore, with confidence close to 100%, we can say that at least one of the coefficients b_1, b_2 and b_3 is significant for the model y = b_0 + b_1*x_1 + b_2*x_2 + b_3*x_3, where x_1 = z, x_2 = z^2 and x_3 = z^3.

NOTE: From this test, however, we cannot be sure that ALL of the coefficients b_1, b_2 and b_3 are non-zero.
If 1 - P_R is not big enough (usually less than 0.95), we conclude that ALL the coefficients in the regression model are zero (in other words, that the hypothesis that the variable y is just a constant is better than that it is a function of the variables x (x_1, x_2, ..., x_p)).
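Test 1 can be reproduced numerically. The sketch below uses invented data (not the handout's original data set) and SciPy's f.sf as the FDIST analogue; it implements expressions (17)-(18a) directly.

```python
import numpy as np
from scipy.stats import f

# Invented data and a cubic least-squares fit, as in model (1b).
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
noise = np.array([0.2, -0.3, 0.1, 0.4, -0.2, -0.1, -0.1])
y_star = 1.0 + 2.0 * z + 0.3 * z**2 + noise
X = np.column_stack([np.ones_like(z), z, z**2, z**3])
b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
y_pred = X @ b

n, k = len(z), X.shape[1]
SS_T = np.sum((y_star - y_star.mean()) ** 2)
SS_E = np.sum((y_star - y_pred) ** 2)
SS_R = SS_T - SS_E
MS_R = SS_R / (k - 1)          # regression variance, (13)
MS_E = SS_E / (n - k)          # error variance, (11)

F_R = MS_R / MS_E              # F-number for the whole regression, (17)
P_R = f.sf(F_R, k - 1, n - k)  # FDIST(F_R, k - 1, n - k), (18a)
print(F_R, P_R, 1 - P_R)       # 1 - P_R: confidence that >= 1 coefficient matters
```

For this strongly trending invented data set, P_R comes out far below 0.05, so at least one coefficient is significant, just as in the handout's own example.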
2) Significance test of a subset of coefficients in the regression model

Now we want to decide: "With what level of confidence can we be sure that at least ONE of the coefficients in a selected subset of all the coefficients is significant?" Let us test a subset of the last m coefficients in a model with a total of p coefficients (b_1, b_2, ..., b_p). Here we need to consider two models:

    y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_p*x_p            (unrestricted)  (19)

and

    y = b_0 + b_1*x_1 + b_2*x_2 + ... + b_(p-m)*x_(p-m)    (restricted)    (20)

These models are called unrestricted (19) and restricted (20), respectively. We need to perform two separate least-squares regression analyses, one for each model. From the regression output (see Table 4) for each model we obtain the corresponding error sums of squares, SS_E for the unrestricted model and SS_E' for the restricted one, as well as the error variance MS_E for the unrestricted model. The next step is to calculate the F-number for testing the subset of m variables by hand (it is not part of the Microsoft Excel ANOVA for an obvious reason: you must decide how many variables to include in the subset):

    F_m = {(SS_E' - SS_E) / m} / MS_E        (21)

F_m can be viewed as an indicator of whether the reduction in the error variance due to the addition of the subset of m variables to the restricted model (20), (SS_E' - SS_E)/m, is statistically significant with respect to the overall error variance MS_E of the unrestricted model (19). This is equivalent to testing the hypothesis that at least one of the coefficients in the subset is not zero. In the final step, we determine the probability P_m (also by hand):

    P_m = FDIST(F_m, m, n - k)        (22)

At the confidence level 1 - P_m, at least ONE of the coefficients in the subset of m is significant. If 1 - P_m is not big enough (less than 0.95), we state that ALL m coefficients in the subset are insignificant.
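Since the spreadsheet does not compute (21)-(22) automatically, it is worth seeing the whole procedure in code. The sketch below (invented data; SciPy assumed available) fits the unrestricted and restricted models and tests the last m = 2 terms.

```python
import numpy as np
from scipy.stats import f

# Invented data, as in the earlier sketches.
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
noise = np.array([0.2, -0.3, 0.1, 0.4, -0.2, -0.1, -0.1])
y_star = 1.0 + 2.0 * z + 0.3 * z**2 + noise
n = len(z)

def fit_sse(X):
    """Least-squares fit of y_star on X; return the error sum of squares SS_E."""
    b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return np.sum((y_star - X @ b) ** 2)

# Unrestricted model (19): y = b0 + b1*z + b2*z^2 + b3*z^3  (k = 4).
X_u = np.column_stack([np.ones(n), z, z**2, z**3])
# Restricted model (20): the last m = 2 terms (z^2 and z^3) dropped.
X_r = np.column_stack([np.ones(n), z])

k, m = X_u.shape[1], X_u.shape[1] - X_r.shape[1]
SSE_u, SSE_r = fit_sse(X_u), fit_sse(X_r)
MS_E = SSE_u / (n - k)

# F-number (21) and probability (22) for the subset of m coefficients.
F_m = ((SSE_r - SSE_u) / m) / MS_E
P_m = f.sf(F_m, m, n - k)
```

The restricted model can never fit better than the unrestricted one, so SS_E' >= SS_E always holds; the F-test asks whether the improvement is larger than the noise.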
Example. The regression output for the unrestricted model (y = b_0 + b_1*x_1 + b_2*x_2 + b_3*x_3, where x_1 = z, x_2 = z^2 and x_3 = z^3) is presented in Table 4. Say we want to test whether the quadratic and the cubic terms are significant. In this case the restricted model is:

    y = b_0 + b_1*x_1,  where x_1 = z        (23)

The subset of parameters consists of two parameters, so m = 2. By analogy with the input table for the unrestricted model (Table 2), we prepare one for the restricted model:

Table 5. Regression input for the restricted model
(columns: data point # j, dependent variable y*, independent variable x_1 (=z); the numerical values did not survive in this copy)

We perform an additional regression using this input table and as part of the ANOVA obtain:

Table 6. Regression ANOVA output for the restricted model
(df, SS, MS, F and Significance F for the Regression, Residual (error) and Total rows; the numerical values did not survive in this copy)

From Table 4 and Table 6 we have:

    SS_E  = 1.70       (error sum of squares; unrestricted model)
    MS_E  = 0.57       (error mean square; unrestricted model)
    df_E  = n - k = 3  (degrees of freedom; unrestricted model)
    SS_E'              (error sum of squares; restricted model; value lost in this copy)

Now we are able to calculate F_m=2:

    F_m=2 = {(SS_E' - 1.70) / 2} / 0.57
Using the Microsoft Excel function for the F-distribution we determine the probability P_m=2:

    P_m=2 = FDIST(F_m=2, 2, 3) = 3.50E-07

Finally we calculate the level of confidence:

    1 - P_m=2 = 1 - 3.50E-07 = 0.99999965

The confidence level is high (more than 99.9999%). We conclude that at least one of the parameters (b_2 or b_3) in the subset is non-zero. However, we cannot be sure that both the quadratic and the cubic terms are significant.

3) Significance test of an individual coefficient in the regression model

Here the question to answer is: "With what confidence level can we state that the ith coefficient b_i in the model is significant?" The corresponding F-number is:

    F_i = b_i^2 / [se(b_i)]^2        (24)

se(b_i) is the standard error of the individual coefficient b_i and is part of the ANOVA output (see Table 4a). The corresponding probability

    P_i = FDIST(F_i, 1, n - k)        (25)

leads us to the confidence level 1 - P_i at which we can state that the coefficient b_i is significant. If this level is lower than the desired one, we say that the coefficient b_i is insignificant. F_i is not part of the spreadsheet regression output, but can be calculated by hand if needed. However, there is another statistic for testing individual parameters, which is part of the ANOVA output (see Table 4a):

    t_i = b_i / se(b_i)        (26)

The t_i-number is the square root of F_i (expression (24)). It has a Student's distribution (see [1, p. 381] for the analytical form of the distribution). The corresponding probability is numerically the same as that given by (25). There is a statistical function in Microsoft Excel which allows one to determine P_i (part of the ANOVA output; see Table 4a):

    P_i = TDIST(t_i, n - k, 2)        (27)
The parameters of the function (27) are the number of degrees of freedom (df_E = n - k) and the form of the test (TL = 2). If TL = 1, a result for a one-tailed distribution is returned; if TL = 2, a two-tailed distribution result is returned. An interested reader can find more information about this issue in ref. [1].

Example. In our illustration, P_0 and P_3 (see Table 4a) correspond to fairly low confidence levels 1 - P_0 and 1 - P_3 (the numerical values did not survive in this copy). This suggests that the parameters b_0 and b_3 are not significant. The confidence levels for b_1 and b_2 are high, which means that they are significant.

In conclusion of this F-test discussion, it should be noted that if we remove even one insignificant variable from the model, we need to test the model once again, since coefficients which were significant in certain cases might become insignificant after the removal, and vice versa. It is good practice to use a reasonable combination of all three tests in order to achieve the most reliable conclusions.

Confidence interval

In the previous section we obtained confidence levels from given F-numbers or t-numbers. We can also go in the opposite direction: given a desired minimal confidence level 1 - P (e.g. 0.95), calculate the related F- or t-number. Microsoft Excel provides two statistical functions for that purpose:

    F_(1-P) = FINV(P, df_q, df_v)        (28)
    t_(1-P) = TINV(P, df)                (29)

df_q and df_v are the degrees of freedom of the numerator and denominator, respectively (see (15)), and df is the degree of freedom associated with a given t-test (it varies from test to test).

NOTE: in expression (29), P is the probability associated with the so-called two-tailed Student's distribution. A one-tailed distribution has a different probability P'. The relationship between the two is:

    P' = P / 2        (30)

Values of F-numbers and t-numbers for various probabilities and degrees of freedom are tabulated and can be found in any text on statistics [1, 2, 3, 4]. Usually the one-tailed Student's distribution is presented.
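The t-statistic (26), the TDIST probability (27) and the TINV critical value (29) all have direct SciPy analogues. The sketch below uses invented data; the standard errors se(b_i) are computed from the usual least-squares covariance matrix MS_E*(X'X)^-1, which is how the spreadsheet obtains them as well.

```python
import numpy as np
from scipy.stats import t

# Invented data and a cubic least-squares fit, as in model (1b).
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
noise = np.array([0.2, -0.3, 0.1, 0.4, -0.2, -0.1, -0.1])
y_star = 1.0 + 2.0 * z + 0.3 * z**2 + noise
X = np.column_stack([np.ones_like(z), z, z**2, z**3])
b, *_ = np.linalg.lstsq(X, y_star, rcond=None)

n, k = len(z), X.shape[1]
MS_E = np.sum((y_star - X @ b) ** 2) / (n - k)

# Standard errors se(b_i): square roots of the diagonal of MS_E * (X'X)^-1.
cov = MS_E * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))

t_i = b / se                          # t statistics, (26)
P_i = 2 * t.sf(np.abs(t_i), n - k)    # two-tailed TDIST(t_i, n - k, 2) analogue, (27)

# TINV(0.05, n - k) analogue: the critical two-tailed t value at the 95% level.
t_crit = t.ppf(1 - 0.05 / 2, n - k)

# 95% confidence limits for each coefficient, as in (31)/(31a) below.
lower, upper = b - se * t_crit, b + se * t_crit
```

A coefficient whose interval [lower, upper] contains zero would be judged insignificant at the 95% level, mirroring the discussion of confidence intervals that follows.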
Knowing the t-number for a coefficient b_i, we can calculate the numerical interval which contains the coefficient b_i with the desired probability 1 - P_i:

    b_L,(1-Pi) = b_i - se(b_i) * t_(1-Pi)        (lower limit)  (31)
    b_U,(1-Pi) = b_i + se(b_i) * t_(1-Pi)        (upper limit)  (31a)
    t_(1-Pi) = TINV(P_i, n - k)                  (32)

The standard errors of the individual parameters, se(b_i), are part of the ANOVA output (Table 4a). The interval [b_L,(1-Pi); b_U,(1-Pi)] is called the confidence interval of the parameter b_i at the 1 - P_i confidence level. The upper and lower limits of this interval at 95% confidence are listed in the ANOVA output by default (Table 4a; columns "Lower 95%" and "Upper 95%"). If, in addition to this default, a confidence interval at a confidence other than 95% is desired, the Confidence level box should be checked and the value of the alternative confidence entered in the corresponding window of the regression input dialog box (see Fig. 1).

Example. For the unrestricted model (1b), the 95% confidence interval for the intercept (see Table 4a) contains zero. The fact that with 95% probability zero falls in this interval is consistent with our conclusion of the insignificance of b_0 made in the course of the F-testing of individual parameters (see the Example at the end of the previous section). The confidence intervals at the 95% level for b_1 and b_2 do not include zero. This also agrees with the F-test of individual parameters. In fact, checking whether zero falls in a confidence interval can be viewed as a different way to perform the F-test (t-test) of individual parameters, and must not be used as an additional proof of the conclusions made in such a test.

Regression statistics output

The information contained in the Regression Statistics output characterizes the goodness of the model as a whole. Note that the quantities listed in this output can be expressed in terms of the regression F-number F_R (Table 4), which we have already used for the significance test of all coefficients.
Example. For our unrestricted model (1b) the output is:

Table 7. Regression statistics output*

    Multiple R
    R Square (R^2)
    Adjusted R Square (R^2_adj)
    Standard Error (S_y)
    Observations (n)              7

* Corresponding notation used in this handout is given in parentheses. (The numerical values, other than n, did not survive in this copy.)

Standard error (S_y):

    S_y = (MS_E)^0.5        (33)

MS_E is the error variance discussed before (see expression (11)). The quantity S_y is an estimate of the standard error (deviation) of the experimental values of the dependent variable y* with respect to those predicted by the regression model. It is used in statistics for different purposes; one of its applications we saw in the discussion of the Residual output (standardized residuals; see expression (2a)).

Coefficient of determination R^2 (or R Square):

    R^2 = SS_R / SS_T = 1 - SS_E / SS_T        (34)

SS_R, SS_E and SS_T are the regression, residual (error) and total sums of squares defined by (7), (6a) and (3a), respectively. The coefficient of determination is a measure of the regression model as a whole: the closer R^2 is to one, the better the model (1) describes the data. In the case of a perfect fit, R^2 = 1.

Adjusted coefficient of determination R^2_adj (or Adjusted R Square):

    R^2_adj = 1 - {SS_E / (n - k)} / {SS_T / (n - 1)}        (35)

SS_E and SS_T are the residual (error) and total sums of squares (see expressions (6a) and (3a)). The significance of R^2_adj is basically the same as that of R^2 (the closer to one, the better). Strictly speaking, R^2_adj should be used as the indicator of the adequacy of the model, since it takes into account not only the deviations but also the numbers of degrees of freedom.
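The goodness-of-fit statistics (33)-(35) are one-liners once the sums of squares are in hand. A NumPy sketch with invented data:

```python
import numpy as np

# Invented data and a cubic least-squares fit, as in model (1b).
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
noise = np.array([0.2, -0.3, 0.1, 0.4, -0.2, -0.1, -0.1])
y_star = 1.0 + 2.0 * z + 0.3 * z**2 + noise
X = np.column_stack([np.ones_like(z), z, z**2, z**3])
b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
y_pred = X @ b

n, k = len(z), X.shape[1]
SS_T = np.sum((y_star - y_star.mean()) ** 2)
SS_E = np.sum((y_star - y_pred) ** 2)
SS_R = SS_T - SS_E

S_y = np.sqrt(SS_E / (n - k))                     # standard error, (33)
R2 = SS_R / SS_T                                  # R Square, (34)
R2_adj = 1 - (SS_E / (n - k)) / (SS_T / (n - 1))  # Adjusted R Square, (35)
```

Note that R^2_adj is never larger than R^2 (it equals 1 - (1 - R^2)(n - 1)/(n - k)), which is why it is the safer indicator when extra terms are added to a model.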
Multiple correlation coefficient R:

    R = (SS_R / SS_T)^0.5        (36)

This quantity is just the square root of the coefficient of determination.

Example. The fact that R^2_adj in our illustration is fairly close to 1 (see Table 7) suggests that the overall model (1b) is adequate to fit the experimental data presented in Table 1. However, it does not mean that there are no insignificant parameters in it.

REGRESSION OUTPUT FORMULA MAP

For reference, the following tables present a summary of the formula numbers for the individual items in the Microsoft Excel regression output. Variables in parentheses, introduced and used in this handout, do not appear in the output.

Table 8. Formula map of the Regression statistics output

    Multiple R                      (36)
    R Square (R^2)                  (34)
    Adjusted R Square (R^2_adj)     (35)
    Standard Error (S_y)            (33)
    Observations (n)

Table 9. Formula map of the Residual output

    Observation (j)   Predicted Y (y_j)   Residuals (r_j)   Standard Residuals (r'_j)
    1                 (1)                 (2)               (2a)
    2                 (1)                 (2)               (2a)
    ...

Table 10. Formula map of the ANOVA output (part I)

                      df           SS           MS           F           Significance F
    Regression        (df_R) (14)  (SS_R) (7)   (MS_R) (13)  (F_R) (17)  (P_R) (18)
    Residual (error)  (df_E) (12)  (SS_E) (6a)  (MS_E) (11)
    Total             (df_T) (10)  (SS_T) (3a)  (MS_T)* (9)

* Not reported in the Microsoft Excel regression output.
Table 10a. Formula map of the ANOVA output (part II)

                        Coefficients  Standard Error  t Stat  P-value     Lower 95%     Upper 95%
                        (b_i)         (se(b_i))       (t_i)   (P_i)       (b_L,(1-Pi))  (b_U,(1-Pi))
    Intercept (b_0)                                   (26)    (25), (27)  (31)          (31a)
    X Variable 1 (x_1)                                (26)    (25), (27)  (31)          (31a)
    X Variable 2 (x_2)                                (26)    (25), (27)  (31)          (31a)

LITERATURE

1. Afifi A.A., Azen S.P., Statistical Analysis: A Computer Oriented Approach, Academic Press, New York (1979)
2. Natrella M.G., Experimental Statistics, National Bureau of Standards, Washington, DC (1963)
3. Neter J., Wasserman W., Applied Linear Statistical Models, R.D. Irwin Inc., Homewood, Illinois (1974)
4. Gunst R.F., Mason R.L., Regression Analysis and Its Application, Marcel Dekker Inc., New York (1980)
5. Shoemaker D.P., Garland C.W., Nibler J.W., Experiments in Physical Chemistry, The McGraw-Hill Companies Inc. (1996)
Bill Burton Albert Einstein College of Medicine [email protected] April 28, 2014 EERS: Managing the Tension Between Rigor and Resources 1 Calculate counts, means, and standard deviations Produce
Figure 1. An embedded chart on a worksheet.
8. Excel Charts and Analysis ToolPak Charts, also known as graphs, have been an integral part of spreadsheets since the early days of Lotus 1-2-3. Charting features have improved significantly over the
EXCEL Tutorial: How to use EXCEL for Graphs and Calculations.
EXCEL Tutorial: How to use EXCEL for Graphs and Calculations. Excel is powerful tool and can make your life easier if you are proficient in using it. You will need to use Excel to complete most of your
Statistics and Data Analysis
Statistics and Data Analysis In this guide I will make use of Microsoft Excel in the examples and explanations. This should not be taken as an endorsement of Microsoft or its products. In fact, there are
Simple Regression Theory II 2010 Samuel L. Baker
SIMPLE REGRESSION THEORY II 1 Simple Regression Theory II 2010 Samuel L. Baker Assessing how good the regression equation is likely to be Assignment 1A gets into drawing inferences about how close the
NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( )
Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates
Final Exam Practice Problem Answers
Final Exam Practice Problem Answers The following data set consists of data gathered from 77 popular breakfast cereals. The variables in the data set are as follows: Brand: The brand name of the cereal
Traditional Conjoint Analysis with Excel
hapter 8 Traditional onjoint nalysis with Excel traditional conjoint analysis may be thought of as a multiple regression problem. The respondent s ratings for the product concepts are observations on the
Getting Correct Results from PROC REG
Getting Correct Results from PROC REG Nathaniel Derby, Statis Pro Data Analytics, Seattle, WA ABSTRACT PROC REG, SAS s implementation of linear regression, is often used to fit a line without checking
t Tests in Excel The Excel Statistical Master By Mark Harmon Copyright 2011 Mark Harmon
t-tests in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. [email protected] www.excelmasterseries.com
Chapter 7. One-way ANOVA
Chapter 7 One-way ANOVA One-way ANOVA examines equality of population means for a quantitative outcome and a single categorical explanatory variable with any number of levels. The t-test of Chapter 6 looks
Randomized Block Analysis of Variance
Chapter 565 Randomized Block Analysis of Variance Introduction This module analyzes a randomized block analysis of variance with up to two treatment factors and their interaction. It provides tables of
Simple Methods and Procedures Used in Forecasting
Simple Methods and Procedures Used in Forecasting The project prepared by : Sven Gingelmaier Michael Richter Under direction of the Maria Jadamus-Hacura What Is Forecasting? Prediction of future events
Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures
Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures Jamie DeCoster Department of Psychology University of Alabama 348 Gordon Palmer Hall Box 870348 Tuscaloosa, AL 35487-0348 Phone:
One-Way Analysis of Variance
One-Way Analysis of Variance Note: Much of the math here is tedious but straightforward. We ll skim over it in class but you should be sure to ask questions if you don t understand it. I. Overview A. We
SAS Software to Fit the Generalized Linear Model
SAS Software to Fit the Generalized Linear Model Gordon Johnston, SAS Institute Inc., Cary, NC Abstract In recent years, the class of generalized linear models has gained popularity as a statistical modeling
Using Microsoft Excel for Probability and Statistics
Introduction Using Microsoft Excel for Probability and Despite having been set up with the business user in mind, Microsoft Excel is rather poor at handling precisely those aspects of statistics which
3 The spreadsheet execution model and its consequences
Paper SP06 On the use of spreadsheets in statistical analysis Martin Gregory, Merck Serono, Darmstadt, Germany 1 Abstract While most of us use spreadsheets in our everyday work, usually for keeping track
Multiple Linear Regression
Multiple Linear Regression A regression with two or more explanatory variables is called a multiple regression. Rather than modeling the mean response as a straight line, as in simple regression, it is
1.5 Oneway Analysis of Variance
Statistics: Rosie Cornish. 200. 1.5 Oneway Analysis of Variance 1 Introduction Oneway analysis of variance (ANOVA) is used to compare several means. This method is often used in scientific or medical experiments
CHAPTER 13 SIMPLE LINEAR REGRESSION. Opening Example. Simple Regression. Linear Regression
Opening Example CHAPTER 13 SIMPLE LINEAR REGREION SIMPLE LINEAR REGREION! Simple Regression! Linear Regression Simple Regression Definition A regression model is a mathematical equation that descries the
Simple Predictive Analytics Curtis Seare
Using Excel to Solve Business Problems: Simple Predictive Analytics Curtis Seare Copyright: Vault Analytics July 2010 Contents Section I: Background Information Why use Predictive Analytics? How to use
Bowerman, O'Connell, Aitken Schermer, & Adcock, Business Statistics in Practice, Canadian edition
Bowerman, O'Connell, Aitken Schermer, & Adcock, Business Statistics in Practice, Canadian edition Online Learning Centre Technology Step-by-Step - Excel Microsoft Excel is a spreadsheet software application
Simple Linear Regression Inference
Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not a OLS requirement. Therefore we have: Interpretation
Using Excel for inferential statistics
FACT SHEET Using Excel for inferential statistics Introduction When you collect data, you expect a certain amount of variation, just caused by chance. A wide variety of statistical tests can be applied
Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression
Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression Objectives: To perform a hypothesis test concerning the slope of a least squares line To recognize that testing for a
Assignment objectives:
Assignment objectives: Regression Pivot table Exercise #1- Simple Linear Regression Often the relationship between two variables, Y and X, can be adequately represented by a simple linear equation of the
Curve Fitting. Before You Begin
Curve Fitting Chapter 16: Curve Fitting Before You Begin Selecting the Active Data Plot When performing linear or nonlinear fitting when the graph window is active, you must make the desired data plot
1.1. Simple Regression in Excel (Excel 2010).
.. Simple Regression in Excel (Excel 200). To get the Data Analysis tool, first click on File > Options > Add-Ins > Go > Select Data Analysis Toolpack & Toolpack VBA. Data Analysis is now available under
Multiple Regression in SPSS This example shows you how to perform multiple regression. The basic command is regression : linear.
Multiple Regression in SPSS This example shows you how to perform multiple regression. The basic command is regression : linear. In the main dialog box, input the dependent variable and several predictors.
Directions for using SPSS
Directions for using SPSS Table of Contents Connecting and Working with Files 1. Accessing SPSS... 2 2. Transferring Files to N:\drive or your computer... 3 3. Importing Data from Another File Format...
Analysis of Variance. MINITAB User s Guide 2 3-1
3 Analysis of Variance Analysis of Variance Overview, 3-2 One-Way Analysis of Variance, 3-5 Two-Way Analysis of Variance, 3-11 Analysis of Means, 3-13 Overview of Balanced ANOVA and GLM, 3-18 Balanced
Chapter 4 and 5 solutions
Chapter 4 and 5 solutions 4.4. Three different washing solutions are being compared to study their effectiveness in retarding bacteria growth in five gallon milk containers. The analysis is done in a laboratory,
" Y. Notation and Equations for Regression Lecture 11/4. Notation:
Notation: Notation and Equations for Regression Lecture 11/4 m: The number of predictor variables in a regression Xi: One of multiple predictor variables. The subscript i represents any number from 1 through
2. Simple Linear Regression
Research methods - II 3 2. Simple Linear Regression Simple linear regression is a technique in parametric statistics that is commonly used for analyzing mean response of a variable Y which changes according
Data Mining and Data Warehousing. Henryk Maciejewski. Data Mining Predictive modelling: regression
Data Mining and Data Warehousing Henryk Maciejewski Data Mining Predictive modelling: regression Algorithms for Predictive Modelling Contents Regression Classification Auxiliary topics: Estimation of prediction
Multiple Linear Regression in Data Mining
Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple
Chapter 7: Simple linear regression Learning Objectives
Chapter 7: Simple linear regression Learning Objectives Reading: Section 7.1 of OpenIntro Statistics Video: Correlation vs. causation, YouTube (2:19) Video: Intro to Linear Regression, YouTube (5:18) -
1. The parameters to be estimated in the simple linear regression model Y=α+βx+ε ε~n(0,σ) are: a) α, β, σ b) α, β, ε c) a, b, s d) ε, 0, σ
STA 3024 Practice Problems Exam 2 NOTE: These are just Practice Problems. This is NOT meant to look just like the test, and it is NOT the only thing that you should study. Make sure you know all the material
Using Microsoft Excel to Analyze Data from the Disk Diffusion Assay
Using Microsoft Excel to Analyze Data from the Disk Diffusion Assay Entering and Formatting Data Open Excel. Set up the spreadsheet page (Sheet 1) so that anyone who reads it will understand the page (Figure
The importance of graphing the data: Anscombe s regression examples
The importance of graphing the data: Anscombe s regression examples Bruce Weaver Northern Health Research Conference Nipissing University, North Bay May 30-31, 2008 B. Weaver, NHRC 2008 1 The Objective
MULTIPLE REGRESSION AND ISSUES IN REGRESSION ANALYSIS
MULTIPLE REGRESSION AND ISSUES IN REGRESSION ANALYSIS MSR = Mean Regression Sum of Squares MSE = Mean Squared Error RSS = Regression Sum of Squares SSE = Sum of Squared Errors/Residuals α = Level of Significance
Univariate Regression
Univariate Regression Correlation and Regression The regression line summarizes the linear relationship between 2 variables Correlation coefficient, r, measures strength of relationship: the closer r is
Statistics Review PSY379
Statistics Review PSY379 Basic concepts Measurement scales Populations vs. samples Continuous vs. discrete variable Independent vs. dependent variable Descriptive vs. inferential stats Common analyses
LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING
LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in engineering setting.
An analysis method for a quantitative outcome and two categorical explanatory variables.
Chapter 11 Two-Way ANOVA An analysis method for a quantitative outcome and two categorical explanatory variables. If an experiment has a quantitative outcome and two categorical explanatory variables that
Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software
STATA Tutorial Professor Erdinç Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software 1.Wald Test Wald Test is used
Advanced Excel for Institutional Researchers
Advanced Excel for Institutional Researchers Presented by: Sandra Archer Helen Fu University Analysis and Planning Support University of Central Florida September 22-25, 2012 Agenda Sunday, September 23,
17. SIMPLE LINEAR REGRESSION II
17. SIMPLE LINEAR REGRESSION II The Model In linear regression analysis, we assume that the relationship between X and Y is linear. This does not mean, however, that Y can be perfectly predicted from X.
Using Excel for Statistics Tips and Warnings
Using Excel for Statistics Tips and Warnings November 2000 University of Reading Statistical Services Centre Biometrics Advisory and Support Service to DFID Contents 1. Introduction 3 1.1 Data Entry and
Simple linear regression
Simple linear regression Introduction Simple linear regression is a statistical method for obtaining a formula to predict values of one variable from another where there is a causal relationship between
GLM I An Introduction to Generalized Linear Models
GLM I An Introduction to Generalized Linear Models CAS Ratemaking and Product Management Seminar March 2009 Presented by: Tanya D. Havlicek, Actuarial Assistant 0 ANTITRUST Notice The Casualty Actuarial
Chapter 13 Introduction to Linear Regression and Correlation Analysis
Chapter 3 Student Lecture Notes 3- Chapter 3 Introduction to Linear Regression and Correlation Analsis Fall 2006 Fundamentals of Business Statistics Chapter Goals To understand the methods for displaing
Using Excel s Analysis ToolPak Add-In
Using Excel s Analysis ToolPak Add-In S. Christian Albright, September 2013 Introduction This document illustrates the use of Excel s Analysis ToolPak add-in for data analysis. The document is aimed at
1/27/2013. PSY 512: Advanced Statistics for Psychological and Behavioral Research 2
PSY 512: Advanced Statistics for Psychological and Behavioral Research 2 Introduce moderated multiple regression Continuous predictor continuous predictor Continuous predictor categorical predictor Understand
Week TSX Index 1 8480 2 8470 3 8475 4 8510 5 8500 6 8480
1) The S & P/TSX Composite Index is based on common stock prices of a group of Canadian stocks. The weekly close level of the TSX for 6 weeks are shown: Week TSX Index 1 8480 2 8470 3 8475 4 8510 5 8500
Testing for Lack of Fit
Chapter 6 Testing for Lack of Fit How can we tell if a model fits the data? If the model is correct then ˆσ 2 should be an unbiased estimate of σ 2. If we have a model which is not complex enough to fit
Introduction to Regression and Data Analysis
Statlab Workshop Introduction to Regression and Data Analysis with Dan Campbell and Sherlock Campbell October 28, 2008 I. The basics A. Types of variables Your variables may take several forms, and it
Business Valuation Review
Business Valuation Review Regression Analysis in Valuation Engagements By: George B. Hawkins, ASA, CFA Introduction Business valuation is as much as art as it is science. Sage advice, however, quantitative
ANOVA. February 12, 2015
ANOVA February 12, 2015 1 ANOVA models Last time, we discussed the use of categorical variables in multivariate regression. Often, these are encoded as indicator columns in the design matrix. In [1]: %%R
AP Physics 1 and 2 Lab Investigations
AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks
HOW TO USE MINITAB: DESIGN OF EXPERIMENTS. Noelle M. Richard 08/27/14
HOW TO USE MINITAB: DESIGN OF EXPERIMENTS 1 Noelle M. Richard 08/27/14 CONTENTS 1. Terminology 2. Factorial Designs When to Use? (preliminary experiments) Full Factorial Design General Full Factorial Design
2013 MBA Jump Start Program. Statistics Module Part 3
2013 MBA Jump Start Program Module 1: Statistics Thomas Gilbert Part 3 Statistics Module Part 3 Hypothesis Testing (Inference) Regressions 2 1 Making an Investment Decision A researcher in your firm just
Causal Forecasting Models
CTL.SC1x -Supply Chain & Logistics Fundamentals Causal Forecasting Models MIT Center for Transportation & Logistics Causal Models Used when demand is correlated with some known and measurable environmental
When to use Excel. When NOT to use Excel 9/24/2014
Analyzing Quantitative Assessment Data with Excel October 2, 2014 Jeremy Penn, Ph.D. Director When to use Excel You want to quickly summarize or analyze your assessment data You want to create basic visual
Module 6: Introduction to Time Series Forecasting
Using Statistical Data to Make Decisions Module 6: Introduction to Time Series Forecasting Titus Awokuse and Tom Ilvento, University of Delaware, College of Agriculture and Natural Resources, Food and
Using Microsoft Excel to Analyze Data
Entering and Formatting Data Using Microsoft Excel to Analyze Data Open Excel. Set up the spreadsheet page (Sheet 1) so that anyone who reads it will understand the page. For the comparison of pipets:
Statistical flaws in Excel
Statistical flaws in Excel Hans Pottel Innogenetics NV, Technologiepark 6, 9052 Zwijnaarde, Belgium Introduction In 1980, our ENBIS Newsletter Editor, Tony Greenfield, published a paper on Statistical
business statistics using Excel OXFORD UNIVERSITY PRESS Glyn Davis & Branko Pecar
business statistics using Excel Glyn Davis & Branko Pecar OXFORD UNIVERSITY PRESS Detailed contents Introduction to Microsoft Excel 2003 Overview Learning Objectives 1.1 Introduction to Microsoft Excel
August 2012 EXAMINATIONS Solution Part I
August 01 EXAMINATIONS Solution Part I (1) In a random sample of 600 eligible voters, the probability that less than 38% will be in favour of this policy is closest to (B) () In a large random sample,
Premaster Statistics Tutorial 4 Full solutions
Premaster Statistics Tutorial 4 Full solutions Regression analysis Q1 (based on Doane & Seward, 4/E, 12.7) a. Interpret the slope of the fitted regression = 125,000 + 150. b. What is the prediction for
What are the place values to the left of the decimal point and their associated powers of ten?
The verbal answers to all of the following questions should be memorized before completion of algebra. Answers that are not memorized will hinder your ability to succeed in geometry and algebra. (Everything
Estimation of σ 2, the variance of ɛ
Estimation of σ 2, the variance of ɛ The variance of the errors σ 2 indicates how much observations deviate from the fitted surface. If σ 2 is small, parameters β 0, β 1,..., β k will be reliably estimated
Microsoft Excel Tutorial
Microsoft Excel Tutorial Microsoft Excel spreadsheets are a powerful and easy to use tool to record, plot and analyze experimental data. Excel is commonly used by engineers to tackle sophisticated computations
A Primer on Forecasting Business Performance
A Primer on Forecasting Business Performance There are two common approaches to forecasting: qualitative and quantitative. Qualitative forecasting methods are important when historical data is not available.
Stat 5303 (Oehlert): Tukey One Degree of Freedom 1
Stat 5303 (Oehlert): Tukey One Degree of Freedom 1 > catch
Module 5: Multiple Regression Analysis
Using Statistical Data Using to Make Statistical Decisions: Data Multiple to Make Regression Decisions Analysis Page 1 Module 5: Multiple Regression Analysis Tom Ilvento, University of Delaware, College
CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA
We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical
Statistical Models in R
Statistical Models in R Some Examples Steven Buechler Department of Mathematics 276B Hurley Hall; 1-6233 Fall, 2007 Outline Statistical Models Linear Models in R Regression Regression analysis is the appropriate
X X X a) perfect linear correlation b) no correlation c) positive correlation (r = 1) (r = 0) (0 < r < 1)
CORRELATION AND REGRESSION / 47 CHAPTER EIGHT CORRELATION AND REGRESSION Correlation and regression are statistical methods that are commonly used in the medical literature to compare two or more variables.
One-Way ANOVA using SPSS 11.0. SPSS ANOVA procedures found in the Compare Means analyses. Specifically, we demonstrate
1 One-Way ANOVA using SPSS 11.0 This section covers steps for testing the difference between three or more group means using the SPSS ANOVA procedures found in the Compare Means analyses. Specifically,
Using Excel for Handling, Graphing, and Analyzing Scientific Data:
Using Excel for Handling, Graphing, and Analyzing Scientific Data: A Resource for Science and Mathematics Students Scott A. Sinex Barbara A. Gage Department of Physical Sciences and Engineering Prince
Introduction to Statistical Computing in Microsoft Excel By Hector D. Flores; [email protected], and Dr. J.A. Dobelman
Introduction to Statistical Computing in Microsoft Excel By Hector D. Flores; [email protected], and Dr. J.A. Dobelman Statistics lab will be mainly focused on applying what you have learned in class with
E(y i ) = x T i β. yield of the refined product as a percentage of crude specific gravity vapour pressure ASTM 10% point ASTM end point in degrees F
Random and Mixed Effects Models (Ch. 10) Random effects models are very useful when the observations are sampled in a highly structured way. The basic idea is that the error associated with any linear,
