# UNDERSTANDING THE TWO-WAY ANOVA


We have seen how the one-way ANOVA can be used to compare two or more sample means in studies involving a single independent variable. This can be extended to two independent variables (still maintaining a single dependent variable, which falls under the general heading of univariate statistics). It should come as no surprise that the kind of ANOVA to be considered here is referred to as a two-way ANOVA. A minimum of four cell means is involved in any two-way ANOVA (i.e., two independent variables with a minimum of two levels each). Like any one-way ANOVA, a two-way ANOVA focuses on group means. Because it is an inferential technique, any two-way ANOVA is actually concerned with the set of µ values that correspond to the sample means computed from the study's data. Statistically significant findings deal with means. If the phrase "on average" or some similar wording does not appear in the research report, your readers may make the literal interpretation that all of the individuals in one group outperformed those in the comparison group(s). Statistical assumptions may need to be tested, and the research questions will dictate whether planned and/or post hoc comparisons are used in conjunction with (or in lieu of) the two-way ANOVA.

The two-way ANOVA goes by several names. For example, given that a factor is an independent variable, we can call it a two-way factorial design or a two-factor ANOVA. Another way of labeling this design is in terms of the number of levels of each factor. For example, if a study had two levels of the first independent variable and five levels of the second independent variable, it would be referred to as a 2 × 5 factorial or a 2 × 5 ANOVA. Factorial ANOVA is used to address research questions that focus on differences in the means of one dependent variable when there are two or more independent variables.
For example, consider the following research question: is there a difference in the fitness levels of men and women who fall into one of three different age groups? In this case we have one dependent variable (i.e., fitness) and two independent variables (i.e., gender and age). The first of these, gender, has two levels (i.e., men and women), and the second, age, has three levels. The appropriate statistical test to address this question is known as a 2 × 3 factorial ANOVA, where the numbers 2 and 3 refer to the number of levels in the two respective independent variables.

## THE VARIABLES IN THE TWO-WAY ANOVA

There are two kinds of variables in any ANOVA:

1. INDEPENDENT VARIABLE. A variable that is controlled or manipulated by the researcher; a categorical (discrete) variable used to form the groupings of observations. There are two types of independent variables: active and attribute. If the independent variable is an active variable, we manipulate the values of the variable to study its effect on another variable; for example, anxiety level can be an active independent variable. An attribute independent variable is a variable that we do not alter during the study. For example, we might want to study the effect of age on weight. We cannot change a person's age, but we can study people of different ages and weights.
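To make the 2 × 3 layout concrete, here is a minimal sketch in Python. All scores, level labels, and group sizes below are invented for illustration; the point is only that crossing a 2-level factor with a 3-level factor yields six cells.

```python
# Hypothetical 2 x 3 factorial layout for the fitness example:
# gender (2 levels) crossed with age group (3 levels) gives 6 cells.
# Every score and label here is invented for illustration.
scores = {
    ("Men",   "20-39"): [42, 45, 39],
    ("Men",   "40-59"): [38, 40, 41],
    ("Men",   "60+"):   [33, 35, 31],
    ("Women", "20-39"): [44, 46, 43],
    ("Women", "40-59"): [40, 39, 42],
    ("Women", "60+"):   [36, 34, 35],
}

genders = sorted({g for g, _ in scores})   # levels of the first factor
ages = sorted({a for _, a in scores})      # levels of the second factor
print(len(genders), len(ages), len(scores))  # 2 3 6
```

The number of cells is always the product of the numbers of levels, which is why the design is named by that product (2 × 3).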

A two-way ANOVA always involves two independent variables. Each independent variable, or factor, is made up of, or defined by, two or more elements called levels. Sometimes factors are called independent variables and sometimes they are called main effects. When looked at simultaneously, the levels of the first factor and the levels of the second factor create the conditions of the study to be compared. Each of these conditions is referred to as a cell.

2. DEPENDENT VARIABLE. The continuous variable that is, or is presumed to be, the result (outcome) of the manipulation of the independent variable(s).

In the two-way (two-factor) ANOVA, there are two independent variables (factors) and a single dependent variable. Changes in the dependent variable are, or are presumed to be, the result of changes in the independent variables. Generally, the first independent variable is the variable in the rows (typically identified as J), and the second variable is in the columns (typically identified as K). We will use a new symbol called the dot subscript to denote the means of the rows and columns. For example, X̄_1. is the mean of all observations in the first row of the data matrix, averaged across all columns, and X̄_.1 is the mean of all observations in the first column of the data matrix, averaged across the rows. The mean of the observations in the first cell of the matrix, corresponding to the first row and the first column, is denoted X̄_11. In general, X̄_jk is the cell mean for the jth row and the kth column, X̄_j. is the row mean for the jth row, and X̄_.k is the column mean for the kth column.

## ADVANTAGES OF A FACTORIAL DESIGN

One reason that we do not use multiple one-way ANOVAs instead of a single two-way ANOVA is the same as the reason for carrying out a one-way ANOVA rather than multiple t-tests: to do so would involve analyzing part of the same data twice, with an increased risk of committing a Type I error.
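The dot-subscript means described above can be sketched with NumPy. The data matrix below (J = 2 rows, K = 3 columns, n = 4 observations per cell) is invented for illustration; averaging over different axes of the array produces the cell, row, column, and grand means.

```python
import numpy as np

# Toy data matrix: J = 2 rows, K = 3 columns, n = 4 observations per cell
# (all values invented for illustration).
X = np.array([
    [[24, 26, 23, 25], [27, 28, 26, 29], [40, 42, 41, 39]],  # row 1
    [[21, 23, 22, 20], [24, 25, 23, 26], [28, 29, 27, 30]],  # row 2
], dtype=float)

cell_means = X.mean(axis=2)       # X-bar_jk: one mean per cell
row_means = X.mean(axis=(1, 2))   # X-bar_j. : averaged across all columns
col_means = X.mean(axis=(0, 2))   # X-bar_.k : averaged across all rows
grand_mean = X.mean()             # X-bar_.. : overall mean

print(cell_means[0, 0], row_means[0], col_means[0], grand_mean)
```

The dot in the subscript marks the dimension that has been averaged over, which is exactly what passing that axis to `mean` does.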
However, there is one additional and very important reason for carrying out a factorial ANOVA in preference to multiple one-way ANOVA tests: carrying out multiple one-way ANOVAs would not enable the researcher to test for any interactions between the independent variables. Therefore, a factorial ANOVA is carried out in preference to multiple one-way ANOVAs both to avoid an increased risk of committing a Type I error and to enable both main and interaction effects to be tested. A researcher may decide to study the effects of K levels of a single independent variable (or factor) on a dependent variable. Such a study assumes that the effects of other potentially important variables are the same over the K levels of this independent variable.

An alternative is to investigate the effects of one independent variable on the dependent variable in conjunction with one or more additional independent variables. This would be called a factorial design. There are several advantages to using a factorial design:

1. One is efficiency (or economy). With simultaneous analysis of the two independent variables, we are in essence carrying out two separate research studies concurrently. In addition to investigating how different levels of the two independent variables affect the dependent variable, we can test whether levels of one independent variable affect the dependent variable in the same way across the levels of the second independent variable. If the effect is not the same, we say there is an interaction between the two independent variables. Thus, with the two-factor design, we can study the effects of the individual independent variables, called the main effects, as well as the interaction effect. Because we average the effects of one variable across the levels of the other variable, a two-variable factorial requires fewer participants than would two one-way designs for the same degree of power.

2. Another advantage is control over a second variable, gained by including it in the design as an independent variable.

3. A third advantage of factorial designs is that they allow greater generalizability of the results. Factorial designs allow for a much broader interpretation of the results, and at the same time give us the ability to say something meaningful about the results for each of the independent variables separately.

4. The fourth (and perhaps the most important) advantage of factorial designs is that they make it possible to investigate the interaction of two or more independent variables.
Because, in the real world of research, the effect of a single independent variable is rarely unaffected by one or more other independent variables, the study of interaction among the independent variables may be the more important objective of an investigation. If only single-factor studies were conducted, the study of interaction among independent variables would be impossible.

## MODELS IN THE TWO-WAY ANOVA

In an ANOVA, there are two specific types of models that describe how we choose the levels of our independent variables. We can obtain the levels of the treatment (independent) variable in at least two different ways: we could, and most often do, deliberately select them, or we could sample them at random. The way in which the levels are derived has important implications for the generalizations we might draw from our study.

If the levels of an independent variable (factor) were selected by the researcher because they were of particular interest and/or were all possible levels, it is a fixed model (fixed factor or effect). In other words, the levels did not constitute random samples from some larger population of levels. The treatment levels are deliberately selected and will remain constant from one replication to another. Generalization of such a model can be made only to the levels tested.

Although most designs used in behavioral science research are fixed, there is another model we can use. If the levels of an independent variable (factor) are randomly selected from a larger population of levels, that variable follows a random model (random factor or effect). The treatment levels are randomly selected, and if we replicated the study we would again choose the levels randomly and would most likely have a whole new set of levels. Results can be generalized to the population of levels from which the levels of the independent variable were randomly selected.

Applicable to the factorial design, if the levels of one of the independent variables are fixed and the levels of the second independent variable are random, we have a mixed-effects model. In a mixed model, the results relative to the random effects can be generalized to the population of levels from which the levels were selected for the investigation; the results relative to the fixed effect can be generalized only to the specific levels selected.
## HYPOTHESES FOR THE TWO-WAY ANOVA

The null hypothesis for the J row population means is

- H₀: µ_1. = µ_2. = … = µ_J.
- That is, there is no difference among the J row means.
- The alternative hypothesis is that the J row µs are not all equal to each other; that is, at least one pair or combination of J row means differs.

The null hypothesis for the K column population means is

- H₀: µ_.1 = µ_.2 = … = µ_.K
- That is, there is no difference among the K column means.
- The alternative hypothesis is that the K column µs are not all equal to each other; that is, at least one pair or combination of K column means differs.

The null hypothesis for the JK interaction is

- H₀: all (µ_jk − µ_j. − µ_.k + µ) = 0
- That is, there is no difference in the JK cell means that cannot be explained by the differences among the row means, the column means, or both.
- The alternative hypothesis is that the pattern of differences among the cell µs in the first column (or the first row) fails to describe accurately the pattern of differences among the cell µs in at least one other column (row). That is, there are differences among the cell population means that cannot be attributed to the main effects. In other words, there is an interaction between the two independent variables.

## ASSUMPTIONS UNDERLYING THE TWO-WAY ANOVA

A two-way ANOVA is a parametric test, and as such, it shares similar assumptions with all other parametric tests. Recall that parametric statistics are used with data that have certain characteristics (they approximate a normal distribution) and are used to test hypotheses about the population. In addition to the assumptions listed below, the dependent variable should be measured on an interval (continuous) scale and the factors (independent variables) should be measured on a categorical or discrete scale. The importance of the assumptions lies in the use of the F distribution as the appropriate sampling distribution for testing the null hypotheses. Similar to the assumptions underlying the one-way ANOVA, the assumptions underlying the two-way (factorial) ANOVA are:

1. The samples are independent, random samples from defined populations. This is commonly referred to as the assumption of independence. Independence and randomness are methodological concerns; they are dealt with (or should be dealt with) when a study is set up, when data are collected, and when results are generalized beyond the participants and conditions of the researcher's investigation.
Although violations of the independence and randomness assumptions can ruin a study, there is no way to use the study's sample data to test the validity of these prerequisite conditions. This assumption is therefore evaluated by examining the research design of the study.

2. The scores on the dependent variable are normally distributed in the population. This is commonly referred to as the assumption of normality. This assumption can be tested in certain circumstances and should be tested using a procedure such as the Shapiro-Wilk test.

3. The population variances in all cells of the factorial design are equal. This is commonly referred to as the assumption of homogeneity of variance. This assumption can be tested in certain circumstances and should be tested using a procedure such as Levene's test of homogeneity.

The consequences of violating one or more of these assumptions are similar to those for the one-way ANOVA. As is the case for the one-way ANOVA, the two-way ANOVA is robust with respect to violations of the assumptions, particularly when there are large and equal numbers of observations in each cell of the factorial.
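As a sketch of how the normality and homogeneity-of-variance assumptions might be checked in practice (assuming SciPy is available; the cell data below are simulated, not from any real study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated scores for the six cells of a 2 x 3 design (invented means).
cells = [rng.normal(loc=m, scale=3.0, size=8) for m in (25, 27, 41, 22, 24, 28)]

# Normality: Shapiro-Wilk on the residuals (each score minus its cell mean),
# since it is the within-cell scores that should be normally distributed.
residuals = np.concatenate([c - c.mean() for c in cells])
w_stat, p_norm = stats.shapiro(residuals)

# Homogeneity of variance: Levene's test across all cells at once.
lev_stat, p_var = stats.levene(*cells)

# Large p values (e.g., above .05) suggest the assumptions are tenable.
print(round(p_norm, 3), round(p_var, 3))
```

Note that these tests flag violations; they do not prove the assumptions hold, which is why the robustness remarks above still matter.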

Researchers have several options when it becomes apparent that their data sets are characterized by extreme non-normality or heterogeneity of variance. One option is to apply a data transformation before testing any null hypotheses involving main effects or cell means. Different kinds of transformations are available because non-normality and heterogeneity of variance can exist in different forms. It is the researcher's job to choose an appropriate transformation that accomplishes the desired objective of bringing the data into greater agreement with the normality and equal variance assumptions. Ideally, the two independent variables should not correlate significantly with each other; however, each independent variable should correlate significantly with the dependent variable.

## THE MEANING OF MAIN EFFECTS

With the two-way ANOVA, there are two main effects (i.e., one for each of the independent variables or factors). Recall that we refer to the first independent variable as the J rows and the second independent variable as the K columns. For the J (row) main effect, the row means are averaged across the K columns. For the K (column) main effect, the column means are averaged across the J rows.

We can look at the main effects in relation to the research questions for the study. In any two-way ANOVA, the first research question asks whether there is a statistically significant main effect for the factor that corresponds to the rows of the two-dimensional picture of the study. That is, the first research question asks whether the main effect means associated with the first factor are further apart from each other than would be expected by chance alone. There are as many such means as there are levels of the first factor. The second research question in any two-way ANOVA asks whether there is a statistically significant main effect for the factor that corresponds to the columns of the two-dimensional picture of the study.
We would consider the column main effect to be statistically significant if the main effect means for the second factor turn out to be further apart from each other than would be expected by chance alone.

If there are only two levels of either J or K, and the effect is found to be significant, formal post hoc procedures are not necessary; simply examine the group means to interpret the significant difference. If there are three or more levels of either J or K, and the effect is found to be significant, formal post hoc procedures are required to determine which pairs or combinations of means differ. The choice of post hoc (multiple-comparison) procedure depends on several factors, with the key consideration being whether the assumption of homogeneity of variance has been met (e.g., Tukey) or violated (e.g., Games-Howell).

## THE MEANING OF INTERACTION

An interaction between the two factors is present in a two-way ANOVA when the effect of the levels of one factor is not the same across the levels of the other factor (or vice versa). For the two-way ANOVA, a third research question asks whether there is a statistically significant interaction between the two factors involved in the study. The interaction deals with the cell means, not the main effect means. An interaction exists to the extent that the difference between the levels of the first factor changes as we move from level to level of the second factor. There will always be at least four means involved in the interaction of any 2 × 2 ANOVA, six means involved in any 2 × 3 ANOVA, and so on.
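As a sketch of the Tukey follow-up procedure mentioned above for a factor with three or more levels, SciPy (version 1.8 or later) provides `scipy.stats.tukey_hsd` for all pairwise comparisons among group means. The three groups below are simulated, not real data:

```python
import numpy as np
from scipy import stats  # stats.tukey_hsd requires SciPy >= 1.8

rng = np.random.default_rng(1)
# Three simulated groups, e.g. three levels of one significant factor.
g1 = rng.normal(25, 3, 12)
g2 = rng.normal(27, 3, 12)
g3 = rng.normal(41, 3, 12)

res = stats.tukey_hsd(g1, g2, g3)  # every pairwise comparison at once
print(res.pvalue)                  # matrix of Tukey-adjusted p values
```

Games-Howell (for heterogeneous variances) is not in SciPy's stats module; it is available in some third-party packages or can be computed by hand.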

A significant interaction is indicated by examining the F ratio for the interaction and comparing its probability value to the alpha level. If p < α, we reject the null hypothesis and conclude that there is a significant interaction between the two independent variables. Conversely, if p > α, we retain the null hypothesis and conclude that there is not a significant interaction between the two independent variables.

One procedure for examining an interaction is to plot the cell means. The scale of the dependent variable is placed on the vertical (Y) axis; the levels of one of the independent variables are placed on the horizontal (X) axis, and the lines represent the second independent variable. The decision about which variable to place on the X axis is arbitrary and is typically based on the focus of the study, that is, whether we are concerned primarily (or exclusively) with the row (or column) effect (see Simple Main Effects). Regardless of which variable is placed on this axis, the interpretation of the interaction is, for all practical purposes, the same.

If there is no significant interaction between the two independent variables, the lines connecting the cell means will be parallel or nearly parallel (within sampling fluctuation). The plot of a significant interaction (where the lines are not parallel) can have many patterns. If the levels (lines) of one independent variable never cross at any level of the other independent variable, the pattern is called an ordinal interaction, in part because there is a consistent order to the levels. That is, when one group is consistently above the other group, we have an ordinal interaction. If the levels (lines) of one independent variable cross at any level of the other independent variable, the pattern is called a disordinal interaction. That is, an interaction in which group differences reverse their sign at some level of the other variable is referred to as a disordinal interaction.
What we need to keep in mind is that our interpretation is based on the data from the study. We cannot speculate as to what may (or may not) occur beyond the given data points. For example, if we look at the ordinal interaction plot, it would appear that if the lines were to continue, they would eventually cross. This is an interpretation that we cannot make, as we do not have data beyond the data points shown in the plots.

[Figure: side-by-side interaction plots of estimated marginal means of change in GPA for men and women across three note-taking methods (Method 1, Method 2, Control); the left panel shows an ordinal interaction, the right panel a disordinal interaction.]
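The ordinal/disordinal distinction can also be checked numerically from the cell means: an interaction is disordinal when the difference between the levels of one factor reverses sign across the levels of the other factor. A small sketch with invented cell means:

```python
import numpy as np

# Invented cell means for two 2 x 3 designs: rows = levels of factor A,
# columns = levels of factor B.
ordinal    = np.array([[30., 34., 38.],   # A1 stays above A2 everywhere
                       [24., 26., 27.]])
disordinal = np.array([[30., 28., 22.],   # the lines cross between levels
                       [24., 27., 31.]])

def lines_cross(cell_means):
    """True if the sign of the A1 - A2 difference reverses across B."""
    diff = cell_means[0] - cell_means[1]
    return bool(np.any(np.sign(diff[:-1]) != np.sign(diff[1:])))

print(lines_cross(ordinal), lines_cross(disordinal))  # False True
```

This mirrors the plotting rule above: lines that never cross within the observed levels give an ordinal interaction, and nothing is claimed beyond the observed data points.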

## SUMMARY TABLE FOR THE TWO-WAY ANOVA

Summary ANOVA: the general case for the two-way ANOVA.

| Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Variance Estimate (Mean Square, MS) | F Ratio |
|---|---|---|---|---|
| Rows | SS_J | J − 1 | MS_J = SS_J / (J − 1) | F_J = MS_J / MS_W |
| Columns | SS_K | K − 1 | MS_K = SS_K / (K − 1) | F_K = MS_K / MS_W |
| Interaction | SS_JK | (J − 1)(K − 1) | MS_JK = SS_JK / [(J − 1)(K − 1)] | F_JK = MS_JK / MS_W |
| Within cells | SS_W | JK(n − 1) | MS_W = SS_W / [JK(n − 1)] | |
| Total | SS_T | N − 1 | | |

## A MEASURE OF ASSOCIATION (ω², OMEGA SQUARED)

Omega squared is a measure of association between the independent and dependent variables in an ANOVA. The interpretation of ω² is analogous to the interpretation of r² (the coefficient of determination); that is, ω² indicates the proportion of variance in the dependent variable that is accounted for by the levels of the independent variable. The use of ω² can be extended to the two-way ANOVA; the general formula is

ω²_effect = (SS_effect − (df_effect)(MS_W)) / (SS_T + MS_W)

For the (J) row main effect, the formula is:

ω²_J = (SS_J − (J − 1)(MS_W)) / (SS_T + MS_W)

For the (K) column main effect, the formula is:

ω²_K = (SS_K − (K − 1)(MS_W)) / (SS_T + MS_W)

For the (JK) interaction, the formula is:

ω²_JK = (SS_JK − (J − 1)(K − 1)(MS_W)) / (SS_T + MS_W)

Suppose, for example, that we obtained ω² = .13 for the row main effect (gender), ω² = .44 for the column main effect (program length), and ω² = .13 for the interaction, and our dependent variable was a flexibility measure. These data indicate that approximately 13 percent of the variance of the scores on the flexibility measure (dependent variable) can be attributed to the differences between males and females (row independent variable); approximately 44 percent can be attributed to the differences in the length of the exercise programs (column independent variable); and approximately 13 percent can be attributed to the interaction of the two independent variables.
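The general ω² formula above translates directly into a small helper function. The sums of squares in the example call are invented for illustration:

```python
def omega_squared(ss_effect, df_effect, ms_within, ss_total):
    """Omega squared for one effect in a two-way ANOVA:
    (SS_effect - df_effect * MS_W) / (SS_T + MS_W)."""
    return (ss_effect - df_effect * ms_within) / (ss_total + ms_within)

# Invented values: SS_effect = 120 with df = 2, MS_W = 5, SS_T = 795.
print(omega_squared(ss_effect=120.0, df_effect=2,
                    ms_within=5.0, ss_total=795.0))
```

The same function serves the row, column, and interaction effects; only the SS and df arguments change.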

## THE MEANING OF SIMPLE MAIN EFFECTS

Recall that a main effect is the effect of a factor ignoring (averaging across) the other factor. Simple main effects (a.k.a. simple effects) are conditional on the level of the other variable. That is, a simple effect is the effect of one factor (independent variable) at each level of the other factor. The analysis of simple effects can be an important technique for analyzing data that contain significant interactions; in a very real sense, it allows us to tease apart significant interactions. Recall that a significant interaction tells us that the effect of one independent variable depends on the level of the other independent variable. Accordingly, we seldom look at simple effects unless a significant interaction is present. As a general rule, you should only look at those simple effects in which you are interested.

We can look at the J row simple effect (e.g., A) at each of the levels of the K column (e.g., B), and/or the K column simple effect (B) at each of the levels of the J row (A). For example, if we have a 2 × 3 ANOVA, our simple effects could look like the following:

- Simple effects of A: A at B1, A at B2, A at B3
- Simple effects of B: B at A1, B at A2

To test the simple main effects, we use the error term from the overall analysis (MS_W, the estimate of error variance). MS_W continues to be based on all the data because it is a better estimate with more degrees of freedom. The sums of squares (SS) for the simple effects are calculated the same way as any sum of squares, using only the data that we are interested in. The degrees of freedom (df) for the simple effects are calculated in the same way as for the corresponding main effects, since the number of means we are comparing is the same. Post hoc follow-up procedures (dependent on the number of levels) are needed to understand the meaning of the simple effect results.
We must continue to parcel out our calculations until we get to pairwise comparisons (i.e., X̄_i compared to X̄_k). When we find significant pairwise differences, we will need to calculate an effect size for each of the significant pairs, which will need to be calculated by hand. An examination of the group means will tell us which group performed significantly higher than the other. For example, using the following formula:

ES = (X̄_i − X̄_j) / √MS_W

Note that X̄_i − X̄_j (which can also be written as X̄_i − X̄_k) is the mean difference of the two groups (pair) under consideration. This value can be calculated by hand or found in the Contrast Results (K Matrix) table, as the value of the Difference (Estimate − Hypothesized). MS_W is the Within Groups Mean Square value (a.k.a. Mean Square Within or MS_error), which is found in the ANOVA Summary Table (or Test Results table).
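The effect-size formula above is easy to compute by hand or in code. The two means below come from the worked example later in this document, but the MS_W value of 9.0 is invented for illustration:

```python
import math

def pairwise_es(mean_i, mean_j, ms_within):
    """Standardized mean difference using the pooled error term:
    ES = (X-bar_i - X-bar_j) / sqrt(MS_W)."""
    return (mean_i - mean_j) / math.sqrt(ms_within)

# Means 41.00 and 28.25 appear in the Results example; MS_W = 9 is invented.
print(pairwise_es(41.00, 28.25, 9.0))  # (41.00 - 28.25) / 3 = 4.25
```

Using the pooled MS_W in the denominator keeps the effect sizes for all pairwise comparisons on the same scale.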

## TESTING THE NULL HYPOTHESIS FOR THE TWO-WAY ANOVA

1. State the hypotheses (null, H₀, and alternative, Hₐ). There is a null and alternative hypothesis for each main effect and for the interaction.
2. Set the criterion for rejecting H₀ (initial alpha level).
3. Check the assumptions:
   - Independence (design criteria)
   - Normality (checked with the Shapiro-Wilk test, standardized skewness, etc.)
   - Homogeneity of variance (checked with Levene's test of homogeneity and the variance ratio)
4. Compute the test statistics (F):
   - Row (J) main effect
   - Column (K) main effect
   - Interaction (JK) effect
5. Decide whether to reject or retain H₀.
   - If the interaction is significant, follow up with simple main effects.
     - If a simple main effect is significant (controlling for Type I error) and has more than 2 levels, follow up with post hoc procedures for pairwise comparisons. If a pairwise comparison is significant (controlling for Type I error), follow up with an effect size (using the appropriate error term).
     - If a simple main effect is significant (controlling for Type I error) and has 2 levels, review the group means and follow up with an effect size (using the appropriate error term).
   - If the interaction is not significant, interpret the main effect(s).
     - If a main effect is significant and has more than 2 levels, follow up with post hoc procedures for pairwise comparisons. If a pairwise comparison is significant, follow up with an effect size (using the appropriate error term).
     - If a main effect is significant and has 2 levels, review the group means and follow up with an effect size (using the appropriate error term).
6. Calculate a measure of association as appropriate and interpret it.
7. Interpret the results.
8. Write a results section based on the findings.
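Steps 4 and 5 can be sketched by computing a balanced two-way ANOVA by hand with NumPy and SciPy, following the standard summary-table formulas. The data matrix below (J = 2, K = 3, n = 4) is invented for illustration:

```python
import numpy as np
from scipy import stats

# Invented balanced data: J = 2 rows, K = 3 columns, n = 4 per cell.
X = np.array([
    [[24, 26, 23, 25], [27, 28, 26, 29], [40, 42, 41, 39]],
    [[21, 23, 22, 20], [24, 25, 23, 26], [28, 29, 27, 30]],
], dtype=float)
J, K, n = X.shape
N = J * K * n

grand = X.mean()
row_m = X.mean(axis=(1, 2))
col_m = X.mean(axis=(0, 2))
cell_m = X.mean(axis=2)

# Sums of squares for the balanced case.
ss_rows = K * n * np.sum((row_m - grand) ** 2)
ss_cols = J * n * np.sum((col_m - grand) ** 2)
ss_cells = n * np.sum((cell_m - grand) ** 2)
ss_inter = ss_cells - ss_rows - ss_cols
ss_within = np.sum((X - cell_m[:, :, None]) ** 2)
ss_total = np.sum((X - grand) ** 2)

df_rows, df_cols = J - 1, K - 1
df_inter, df_within = df_rows * df_cols, J * K * (n - 1)
ms_within = ss_within / df_within

for name, ss, df in [("Rows", ss_rows, df_rows),
                     ("Columns", ss_cols, df_cols),
                     ("Interaction", ss_inter, df_inter)]:
    F = (ss / df) / ms_within
    p = stats.f.sf(F, df, df_within)  # upper-tail p value
    print(f"{name}: F({df}, {df_within}) = {F:.2f}, p = {p:.4f}")
```

A useful check on the arithmetic is that the sums of squares partition: SS_T = SS_J + SS_K + SS_JK + SS_W, and the degrees of freedom sum to N − 1.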

## Results

A two-factor (2 × 3) analysis of variance was conducted to evaluate the effects of the length of an exercise program on the flexibility of female and male participants. The two independent variables in this study are gender and length of exercise program (1 week, 2 weeks, and 3 weeks). The dependent variable is the score on the flexibility measure, with higher scores indicating higher levels of flexibility. The means and standard deviations for the flexibility measure as a function of the two factors are presented in Table 1.

Table 1
Means and Standard Deviations of Flexibility Levels*

|  | 1-Week | 2-Weeks | 3-Weeks | Total |
|---|---|---|---|---|
| Female | 24.63 (4.24) | 27.37 (3.16) | 41.00 (4.32) | (8.23) |
| Male | 21.88 (4.76) | (2.90) | 28.25 (3.28) | (4.56) |
| Total | (4.58) | (2.93) | (7.56) | |

\* Standard deviations are shown in parentheses.

The tests for normality, examining standardized skewness and the Shapiro-Wilk test, indicated that the data were statistically normal. The test for homogeneity of variance was not significant, Levene F(5, 42) = .67, p = .649, indicating that this assumption underlying the application of the two-way ANOVA was met. An alpha level of .05 was used for the initial analyses. The results of the two-way ANOVA indicated a significant main effect for gender, F(1, 42) = 22.37, p < .001, and a significant main effect for length of exercise program, F(2, 42)

= 36.03, p < .001. Additionally, the results show a significant interaction between gender and length of exercise program, F(2, 42) = 11.84, p < .001 (see Table 2), indicating that any differences between the lengths of exercise programs depended on the participants' gender, and that any differences between females and males depended on the length of exercise program (see Figure 1 for a graph of this interaction). Approximately 14% (ω² = .135) of the total variance of the flexibility levels was attributed to the interaction of gender and length of exercise program.

Table 2
Two-way Analysis of Variance for Flexibility Levels

| Source | SS | df | F | p |
|---|---|---|---|---|
| Gender |  | 1 | 22.37 | < .001 |
| Length of Program |  | 2 | 36.03 | < .001 |
| Gender × Length |  | 2 | 11.84 | < .001 |
| Within (Error) |  | 42 |  |  |
| Total |  | 47 |  |  |

[Figure 1. Flexibility levels: mean scores on the flexibility measure (vertical axis) for females and males across the three lengths of exercise program (1 week, 2 weeks, 3 weeks).]

Because the interaction between gender and the length of exercise program was significant, we chose to ignore the two main effects and instead first examined the gender simple main effects, that is, the differences between females and males for each of the three lengths of exercise programs. To control the Type I error rate across the three simple effects, we set the alpha level for each at .0167 (α/3 = .05/3). The only significant difference between females and males was found in the 3-weeks exercise program. A review of the group means indicated that females (M = 41.00) had a significantly higher level of flexibility than males (M = 28.25), F(1, 42) = 43.98, p < .001, ES = . Additionally, we examined the length of exercise program simple main effects, that is, the differences among the three lengths of exercise programs for females and males separately. To control for Type I error across the two simple main effects, we set the alpha level for each at

.025 (α/2 = .05/2). There was a significant difference among the three lengths of exercise programs for females, F(2, 42) = 41.60, p < .001, and for males, F(2, 42) = 6.26, p < .01. Follow-up tests were conducted to evaluate the pairwise differences among the three lengths of exercise programs for females. The alpha level was set at .0083 (.025/3) to control for Type I error over the three pairwise comparisons. The females in the 3-weeks exercise program (M = 41.00) had significantly higher flexibility levels compared to the females in the 1-week program (M = 24.63), F(1, 42) = 72.54, p < .001, ES = 4.26, and the females in the 2-weeks program (M = 27.37), F(1, 42) = 50.22, p < .001, ES = . Follow-up tests were also conducted to evaluate the pairwise differences among the three lengths of exercise programs for males. The alpha level was set at .0083 (.025/3) to control for Type I error over these three pairwise comparisons. The males in the 3-weeks exercise program (M = 28.25) had significantly higher flexibility levels compared to the males in the 1-week program (M = 21.88), F(1, 42) = 11.00, p < .008, ES = .


Hypothesis test In statistics, a hypothesis is a claim or statement about a property of a population. A hypothesis test (or test of significance) is a standard procedure for testing a claim about a property

### Research Methods & Experimental Design

Research Methods & Experimental Design 16.422 Human Supervisory Control April 2004 Research Methods Qualitative vs. quantitative Understanding the relationship between objectives (research question) and

### How to choose a statistical test. Francisco J. Candido dos Reis DGO-FMRP University of São Paulo

How to choose a statistical test Francisco J. Candido dos Reis DGO-FMRP University of São Paulo Choosing the right test One of the most common queries in stats support is Which analysis should I use There

### One-Way ANOVA using SPSS 11.0. SPSS ANOVA procedures found in the Compare Means analyses. Specifically, we demonstrate

1 One-Way ANOVA using SPSS 11.0 This section covers steps for testing the difference between three or more group means using the SPSS ANOVA procedures found in the Compare Means analyses. Specifically,

### CHAPTER 11 CHI-SQUARE: NON-PARAMETRIC COMPARISONS OF FREQUENCY

CHAPTER 11 CHI-SQUARE: NON-PARAMETRIC COMPARISONS OF FREQUENCY The hypothesis testing statistics detailed thus far in this text have all been designed to allow comparison of the means of two or more samples

### UNDERSTANDING THE DEPENDENT-SAMPLES t TEST

UNDERSTANDING THE DEPENDENT-SAMPLES t TEST A dependent-samples t test (a.k.a. matched or paired-samples, matched-pairs, samples, or subjects, simple repeated-measures or within-groups, or correlated groups)

### ANOVA Analysis of Variance

ANOVA Analysis of Variance What is ANOVA and why do we use it? Can test hypotheses about mean differences between more than 2 samples. Can also make inferences about the effects of several different IVs,

### Class 19: Two Way Tables, Conditional Distributions, Chi-Square (Text: Sections 2.5; 9.1)

Spring 204 Class 9: Two Way Tables, Conditional Distributions, Chi-Square (Text: Sections 2.5; 9.) Big Picture: More than Two Samples In Chapter 7: We looked at quantitative variables and compared the

### Analysis of Data. Organizing Data Files in SPSS. Descriptive Statistics

Analysis of Data Claudia J. Stanny PSY 67 Research Design Organizing Data Files in SPSS All data for one subject entered on the same line Identification data Between-subjects manipulations: variable to

### Chapter 16 Multiple Choice Questions (The answers are provided after the last question.)

Chapter 16 Multiple Choice Questions (The answers are provided after the last question.) 1. Which of the following symbols represents a population parameter? a. SD b. σ c. r d. 0 2. If you drew all possible

### Outline. Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test

The t-test Outline Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test - Dependent (related) groups t-test - Independent (unrelated) groups t-test Comparing means Correlation

### MINITAB ASSISTANT WHITE PAPER

MINITAB ASSISTANT WHITE PAPER This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab 17 Statistical Software. One-Way

### UNDERSTANDING MULTIPLE REGRESSION

UNDERSTANDING Multiple regression analysis (MRA) is any of several related statistical methods for evaluating the effects of more than one independent (or predictor) variable on a dependent (or outcome)

### SPSS Tests for Versions 9 to 13

SPSS Tests for Versions 9 to 13 Chapter 2 Descriptive Statistic (including median) Choose Analyze Descriptive statistics Frequencies... Click on variable(s) then press to move to into Variable(s): list

### Unit 29 Chi-Square Goodness-of-Fit Test

Unit 29 Chi-Square Goodness-of-Fit Test Objectives: To perform the chi-square hypothesis test concerning proportions corresponding to more than two categories of a qualitative variable To perform the Bonferroni

### Study Guide for the Final Exam

Study Guide for the Final Exam When studying, remember that the computational portion of the exam will only involve new material (covered after the second midterm), that material from Exam 1 will make

### Section 13, Part 1 ANOVA. Analysis Of Variance

Section 13, Part 1 ANOVA Analysis Of Variance Course Overview So far in this course we ve covered: Descriptive statistics Summary statistics Tables and Graphs Probability Probability Rules Probability

### 12: Analysis of Variance. Introduction

1: Analysis of Variance Introduction EDA Hypothesis Test Introduction In Chapter 8 and again in Chapter 11 we compared means from two independent groups. In this chapter we extend the procedure to consider

### Technology Step-by-Step Using StatCrunch

Technology Step-by-Step Using StatCrunch Section 1.3 Simple Random Sampling 1. Select Data, highlight Simulate Data, then highlight Discrete Uniform. 2. Fill in the following window with the appropriate

### SCHOOL OF HEALTH AND HUMAN SCIENCES DON T FORGET TO RECODE YOUR MISSING VALUES

SCHOOL OF HEALTH AND HUMAN SCIENCES Using SPSS Topics addressed today: 1. Differences between groups 2. Graphing Use the s4data.sav file for the first part of this session. DON T FORGET TO RECODE YOUR

### PASS Sample Size Software

Chapter 250 Introduction The Chi-square test is often used to test whether sets of frequencies or proportions follow certain patterns. The two most common instances are tests of goodness of fit using multinomial

### Examining Differences (Comparing Groups) using SPSS Inferential statistics (Part I) Dwayne Devonish

Examining Differences (Comparing Groups) using SPSS Inferential statistics (Part I) Dwayne Devonish Statistics Statistics are quantitative methods of describing, analysing, and drawing inferences (conclusions)

### Quantitative Data Analysis: Choosing a statistical test Prepared by the Office of Planning, Assessment, Research and Quality

Quantitative Data Analysis: Choosing a statistical test Prepared by the Office of Planning, Assessment, Research and Quality 1 To help choose which type of quantitative data analysis to use either before

### General Procedure for Hypothesis Test. Five types of statistical analysis. 1. Formulate H 1 and H 0. General Procedure for Hypothesis Test

Five types of statistical analysis General Procedure for Hypothesis Test Descriptive Inferential Differences Associative Predictive What are the characteristics of the respondents? What are the characteristics

### Statistiek II. John Nerbonne. October 1, 2010. Dept of Information Science j.nerbonne@rug.nl

Dept of Information Science j.nerbonne@rug.nl October 1, 2010 Course outline 1 One-way ANOVA. 2 Factorial ANOVA. 3 Repeated measures ANOVA. 4 Correlation and regression. 5 Multiple regression. 6 Logistic

### How to Conduct a Hypothesis Test

How to Conduct a Hypothesis Test The idea of hypothesis testing is relatively straightforward. In various studies we observe certain events. We must ask, is the event due to chance alone, or is there some

### 1. What is the critical value for this 95% confidence interval? CV = z.025 = invnorm(0.025) = 1.96

1 Final Review 2 Review 2.1 CI 1-propZint Scenario 1 A TV manufacturer claims in its warranty brochure that in the past not more than 10 percent of its TV sets needed any repair during the first two years

### SPSS: Descriptive and Inferential Statistics. For Windows

For Windows August 2012 Table of Contents Section 1: Summarizing Data...3 1.1 Descriptive Statistics...3 Section 2: Inferential Statistics... 10 2.1 Chi-Square Test... 10 2.2 T tests... 11 2.3 Correlation...

### Data Analysis. Lecture Empirical Model Building and Methods (Empirische Modellbildung und Methoden) SS Analysis of Experiments - Introduction

Data Analysis Lecture Empirical Model Building and Methods (Empirische Modellbildung und Methoden) Prof. Dr. Dr. h.c. Dieter Rombach Dr. Andreas Jedlitschka SS 2014 Analysis of Experiments - Introduction

### Recall this chart that showed how most of our course would be organized:

Chapter 4 One-Way ANOVA Recall this chart that showed how most of our course would be organized: Explanatory Variable(s) Response Variable Methods Categorical Categorical Contingency Tables Categorical

### Testing Hypotheses using SPSS

Is the mean hourly rate of male workers \$2.00? T-Test One-Sample Statistics Std. Error N Mean Std. Deviation Mean 2997 2.0522 6.6282.2 One-Sample Test Test Value = 2 95% Confidence Interval Mean of the

### Research Variables. Measurement. Scales of Measurement. Chapter 4: Data & the Nature of Measurement

Chapter 4: Data & the Nature of Graziano, Raulin. Research Methods, a Process of Inquiry Presented by Dustin Adams Research Variables Variable Any characteristic that can take more than one form or value.

### Chapter 8 Hypothesis Testing Chapter 8 Hypothesis Testing 8-1 Overview 8-2 Basics of Hypothesis Testing

Chapter 8 Hypothesis Testing 1 Chapter 8 Hypothesis Testing 8-1 Overview 8-2 Basics of Hypothesis Testing 8-3 Testing a Claim About a Proportion 8-5 Testing a Claim About a Mean: s Not Known 8-6 Testing

### Variables and Data A variable contains data about anything we measure. For example; age or gender of the participants or their score on a test.

The Analysis of Research Data The design of any project will determine what sort of statistical tests you should perform on your data and how successful the data analysis will be. For example if you decide

### Association Between Variables

Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi

### individualdifferences

1 Simple ANalysis Of Variance (ANOVA) Oftentimes we have more than two groups that we want to compare. The purpose of ANOVA is to allow us to compare group means from several independent samples. In general,

### One-Way Analysis of Variance

One-Way Analysis of Variance Note: Much of the math here is tedious but straightforward. We ll skim over it in class but you should be sure to ask questions if you don t understand it. I. Overview A. We

### Categorical Variables in Regression: Implementation and Interpretation By Dr. Jon Starkweather, Research and Statistical Support consultant

Interpretation and Implementation 1 Categorical Variables in Regression: Implementation and Interpretation By Dr. Jon Starkweather, Research and Statistical Support consultant Use of categorical variables

### Using Excel for inferential statistics

FACT SHEET Using Excel for inferential statistics Introduction When you collect data, you expect a certain amount of variation, just caused by chance. A wide variety of statistical tests can be applied

### An analysis method for a quantitative outcome and two categorical explanatory variables.

Chapter 11 Two-Way ANOVA An analysis method for a quantitative outcome and two categorical explanatory variables. If an experiment has a quantitative outcome and two categorical explanatory variables that

### COMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES.

277 CHAPTER VI COMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES. This chapter contains a full discussion of customer loyalty comparisons between private and public insurance companies

### SPSS Guide: Tests of Differences

SPSS Guide: Tests of Differences I put this together to give you a step-by-step guide for replicating what we did in the computer lab. It should help you run the tests we covered. The best way to get familiar

### c. The factor is the type of TV program that was watched. The treatment is the embedded commercials in the TV programs.

STAT E-150 - Statistical Methods Assignment 9 Solutions Exercises 12.8, 12.13, 12.75 For each test: Include appropriate graphs to see that the conditions are met. Use Tukey's Honestly Significant Difference

### 1 Basic ANOVA concepts

Math 143 ANOVA 1 Analysis of Variance (ANOVA) Recall, when we wanted to compare two population means, we used the 2-sample t procedures. Now let s expand this to compare k 3 population means. As with the

### ANSWERS TO EXERCISES AND REVIEW QUESTIONS

ANSWERS TO EXERCISES AND REVIEW QUESTIONS PART FIVE: STATISTICAL TECHNIQUES TO COMPARE GROUPS Before attempting these questions read through the introduction to Part Five and Chapters 16-21 of the SPSS

### Lecture 7: Binomial Test, Chisquare

Lecture 7: Binomial Test, Chisquare Test, and ANOVA May, 01 GENOME 560, Spring 01 Goals ANOVA Binomial test Chi square test Fisher s exact test Su In Lee, CSE & GS suinlee@uw.edu 1 Whirlwind Tour of One/Two

Mgt 540 Research Methods Data Analysis 1 Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm http://web.utk.edu/~dap/random/order/start.htm

### Inferential Statistics

Inferential Statistics Sampling and the normal distribution Z-scores Confidence levels and intervals Hypothesis testing Commonly used statistical methods Inferential Statistics Descriptive statistics are

### Multivariate Analysis of Variance. The general purpose of multivariate analysis of variance (MANOVA) is to determine

2 - Manova 4.3.05 25 Multivariate Analysis of Variance What Multivariate Analysis of Variance is The general purpose of multivariate analysis of variance (MANOVA) is to determine whether multiple levels

### About Hypothesis Testing

About Hypothesis Testing TABLE OF CONTENTS About Hypothesis Testing... 1 What is a HYPOTHESIS TEST?... 1 Hypothesis Testing... 1 Hypothesis Testing... 1 Steps in Hypothesis Testing... 2 Steps in Hypothesis

### INTERPRETING THE REPEATED-MEASURES ANOVA

INTERPRETING THE REPEATED-MEASURES ANOVA USING THE SPSS GENERAL LINEAR MODEL PROGRAM RM ANOVA In this scenario (based on a RM ANOVA example from Leech, Barrett, and Morgan, 2005) each of 12 participants

### 3.4 Statistical inference for 2 populations based on two samples

3.4 Statistical inference for 2 populations based on two samples Tests for a difference between two population means The first sample will be denoted as X 1, X 2,..., X m. The second sample will be denoted

### Analysis of Variance ANOVA

Analysis of Variance ANOVA Overview We ve used the t -test to compare the means from two independent groups. Now we ve come to the final topic of the course: how to compare means from more than two populations.

### 1 Nonparametric Statistics

1 Nonparametric Statistics When finding confidence intervals or conducting tests so far, we always described the population with a model, which includes a set of parameters. Then we could make decisions

### e = random error, assumed to be normally distributed with mean 0 and standard deviation σ

1 Linear Regression 1.1 Simple Linear Regression Model The linear regression model is applied if we want to model a numeric response variable and its dependency on at least one numeric factor variable.

### Comparing three or more groups (one-way ANOVA...)

Page 1 of 36 Comparing three or more groups (one-way ANOVA...) You've measured a variable in three or more groups, and the means (and medians) are distinct. Is that due to chance? Or does it tell you the

### Unit 24 Hypothesis Tests about Means

Unit 24 Hypothesis Tests about Means Objectives: To recognize the difference between a paired t test and a two-sample t test To perform a paired t test To perform a two-sample t test A measure of the amount

### Statistics Review PSY379

Statistics Review PSY379 Basic concepts Measurement scales Populations vs. samples Continuous vs. discrete variable Independent vs. dependent variable Descriptive vs. inferential stats Common analyses

### IBM SPSS Statistics 23 Part 4: Chi-Square and ANOVA

IBM SPSS Statistics 23 Part 4: Chi-Square and ANOVA Winter 2016, Version 1 Table of Contents Introduction... 2 Downloading the Data Files... 2 Chi-Square... 2 Chi-Square Test for Goodness-of-Fit... 2 With

### II. DISTRIBUTIONS distribution normal distribution. standard scores

Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,

### Chapter 11: Linear Regression - Inference in Regression Analysis - Part 2

Chapter 11: Linear Regression - Inference in Regression Analysis - Part 2 Note: Whether we calculate confidence intervals or perform hypothesis tests we need the distribution of the statistic we will use.

### MULTIPLE REGRESSION WITH CATEGORICAL DATA

DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 86 MULTIPLE REGRESSION WITH CATEGORICAL DATA I. AGENDA: A. Multiple regression with categorical variables. Coding schemes. Interpreting

### Friedman's Two-way Analysis of Variance by Ranks -- Analysis of k-within-group Data with a Quantitative Response Variable

Friedman's Two-way Analysis of Variance by Ranks -- Analysis of k-within-group Data with a Quantitative Response Variable Application: This statistic has two applications that can appear very different,

### CALCULATIONS & STATISTICS

CALCULATIONS & STATISTICS CALCULATION OF SCORES Conversion of 1-5 scale to 0-100 scores When you look at your report, you will notice that the scores are reported on a 0-100 scale, even though respondents

### TABLE OF CONTENTS. About Chi Squares... 1. What is a CHI SQUARE?... 1. Chi Squares... 1. Hypothesis Testing with Chi Squares... 2

About Chi Squares TABLE OF CONTENTS About Chi Squares... 1 What is a CHI SQUARE?... 1 Chi Squares... 1 Goodness of fit test (One-way χ 2 )... 1 Test of Independence (Two-way χ 2 )... 2 Hypothesis Testing

### Simple Tricks for Using SPSS for Windows

Simple Tricks for Using SPSS for Windows Chapter 14. Follow-up Tests for the Two-Way Factorial ANOVA The Interaction is Not Significant If you have performed a two-way ANOVA using the General Linear Model,

### Independent t- Test (Comparing Two Means)

Independent t- Test (Comparing Two Means) The objectives of this lesson are to learn: the definition/purpose of independent t-test when to use the independent t-test the use of SPSS to complete an independent

### Multivariate Analysis of Variance (MANOVA)

Multivariate Analysis of Variance (MANOVA) Aaron French, Marcelo Macedo, John Poulsen, Tyler Waterson and Angela Yu Keywords: MANCOVA, special cases, assumptions, further reading, computations Introduction

### Overview of Non-Parametric Statistics PRESENTER: ELAINE EISENBEISZ OWNER AND PRINCIPAL, OMEGA STATISTICS

Overview of Non-Parametric Statistics PRESENTER: ELAINE EISENBEISZ OWNER AND PRINCIPAL, OMEGA STATISTICS About Omega Statistics Private practice consultancy based in Southern California, Medical and Clinical

### Chicago Booth BUSINESS STATISTICS 41000 Final Exam Fall 2011

Chicago Booth BUSINESS STATISTICS 41000 Final Exam Fall 2011 Name: Section: I pledge my honor that I have not violated the Honor Code Signature: This exam has 34 pages. You have 3 hours to complete this

### Difference of Means and ANOVA Problems

Difference of Means and Problems Dr. Tom Ilvento FREC 408 Accounting Firm Study An accounting firm specializes in auditing the financial records of large firm It is interested in evaluating its fee structure,particularly

### Module 5 Hypotheses Tests: Comparing Two Groups

Module 5 Hypotheses Tests: Comparing Two Groups Objective: In medical research, we often compare the outcomes between two groups of patients, namely exposed and unexposed groups. At the completion of this

### Chapter 3: Nonparametric Tests

B. Weaver (15-Feb-00) Nonparametric Tests... 1 Chapter 3: Nonparametric Tests 3.1 Introduction Nonparametric, or distribution free tests are so-called because the assumptions underlying their use are fewer

### For example, enter the following data in three COLUMNS in a new View window.

Statistics with Statview - 18 Paired t-test A paired t-test compares two groups of measurements when the data in the two groups are in some way paired between the groups (e.g., before and after on the

### 13: Additional ANOVA Topics. Post hoc Comparisons

13: Additional ANOVA Topics Post hoc Comparisons ANOVA Assumptions Assessing Group Variances When Distributional Assumptions are Severely Violated Kruskal-Wallis Test Post hoc Comparisons In the prior

### Effect Size Calculations and Elementary Meta-Analysis

Effect Size Calculations and Elementary Meta-Analysis David B. Wilson, PhD George Mason University August 2011 The End-Game Forest-Plot of Odds-Ratios and 95% Confidence Intervals for the Effects of Cognitive-Behavioral

### Two-Sample T-Tests Assuming Equal Variance (Enter Means)

Chapter 4 Two-Sample T-Tests Assuming Equal Variance (Enter Means) Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample t-tests when the variances of

### Hypothesis Testing - Relationships

- Relationships Session 3 AHX43 (28) 1 Lecture Outline Correlational Research. The Correlation Coefficient. An example. Considerations. One and Two-tailed Tests. Errors. Power. for Relationships AHX43

### Allelopathic Effects on Root and Shoot Growth: One-Way Analysis of Variance (ANOVA) in SPSS. Dan Flynn

Allelopathic Effects on Root and Shoot Growth: One-Way Analysis of Variance (ANOVA) in SPSS Dan Flynn Just as t-tests are useful for asking whether the means of two groups are different, analysis of variance

### Sydney Roberts Predicting Age Group Swimmers 50 Freestyle Time 1. 1. Introduction p. 2. 2. Statistical Methods Used p. 5. 3. 10 and under Males p.

Sydney Roberts Predicting Age Group Swimmers 50 Freestyle Time 1 Table of Contents 1. Introduction p. 2 2. Statistical Methods Used p. 5 3. 10 and under Males p. 8 4. 11 and up Males p. 10 5. 10 and under

### Hypothesis Testing & Data Analysis. Statistics. Descriptive Statistics. What is the difference between descriptive and inferential statistics?

2 Hypothesis Testing & Data Analysis 5 What is the difference between descriptive and inferential statistics? Statistics 8 Tools to help us understand our data. Makes a complicated mess simple to understand.

### Point-Biserial and Biserial Correlations

Chapter 302 Point-Biserial and Biserial Correlations Introduction This procedure calculates estimates, confidence intervals, and hypothesis tests for both the point-biserial and the biserial correlations.

### One-Way Analysis of Variance (ANOVA) Example Problem

One-Way Analysis of Variance (ANOVA) Example Problem Introduction Analysis of Variance (ANOVA) is a hypothesis-testing technique used to test the equality of two or more population (or treatment) means

### Simple Linear Regression Inference

Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not a OLS requirement. Therefore we have: Interpretation

### Factorial Analysis of Variance

Chapter 560 Factorial Analysis of Variance Introduction A common task in research is to compare the average response across levels of one or more factor variables. Examples of factor variables are income

### Regression in SPSS. Workshop offered by the Mississippi Center for Supercomputing Research and the UM Office of Information Technology

Regression in SPSS Workshop offered by the Mississippi Center for Supercomputing Research and the UM Office of Information Technology John P. Bentley Department of Pharmacy Administration University of

### First-year Statistics for Psychology Students Through Worked Examples

First-year Statistics for Psychology Students Through Worked Examples 1. THE CHI-SQUARE TEST A test of association between categorical variables by Charles McCreery, D.Phil Formerly Lecturer in Experimental

### Lecture - 32 Regression Modelling Using SPSS

Applied Multivariate Statistical Modelling Prof. J. Maiti Department of Industrial Engineering and Management Indian Institute of Technology, Kharagpur Lecture - 32 Regression Modelling Using SPSS (Refer