Chapter 7

One-way ANOVA

One-way ANOVA examines equality of population means for a quantitative outcome and a single categorical explanatory variable with any number of levels.

The t-test of Chapter 6 looks at quantitative outcomes with a categorical explanatory variable that has only two levels. The one-way Analysis of Variance (ANOVA) can be used for the case of a quantitative outcome with a categorical explanatory variable that has two or more levels of treatment. The term one-way, also called one-factor, indicates that there is a single explanatory variable (treatment) with two or more levels, and only one level of treatment is applied at any time for a given subject.

In this chapter we assume that each subject is exposed to only one treatment, in which case the treatment variable is being applied between-subjects. For the alternative, in which each subject is exposed to several or all levels of treatment (at different times), we use the term within-subjects; that case is covered in Chapter 14. We use the term two-way or two-factor ANOVA when the levels of two different explanatory variables are being assigned, and each subject is assigned to one level of each factor.

It is worth noting that the situation in which we can choose between one-way ANOVA and an independent-samples t-test is when the explanatory variable has exactly two levels. In that case we always come to the same conclusions regardless of which method we use.

The term analysis of variance is a bit of a misnomer. In ANOVA we use variance-like quantities to study the equality or non-equality of population means. So we are analyzing means, not variances. There are some unrelated methods,
such as variance component analysis, which have variances as the primary focus for inference.

7.1 Moral Sentiment Example

As an example of the application of one-way ANOVA, consider the research reported in "Moral sentiments and cooperation: Differential influences of shame and guilt" by de Hooge, Zeelenberg, and Breugelmans (Cognition & Emotion, 21(5), 2007). As background you need to know that there is a well-established theory of Social Value Orientations or SVO (see Wikipedia for a brief introduction and references). SVOs represent characteristics of people with regard to their basic motivations. In this study a questionnaire called the Triple Dominance Measure was used to categorize subjects into proself and prosocial orientations. In this chapter we will examine simulated data based on the results for the proself individuals.

The goal of the study was to investigate the effects of emotion on cooperation. The study was carried out using undergraduate economics and psychology students in the Netherlands.

The sole explanatory variable is induced emotion. This is a nominal categorical variable with three levels: control, guilt, and shame. Each subject was randomly assigned to one of the three levels of treatment. Guilt and shame were induced in the subjects by asking them to write about a personal experience where they experienced guilt or shame respectively. The control condition consisted of having the subject write about what they did on a recent weekday. (The validity of the emotion induction was tested by asking the subjects to rate how strongly they were feeling a variety of emotions towards the end of the experiment.)

After inducing one of the three emotions, the experimenters had the subjects participate in a one-round computer game that is designed to test cooperation.
Each subject initially had ten coins, with each coin worth 0.50 Euros for the subject but 1 Euro for their partner who is presumably connected separately to the computer. The subjects were told that the partners also had ten coins, each worth 0.50 Euros for themselves but 1 Euro for the subject. The subjects decided how many coins to give to the interaction partner, without knowing how many coins the interaction partner would give. In this game, both participants would earn 10 Euros when both offered all coins to the interaction partner (the
cooperative option). If a cooperator gave all 10 coins but their partner gave none, the cooperator could end up with nothing, and the partner would end up with the maximum of 15 Euros. Participants could avoid the possibility of earning nothing by keeping all their coins to themselves, which is worth 5 Euros plus 1 Euro for each coin their partner gives them (the selfish option). The number of coins offered was the measure of cooperation.

The number of coins offered (0 to 10) is the outcome variable, and is called cooperation. Obviously this outcome is related to the concept of cooperation and is in some sense a good measure of it, but just as obviously, it is not a complete measure of the concept.

Cooperation as defined here is a discrete quantitative variable with a limited range of possible values. As explained below, the Analysis of Variance statistical procedure, like the t-test, is based on the assumption of a Gaussian distribution of the outcome at each level of the (categorical) explanatory variable. In this case, it is judged to be a reasonable approximation to treat cooperation as a continuous variable. There is no hard-and-fast rule, but 11 different values might be considered borderline, while, e.g., 5 different values would be hard to justify as possibly consistent with a Gaussian distribution.

Note that this is a randomized experiment. The levels of treatment (emotion induced) are randomized and assigned by the experimenter. If we do see evidence that cooperation differs among the groups, we can validly claim that induced emotion causes different degrees of cooperation. If we had only measured the subjects' current emotion rather than manipulating it, we could only conclude that emotion is associated with cooperation. Such an association could have other explanations than a causal relationship.
E.g., poor sleep the night before could cause more feelings of guilt and more cooperation, without the guilt having any direct effect on cooperation. (See section 8.1 for more on causality.)

The data can be found in MoralSent.dat. The data look like this:

emotion  cooperation
Control  3
Control  0
Control  0

Typical exploratory data analyses include a tabulation of the frequencies of the levels of a categorical explanatory variable like emotion. Here we see 39 controls, 42 guilt subjects, and 45 shame subjects. Some sample statistics of cooperation broken down by each level of induced emotion are shown in table 7.1, and side-by-side boxplots are shown in figure 7.1.

Figure 7.1: Boxplots of cooperation by induced emotion.

Our initial impression is that cooperation is higher for guilt than either shame or the control condition. The mean cooperation for shame is slightly lower than for the control. In terms of pre-checking model assumptions, the boxplots show fairly symmetric distributions with fairly equal spread (as demonstrated by the comparative IQRs). We see four high outliers for the shame group, but careful thought suggests that this may be unimportant because they are just one unit of measurement (coin) into the outlier region, and that region may be pulled in a bit by the slightly narrower IQR of the shame group.
Induced emotion   Statistic                        Value   Std. Error
Control           Mean
                  95% CI for Mean, Lower Bound      2.48
                  95% CI for Mean, Upper Bound      4.50
                  Median                            3.00
                  Std. Deviation                    3.11
                  Minimum                           0
                  Maximum                           10
                  Skewness
                  Kurtosis
Guilt             Mean
                  95% CI for Mean, Lower Bound      4.37
                  95% CI for Mean, Upper Bound      6.39
                  Median                            6.00
                  Std. Deviation                    3.25
                  Minimum                           0
                  Maximum                           10
                  Skewness
                  Kurtosis
Shame             Mean
                  95% CI for Mean, Lower Bound      2.89
                  95% CI for Mean, Upper Bound      4.66
                  Median                            4.00
                  Std. Deviation                    2.95
                  Minimum                           0
                  Maximum                           10
                  Skewness
                  Kurtosis

Table 7.1: Group statistics for the moral sentiment experiment.
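Statistics like those in Table 7.1 can be computed in a few lines of Python. The sketch below uses simulated stand-in scores, since the actual MoralSent.dat values are not reproduced here; only the group size of 39 controls is taken from the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-in for one group's cooperation scores (0-10 coins);
# the real MoralSent.dat values are not reproduced here.
coop = rng.integers(0, 11, size=39).astype(float)  # 39 "controls"

mean = coop.mean()
sd = coop.std(ddof=1)                    # sample standard deviation
sem = sd / np.sqrt(len(coop))            # standard error of the mean
t_mult = stats.t.ppf(0.975, df=len(coop) - 1)  # 95% CI multiplier

ci_lower, ci_upper = mean - t_mult * sem, mean + t_mult * sem
print(round(mean, 2), round(sd, 2), round(ci_lower, 2), round(ci_upper, 2))
```

Repeating this for each level of induced emotion reproduces the layout of Table 7.1.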
7.2 How one-way ANOVA works

The model and statistical hypotheses

One-way ANOVA is appropriate when the following model holds. We have a single treatment with, say, k levels. Treatment may be interpreted in the loosest possible sense as any categorical explanatory variable. There is a population of interest for which there is a true quantitative outcome for each of the k levels of treatment. The population outcomes for each group have mean parameters that we can label µ_1 through µ_k with no restrictions on the pattern of means. The population variances for the outcome for each of the k groups defined by the levels of the explanatory variable all have the same value, usually called σ², with no restriction other than that σ² > 0. For treatment i, the distribution of the outcome is assumed to follow a Normal distribution with mean µ_i and variance σ², often written N(µ_i, σ²).

Our model assumes that the true deviations of observations from their corresponding group mean parameters, called the errors, are independent. In this context, independence indicates that knowing one true deviation would not help us predict any other true deviation. Because it is common that subjects who have a high outcome when given one treatment tend to have a high outcome when given another treatment, using the same subject twice would violate the independence assumption.

Subjects are randomly selected from the population, and then randomly assigned to exactly one treatment each. The number of subjects assigned to treatment i (where 1 ≤ i ≤ k) is called n_i if it differs between treatments, or just n if all of the treatments have the same number of subjects. For convenience, define N = Σ_{i=1}^{k} n_i, which is the total sample size. (In case you have forgotten, the Greek capital sigma (Σ) stands for summation, i.e., adding. In this case, the notation says that we should consider all values of n_i where i is set to 1, 2, ..., k, and then add them all up.
For example, if we have k = 3 levels of treatment, and the group sample sizes are 12, 11, and 14 respectively, then n_1 = 12, n_2 = 11, n_3 = 14 and N = Σ_{i=1}^{k} n_i = n_1 + n_2 + n_3 = 12 + 11 + 14 = 37.) Because of the random treatment assignment, the sample mean for any treatment group is representative of the population mean for assignment to that group for the entire population.
Technically, the sample group means are unbiased estimators of the population group means when treatment is randomly assigned. The meaning of unbiased here is that the true mean of the sampling distribution of any group sample mean equals the corresponding population mean. Further, under the Normality, independence and equal variance assumptions, it is true that the sampling distribution of Ȳ_i is N(µ_i, σ²/n_i), exactly.

The statistical model for which one-way ANOVA is appropriate is that the (quantitative) outcomes for each group are normally distributed with a common variance (σ²). The errors (deviations of individual outcomes from the population group means) are assumed to be independent. The model places no restrictions on the population group means.

The term assumption in statistics refers to any specific part of a statistical model. For one-way ANOVA, the assumptions are normality, equal variance, and independence of errors. Correct assignment of individuals to groups is sometimes considered to be an implicit assumption.

The null hypothesis is a point hypothesis stating that nothing interesting is happening. For one-way ANOVA, we use H_0: µ_1 = · · · = µ_k, which states that all of the population means are equal, without restricting what the common value is. The alternative must include everything else, which can be expressed as "at least one of the k population means differs from all of the others". It is definitely wrong to use H_A: µ_1 ≠ µ_2 ≠ · · · ≠ µ_k because some cases, such as µ_1 = 5, µ_2 = 5, µ_3 = 10, are neither covered by H_0 nor by this incorrect H_A. You can write the alternative hypothesis as H_A: "Not µ_1 = · · · = µ_k" or "the population means are not all equal". One way to correctly write H_A mathematically is H_A: ∃ i, j : µ_i ≠ µ_j.

This null hypothesis is called the overall null hypothesis and is the hypothesis tested by ANOVA, per se. If we have only two levels of our categorical explanatory
variable, then retaining or rejecting the overall null hypothesis is all that needs to be done in terms of hypothesis testing. But if we have 3 or more levels (k ≥ 3), then we usually need to follow up on rejection of the overall null hypothesis with more specific hypotheses to determine for which population group means we have evidence of a difference. This is called contrast testing, and discussion of it will be delayed until chapter 13.

The overall null hypothesis for one-way ANOVA with k groups is H_0: µ_1 = · · · = µ_k. The alternative hypothesis is that the population means are not all equal.

The F statistic (ratio)

The next step in standard inference is to select a statistic for which we can compute the null sampling distribution and that tends to fall in a different region for the alternative than for the null hypothesis. For ANOVA, we use the F-statistic. The single formula for the F-statistic that is shown in most textbooks is quite complex and hard to understand. But we can build it up in small understandable steps.

Remember that a sample variance is calculated as SS/df, where SS is the sum of squared deviations from the mean and df is the degrees of freedom (see page 69). In ANOVA we work with variances and also variance-like quantities which are not really the variance of anything, but are still calculated as SS/df. We will call all of these quantities mean squares or MS, i.e., MS = SS/df, which is a key formula that you should memorize. Note that these are not really means, because the denominator is the df, not n.

For one-way ANOVA we will work with two different MS values called mean square within-groups, MS_within, and mean square between-groups, MS_between. We know the general formula for any MS, so we really just need to find the formulas for SS_within and SS_between, and their corresponding df.
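The key formula MS = SS/df can be checked numerically; for a single sample it reduces to the ordinary sample variance. A minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical outcome values (not from the study).
y = np.array([2.0, 4.0, 6.0, 8.0])

ss = ((y - y.mean()) ** 2).sum()   # SS: sum of squared deviations from the mean
df = len(y) - 1                    # degrees of freedom
ms = ss / df                       # mean square: MS = SS/df

# For a single sample, MS is exactly the usual sample variance.
print(ms, y.var(ddof=1))
```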
The F statistic denominator: MS_within

MS_within is a pure estimate of σ² that is unaffected by whether the null or alternative hypothesis is true. Consider figure 7.2, which represents the within-group
deviations used in the calculation of MS_within for a simple two-group experiment with 4 subjects in each group. The extension to more groups and/or different numbers of subjects is straightforward.

Figure 7.2: Deviations for within-group sum of squares (Ȳ_1 = 4.25).

The deviation for subject j of group i in figure 7.2 is mathematically equal to Y_ij − Ȳ_i, where Y_ij is the observed value for subject j of group i and Ȳ_i is the sample mean for group i. I hope you can see that the deviations shown (black horizontal lines extending from the colored points to the colored group mean lines) are due to the underlying variation of subjects within a group. The variation has standard deviation σ, so that, e.g., about 2/3 of the time the deviation lines are shorter than σ.

Regardless of the truth of the null hypothesis, for each individual group, MS_i = SS_i/df_i is a good estimate of σ². The value of MS_within comes from a statistically appropriate formula for combining all of the k separate group estimates of σ². It is important to know that MS_within has N − k df.

For an individual group i, SS_i = Σ_{j=1}^{n_i} (Y_ij − Ȳ_i)² and df_i = n_i − 1. We can use some statistical theory beyond the scope of this course to show that in general, MS_within is a good (unbiased) estimate of σ² if it is defined as MS_within = SS_within/df_within
where SS_within = Σ_{i=1}^{k} SS_i, and df_within = Σ_{i=1}^{k} df_i = Σ_{i=1}^{k} (n_i − 1) = N − k.

MS_within is a good estimate of σ² (from our model) regardless of the truth of H_0. This is due to the way SS_within is defined. SS_within (and therefore MS_within) has N − k degrees of freedom, with n_i − 1 coming from each of the k groups.

The F statistic numerator: MS_between

Figure 7.3: Deviations for between-group sum of squares (Ȳ_1 = 4.25).

Now consider figure 7.3, which represents the between-group deviations used in the calculation of MS_between for the same little 2-group, 8-subject experiment as shown in figure 7.2. The single vertical black line is the average of all of the outcome values in all of the treatment groups, usually called either the overall mean or the grand mean. The colored vertical lines are still the group means. The horizontal black lines are the deviations used for the between-group calculations. For each subject we get a deviation equal to the distance (difference) from that subject's group mean to the overall (grand) mean. These deviations are squared and summed to get SS_between, which is then divided by the between-group df, which is k − 1, to get MS_between.

MS_between is a good estimate of σ² only when the null hypothesis is true. In this case we expect the group means to be fairly close together and close to the
grand mean. When the alternative hypothesis is true, as in our current example, the group means are farther apart and the value of MS_between tends to be larger than σ². (We sometimes write this as "MS_between is an inflated estimate of σ²".)

SS_between is the sum of the N squared between-group deviations, where the deviation is the same for all subjects in the same group. The formula is

SS_between = Σ_{i=1}^{k} n_i (Ȳ_i − Ȳ)²

where Ȳ is the grand mean. Because the k unique deviations add up to zero, we are free to choose only k − 1 of them, and then the last one is fully determined by the others, which is why df_between = k − 1 for one-way ANOVA.

Because of the way SS_between is defined, MS_between is a good estimate of σ² only if H_0 is true. Otherwise it tends to be larger. SS_between (and therefore MS_between) has k − 1 degrees of freedom.

The F statistic ratio

It might seem that we only need MS_between to distinguish the null from the alternative hypothesis, but that ignores the fact that we don't usually know the value of σ². So instead we look at the ratio

F = MS_between / MS_within

to evaluate the null hypothesis. Because the denominator is always (under null and alternative hypotheses) an estimate of σ² (i.e., tends to have a value near σ²), and the numerator is either another estimate of σ² (under the null hypothesis) or is inflated (under the alternative hypothesis), it is clear that the (random) values of the F-statistic (from experiment to experiment) tend to fall around 1.0 when
the null hypothesis is true and are bigger when the alternative is true. So if we can compute the sampling distribution of the F-statistic under the null hypothesis, then we will have a useful statistic for distinguishing the null from the alternative hypotheses, where large values of F argue for rejection of H_0.

The F-statistic, defined by F = MS_between/MS_within, tends to be larger if the alternative hypothesis is true than if the null hypothesis is true.

Null sampling distribution of the F statistic

Using the technical condition that the quantities MS_between and MS_within are independent, we can apply probability and statistics techniques (beyond the scope of this course) to show that the null sampling distribution of the F-statistic is that of the F-distribution (see section 3.9.7). The F-distribution is indexed by two numbers called the numerator and denominator degrees of freedom. This indicates that there are (infinitely) many F-distribution pdf curves, and we must specify these two numbers to select the appropriate one for any given situation.

Not surprisingly, the null sampling distribution of the F-statistic for any given one-way ANOVA is the F-distribution with numerator degrees of freedom equal to df_between = k − 1 and denominator degrees of freedom equal to df_within = N − k. Note that this indicates that the kinds of F-statistic values we will see if the null hypothesis is true depend only on the number of groups and the numbers of subjects, and not on the values of the population variance or the population group means. It is worth mentioning that the degrees of freedom are measures of the size of the experiment, where bigger experiments (more groups or more subjects) have bigger df.

We can quantify "large" for the F-statistic by comparing it to its null sampling distribution, which is the specific F-distribution with degrees of freedom matching the numerator and denominator of the F-statistic.
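The mean-square and F computations described above can be sketched end-to-end in Python. The toy data below are hypothetical (chosen so that group 1 has mean 4.25, like figure 7.2), and the hand computation is checked against scipy's one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Hypothetical toy data: two groups of four subjects each (not the study's values).
groups = [np.array([3.0, 4.0, 5.0, 5.0]),   # group 1, mean 4.25
          np.array([6.0, 7.0, 8.0, 7.0])]   # group 2, mean 7.0

k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# Within-group SS: squared deviations of subjects from their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
# Between-group SS: n_i times squared deviation of each group mean from the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

ms_within = ss_within / (N - k)     # df_within = N - k
ms_between = ss_between / (k - 1)   # df_between = k - 1
F = ms_between / ms_within

# scipy's one-way ANOVA agrees with the hand computation.
F_scipy, p = stats.f_oneway(*groups)
print(F, F_scipy, p)
```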
Figure 7.4: A variety of F-distribution pdfs (including df = 1,10; 2,10; and 3,10).

The F-distribution is a non-negative distribution in the sense that F values, being ratios of squared quantities, can never be negative numbers. The distribution is skewed to the right and continues to have some tiny probability no matter how large F gets. The mean of the distribution is s/(s − 2), where s is the denominator degrees of freedom. So if s is reasonably large then the mean is near 1.00, but if s is small, then the mean is larger (e.g., k = 2, n = 4 per group gives s = 3 + 3 = 6, and a mean of 6/4 = 1.5).

Examples of F-distributions with different numerator and denominator degrees of freedom are shown in figure 7.4. These curves are probability density functions, so the regions on the x-axis where the curve is high are the values most likely to occur. And the area under the curve between any two F values is equal to the probability that a random variable following the given distribution will fall between those values. Although very low F values are more likely for, say, the
F(1,10) distribution than the F(3,10) distribution, very high values are also more common for the F(1,10) than for the F(3,10) distribution, though this may be hard to see in the figure. The bigger the numerator and/or denominator df, the more concentrated the F values will be around 1.00.

Figure 7.5: The F(3,10) pdf and the p-value for F = 2.0.

Inference: hypothesis testing

There are two ways to use the null sampling distribution of F in one-way ANOVA: to calculate a p-value or to find the critical value (see below).

A close-up of the F-distribution with 3 and 10 degrees of freedom is shown in figure 7.5. This is the appropriate null sampling distribution of an F-statistic for an experiment with a quantitative outcome and one categorical explanatory variable (factor) with k = 4 levels (each subject gets one of four different possible treatments) and with 14 subjects divided among the 4 groups. A vertical line marks an F-statistic of 2.0 (the observed value from some experiment). The p-value for this result is the chance of getting an F-statistic greater than or equal to
2.0 when the null hypothesis is true, which is the shaded area. The total area is always 1.0; in this example the shaded area, and therefore the p-value, is larger than 0.05 (not significant at the usual 0.05 alpha level).

Figure 7.6: The F(3,10) pdf and its alpha = 0.05 critical value.

Figure 7.6 shows another close-up of the F-distribution with 3 and 10 degrees of freedom. We will use this figure to define and calculate the F-critical value. For a given alpha (significance level), usually 0.05, the F-critical value is the F value above which 100α% of the null sampling distribution occurs. For experiments with 3 and 10 df, and using α = 0.05, the figure shows the F-critical value. Note that this value can be obtained from a computer before the experiment is run, as long as we know how many subjects will be studied and how many levels the explanatory variable has. Then when the experiment is run, we can calculate the observed F-statistic and compare it to F-critical. If the statistic is smaller than the critical value, we retain the null hypothesis because the p-value must be bigger than α, and if the statistic is equal to or bigger than the critical value, we reject the null hypothesis because the p-value must be equal to or smaller than α.
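Both uses of the null sampling distribution, the p-value and the critical value, are one-line computations with scipy (a sketch using the F(3,10) example with an observed F of 2.0):

```python
from scipy import stats

dfn, dfd = 3, 10   # numerator and denominator df from the example
F_obs = 2.0        # observed F-statistic

# p-value: area under the F(3,10) density to the right of the observed F.
p_value = stats.f.sf(F_obs, dfn, dfd)

# critical value: the F above which 5% of the null distribution lies.
F_crit = stats.f.ppf(0.95, dfn, dfd)

# F_obs below F_crit means we retain H0 at alpha = 0.05.
print(p_value, F_crit, F_obs < F_crit)
```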
Inference: confidence intervals

It is often worthwhile to express what we have learned from an experiment in terms of confidence intervals. In one-way ANOVA it is possible to make confidence intervals for population group means or for differences in pairs of population group means (or other more complex comparisons). We defer discussion of the latter to chapter 13.

Construction of a confidence interval for a population group mean is usually done as an appropriate plus-or-minus amount around a sample group mean. We use MS_within as an estimate of σ², and then for group i, the standard error of the mean is √(MS_within/n_i). As discussed in section 6.2.7, the multiplier for the standard error of the mean is the so-called quantile of the t-distribution which defines a central area equal to the desired confidence level. This comes from a computer or a table of t-quantiles. For a 95% CI this is often symbolized as t_{0.025, df}, where df is the degrees of freedom of MS_within, (N − k). Construct the CI as the sample mean plus or minus (SEM times the multiplier).

In a nutshell: In one-way ANOVA we calculate the F-statistic as the ratio MS_between/MS_within. Then the p-value is calculated as the area under the appropriate null sampling distribution of F that is bigger than the observed F-statistic. We reject the null hypothesis if p ≤ α.

7.3 Do it in SPSS

To run a one-way ANOVA in SPSS, use the Analyze menu, select Compare Means, then One-Way ANOVA. Add the quantitative outcome variable to the Dependent List, and the categorical explanatory variable to the Factor box. Click OK to get the output. The dialog box for One-Way ANOVA is shown in figure 7.7. You can also use the Options button to perform descriptive statistics by group, perform a variance homogeneity test, or make a means plot.
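The confidence-interval recipe from section 7.2 (sample group mean plus or minus t_{0.025, N−k} times √(MS_within/n_i)) can be sketched with hypothetical data:

```python
import numpy as np
from scipy import stats

# Hypothetical group data (k = 2 groups; not the study's values).
groups = [np.array([3.0, 4.0, 5.0, 5.0]), np.array([6.0, 7.0, 8.0, 7.0])]
k = len(groups)
N = sum(len(g) for g in groups)

# Pooled within-group variance estimate, MS_within, with N - k df.
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)

# 95% CI for the population mean of group 1:
# mean +/- t-multiplier * SEM, where SEM = sqrt(MS_within / n_1).
g1 = groups[0]
sem = np.sqrt(ms_within / len(g1))
t_mult = stats.t.ppf(0.975, df=N - k)   # t_{0.025, N-k} quantile
ci = (g1.mean() - t_mult * sem, g1.mean() + t_mult * sem)
print(ci)
```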
Figure 7.7: One-Way ANOVA dialog box.

You can use the Contrasts button to specify particular planned contrasts among the levels, or you can use the Post-Hoc button to make unplanned contrasts (corrected for multiple comparisons), usually using the Tukey procedure for all pairs or the Dunnett procedure when comparing each level to a control level. See chapter 13 for more information.

7.4 Reading the ANOVA table

The ANOVA table is the main output of an ANOVA analysis. It always has the source-of-variation labels in the first column, plus additional columns for sum of squares, degrees of freedom, mean square, F, and the p-value (labeled "Sig." in SPSS). For one-way ANOVA, there are always rows for Between Groups variation and Within Groups variation, and often a row for Total variation. In one-way ANOVA there is only a single F-statistic (MS_between/MS_within), and this is shown on the Between Groups row. There is also only one p-value, because there is only one (overall) null hypothesis, namely H_0: µ_1 = · · · = µ_k, and because the p-value comes from comparing the (single) F value to its null sampling distribution. The calculation of MS for the Total row is optional.

Table 7.2 shows the results for the moral sentiment experiment. There are several important aspects to this table that you should understand. First, as discussed above, the Between Groups line refers to the variation of the group means around the grand mean, and the Within Groups line refers to the variation
                 Sum of Squares    df    Mean Square    F    Sig.
Between Groups
Within Groups
Total

Table 7.2: ANOVA for the moral sentiment experiment.

of the subjects around their group means. The Total line refers to variation of the individual subjects around the grand mean. The Mean Square for the Total line is exactly the same as the variance of all of the data, ignoring the group assignments.

In any ANOVA table, the df column refers to the number of degrees of freedom in the particular SS defined on the same line. The MS on any line is always equal to the SS/df for that line. F-statistics are given on the line that has the MS that is the numerator of the F-statistic (ratio). The denominator comes from the MS of the Within Groups line for one-way ANOVA, but this is not always true for other types of ANOVA. It is always true that there is a p-value for each F-statistic, and that the p-value is the area under the null sampling distribution of that F-statistic that is above the (observed) F value shown in the table. Also, we can always tell which F-distribution is the appropriate null sampling distribution for any F-statistic, by finding the numerator and denominator df in the table.

An ANOVA is a breakdown of the total variation of the data, in the form of SS and df, into smaller independent components. For the one-way ANOVA, we break down the deviations of individual values from the overall mean of the data into deviations of the group means from the overall mean, and then deviations of the individuals from their group means. The independence of these sources of deviation results in additivity of the SS and df columns (but not the MS column). So we note that SS_Total = SS_Between + SS_Within and df_Total = df_Between + df_Within. This fact can be used to reduce the amount of calculation, or just to check that the calculations were done and recorded correctly.
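The degrees-of-freedom bookkeeping, and its additivity, can be illustrated with the group sizes reported in section 7.1 (39, 42, and 45 subjects):

```python
# df bookkeeping for the moral sentiment experiment:
# three emotion groups with 39, 42, and 45 subjects (from section 7.1).
n = [39, 42, 45]
k = len(n)                 # number of treatment groups
N = sum(n)                 # total number of subjects

df_between = k - 1
df_within = N - k
df_total = N - 1           # df are additive: df_total = df_between + df_within

print(df_between, df_within, df_total)
```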
Note that we can calculate MS_Total = SS_Total/125, which is the variance of all of the data (thrown together and ignoring the treatment groups). You can see that MS_Total is certainly not equal to MS_Between + MS_Within.

Another use of the ANOVA table is to learn about an experiment when it is not fully described (or to check that the ANOVA was performed and recorded
correctly). Just from this one-way ANOVA table, we can see that there were 3 treatment groups (because df_Between is one less than the number of groups). Also, we can calculate that there were 125 + 1 = 126 subjects in the experiment.

Finally, it is worth knowing that MS_within is an estimate of σ², the variance of outcomes around their group mean. So we can take the square root of MS_within to get an estimate of σ, the standard deviation. Then we know that the majority (about 2/3) of the measurements for each group are within σ of the group mean, and most (about 95%) are within 2σ, assuming a Normal distribution. In this example the estimate of the s.d. is √9.60 = 3.10, so individual subject cooperation values more than 2(3.10) = 6.2 coins from their group means would be uncommon.

You should understand the structure of the one-way ANOVA table, including that MS = SS/df for each line, SS and df are additive, F is the ratio of between- to within-group MS, the p-value comes from the F-statistic and its presumed (under model assumptions) null sampling distribution, and the number of treatments and number of subjects can be calculated from degrees of freedom.

7.5 Assumption checking

Except for the skewness of the shame group, the skewness and kurtosis statistics for all three groups are within 2 SE of zero (see Table 7.1), and that one skewness is only slightly beyond 2 SE from zero. This suggests that there is no evidence against the Normality assumption. The close similarity of the three group standard deviations suggests that the equal variance assumption is OK. And hopefully the subjects are totally unrelated, so the independent errors assumption is OK. Therefore we can accept that the F-distribution used to calculate the p-value from the F-statistic is the correct one, and we believe the p-value.
7.6 Conclusion about moral sentiments

With p < 0.05, we reject the null hypothesis that all three of the group population means of cooperation are equal. We therefore conclude that differences
in mean cooperation are caused by the induced emotions, and that among control, guilt, and shame, at least two of the population means differ. Again, we defer looking at which groups differ to chapter 13. (A complete analysis would also include examination of residuals for additional evaluation of possible non-normality or unequal spread.)

The F-statistic of one-way ANOVA is easily calculated by a computer. The p-value is calculated from the F null sampling distribution with matching degrees of freedom. But only if we believe that the assumptions of the model are (approximately) correct should we believe that the p-value was calculated from the correct sampling distribution, and only then is it valid.
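As a final sketch, the entire analysis takes only a few lines in Python. The data below are simulated stand-ins (group sizes from section 7.1; the group means are hypothetical values mimicking the reported pattern of higher cooperation under guilt), not the study's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated stand-ins for the three emotion groups (sizes 39, 42, 45 from
# section 7.1); means and SD are hypothetical, not the real MoralSent.dat data.
control = rng.normal(3.5, 3.0, size=39)
guilt   = rng.normal(5.4, 3.0, size=42)
shame   = rng.normal(3.8, 3.0, size=45)

# Overall one-way ANOVA test of H0: all three population means are equal.
F, p = stats.f_oneway(control, guilt, shame)
reject_H0 = p <= 0.05
print(F, p, reject_H0)
```

A real analysis would precede this with the exploratory checks of section 7.1 and the assumption checks of section 7.5.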