INTERPRETING THE REPEATED-MEASURES ANOVA


INTERPRETING THE REPEATED-MEASURES ANOVA USING THE SPSS GENERAL LINEAR MODEL PROGRAM (RM ANOVA)

In this scenario (based on a RM ANOVA example from Leech, Barrett, and Morgan, 2005), each of 12 participants has evaluated four products (e.g., four brands of DVD players) on 1-7 Likert-type scales (1 = very low quality to 7 = very high quality). There is one independent variable with four (4) test conditions (P1, P2, P3, and P4), where P1-P4 represent the four different products. The dependent variable is the consumers' preference rating of the products. Because each participant rated all four products, the appropriate analysis for determining whether a difference in preference exists is the RM ANOVA.

Frequencies

Statistics
                          p1      p2      p3      p4
N         Valid           12      12      12      12
          Missing         0       0       0       0
Mean                      4.67    3.58    3.83    3.00
Std. Deviation            1.923   1.929   1.642   1.651
Skewness                  -.628   .259    -.274   .291
Std. Error of Skewness    .637    .637    .637    .637

Using the data from the above table, we can test the assumption of normality for the four test occasions. We calculate the standardized skewness for each level by dividing the skewness (statistic) value by its standard error of skewness, and then compare that value against ±3.29. Values in excess of ±3.29 are a concern: if the standardized skewness exceeds ±3.29, we conclude that there is a significant departure from normality, meaning the assumption of normality has not been met.

For this set of data, we find:

P1: -.628 / .637 = -.986
P2:  .259 / .637 =  .407
P3: -.274 / .637 = -.430
P4:  .291 / .637 =  .457

Since none of the standardized skewness values exceeds ±3.29, we conclude that the assumption of normality has been met for these data.
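The standardized-skewness screen above is easy to reproduce outside SPSS. The sketch below (Python, assuming only the summary values printed in the table) recomputes the standard error of skewness from the sample size and flags any level whose standardized skewness exceeds ±3.29:

```python
import math

def skewness_se(n):
    # Standard error of skewness as SPSS reports it:
    # sqrt(6n(n-1) / ((n-2)(n+1)(n+3)))
    return math.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))

n = 12
se = skewness_se(n)  # ~ .637 for n = 12, matching the table

# Skewness statistics from the Frequencies output above
skews = {"p1": -0.628, "p2": 0.259, "p3": -0.274, "p4": 0.291}

std_skews = {level: sk / se for level, sk in skews.items()}
for level, z in std_skews.items():
    verdict = "departs from normality" if abs(z) > 3.29 else "acceptable"
    print(f"{level}: standardized skewness = {z:+.3f} ({verdict})")
```

Dividing by the unrounded standard error gives values that differ from the handout's hand calculations only in the third decimal place.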

General Linear Model

This table identifies the four levels of the within-subjects (repeated-measures) independent variable, product. For each level (P1 to P4), there is a rating of 1-7, which is the dependent variable.

Within-Subjects Factors
product    Dependent Variable
1          p1
2          p2
3          p3
4          p4

The next table provides the descriptive statistics (means, standard deviations, and Ns) for the average rating for each of the four levels.

Descriptive Statistics
           Mean   Std. Deviation   N
Product 1  4.67   1.923            12
Product 2  3.58   1.929            12
Product 3  3.83   1.642            12
Product 4  3.00   1.651            12

The next table presents four similar multivariate tests of the within-subjects effect (i.e., whether the four products are rated equally). These are actually a form of MANOVA (multivariate analysis of variance). Because the multivariate tests do not require the sphericity assumption, they can be used when that assumption is violated. Typically, we would report the Wilks' Lambda line of information, which indicates significance among the four test conditions (levels), along with the multivariate partial eta-squared.

Notice that in this case all four tests have the same F, dfs, and significance level: F(3, 9) = 19.065, p < .001. The significant F means that there is a difference somewhere in how the products are rated. The multivariate tests can be used whether or not sphericity is violated. However, if the epsilons are high, indicating that one is close to achieving sphericity, the multivariate tests may be less powerful (less likely to indicate statistical significance) than the corrected univariate repeated-measures ANOVA.

Multivariate Tests(b)
Effect: product
                     Value   F          Hypothesis df  Error df  Sig.   Partial Eta Squared
Pillai's Trace       .864    19.065(a)  3.000          9.000     .000   .864
Wilks' Lambda        .136    19.065(a)  3.000          9.000     .000   .864
Hotelling's Trace    6.355   19.065(a)  3.000          9.000     .000   .864
Roy's Largest Root   6.355   19.065(a)  3.000          9.000     .000   .864
a. Exact statistic
b. Design: Intercept; Within Subjects Design: product
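With a single within-subjects factor there is only one nonzero eigenvalue, so the four multivariate statistics are tied together algebraically, which is why they all yield the same F here. A small sketch (Python; the inputs are the rounded numbers from the table above) shows the conversions: Pillai's trace and the partial eta-squared equal 1 − Λ, Hotelling's trace (and Roy's largest root) equal (1 − Λ)/Λ, and the exact F is Hotelling's trace times error df over hypothesis df:

```python
# Relationships among the multivariate statistics when there is a
# single within-subjects factor (one nonzero eigenvalue).
wilks = 0.136                  # Wilks' Lambda (rounded, from the table)
df_hyp, df_err = 3, 9          # k - 1 = 3; N - k + 1 = 12 - 4 + 1 = 9

pillai = 1 - wilks             # ~ .864, also the partial eta-squared
hotelling = (1 - wilks) / wilks        # ~ 6.35 (Roy's largest root too)
F = hotelling * df_err / df_hyp        # ~ 19.06; SPSS's 19.065 uses unrounded Lambda
print(pillai, hotelling, F)
```

Small discrepancies against the table come only from starting with the rounded Λ = .136.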

This next table shows the test of an assumption of the univariate approach to repeated-measures ANOVA known as sphericity. As is commonly the case, the Mauchly statistic is significant and thus the assumption is violated: the Sig. (p) value of .001 is less than the a priori alpha level of .05. The epsilons, which are measures of the degree of sphericity, are less than 1.0, also indicating that the sphericity assumption is violated. The "lower-bound" is the lowest value that epsilon could take; the highest possible epsilon is always 1.0. When sphericity is violated, you can either use the multivariate results or use the epsilon values to adjust the numerator and denominator degrees of freedom. Typically, use the Greenhouse-Geisser epsilon when epsilon is less than .75, and the Huynh-Feldt epsilon when epsilon is greater than .75.

Because Mauchly's Test of Sphericity is significant here, these data violate the sphericity assumption of the univariate approach to repeated-measures ANOVA. Thus, we should either use the multivariate approach, use the appropriate non-parametric test (Friedman), or correct the univariate approach with the Greenhouse-Geisser (or similar) adjustment.

Mauchly's Test of Sphericity(b)
Within Subjects Effect: product
                                            Epsilon(a)
Mauchly's W  Approx. Chi-Square  df  Sig.   Greenhouse-Geisser  Huynh-Feldt  Lower-bound
.101         22.253              5   .001   .544                .626         .333

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept; Within Subjects Design: product
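The epsilon decision rule described above (Greenhouse-Geisser below .75, Huynh-Feldt otherwise) is mechanical enough to sketch directly (Python; the .75 cutoff and the table's epsilons are the only inputs):

```python
def choose_correction(gg_epsilon, hf_epsilon, cutoff=0.75):
    """Pick a sphericity correction per the rule of thumb above:
    Greenhouse-Geisser when its epsilon < .75, else Huynh-Feldt."""
    if gg_epsilon < cutoff:
        return ("Greenhouse-Geisser", gg_epsilon)
    return ("Huynh-Feldt", hf_epsilon)

# Epsilons from Mauchly's table: GG = .544, HF = .626
name, eps = choose_correction(gg_epsilon=0.544, hf_epsilon=0.626)
print(name, eps)  # Greenhouse-Geisser, 0.544 for these data
```

Because .544 is well below .75, the Greenhouse-Geisser correction is the one carried into the next table.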

In the next table, note that 3 and 33 would be the dfs to use if sphericity were not violated. Because the sphericity assumption is violated, we use the Greenhouse-Geisser correction, which multiplies both dfs by epsilon (.544 in this case): 3 × .544 = 1.632 and 33 × .544 = 17.953. Even with this adjustment, the within-subjects effect (of product) is significant, F(1.632, 17.953) = 23.629, p < .001, as were the multivariate tests. This means that the ratings of the four products are significantly different. However, the overall (product) F does not tell us which pairs of products have significantly different means. We also see that the omnibus effect size for this analysis (the partial eta-squared) is .682, which indicates that approximately 68% of the total variance in the dependent variable is accounted for by the variance in the independent variable.

Tests of Within-Subjects Effects
Source                              Type III SS  df      Mean Square  F       Sig.  Partial Eta Squared
product         Sphericity Assumed  17.229       3       5.743        23.629  .000  .682
                Greenhouse-Geisser  17.229       1.632   10.556       23.629  .000  .682
                Huynh-Feldt         17.229       1.877   9.178        23.629  .000  .682
                Lower-bound         17.229       1.000   17.229       23.629  .001  .682
Error(product)  Sphericity Assumed  8.021        33      .243
                Greenhouse-Geisser  8.021        17.953  .447
                Huynh-Feldt         8.021        20.649  .388
                Lower-bound         8.021        11.000  .729
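The corrected degrees of freedom and the partial eta-squared in the table can both be reproduced from values already shown (a sketch in Python; partial eta-squared for a within-subjects effect is SS_effect / (SS_effect + SS_error)):

```python
eps = 0.544                      # Greenhouse-Geisser epsilon from Mauchly's table
df1, df2 = 3 * eps, 33 * eps     # 1.632 and ~17.95 (SPSS's 17.953 uses unrounded eps)

ss_effect, ss_error = 17.229, 8.021
partial_eta_sq = ss_effect / (ss_effect + ss_error)   # ~ .682

# The F ratio itself is unchanged by the correction: numerator and
# denominator dfs are scaled by the same epsilon, so the MS ratio
# stays at 23.629 (up to rounding).
F = (ss_effect / df1) / (ss_error / df2)
print(round(df1, 3), round(df2, 3), round(partial_eta_sq, 3), round(F, 3))
```

This also makes clear why the Sig. column, not the F column, is what changes across the correction rows: only the reference distribution's dfs move.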

FYI: SPSS also provides tests of within-subjects contrasts, such as the example shown below. These contrasts can serve as a viable method of interpreting the repeated-measures main effect; the default polynomial contrasts shown here test for linear, quadratic, and cubic trends across the ordered levels.

Tests of Within-Subjects Contrasts
Source          product    Type III SS  df  Mean Square  F       Sig.  Partial Eta Squared
product         Linear     13.538       1   13.538       26.532  .000  .707
                Quadratic  .188         1   .188         3.667   .082  .250
                Cubic      3.504        1   3.504        20.883  .001  .655
Error(product)  Linear     5.613        11  .510
                Quadratic  .563         11  .051
                Cubic      1.846        11  .168

The following table shows the Tests of Between-Subjects Effects. Since we do not have a between-subjects/groups variable, we will ignore it for this scenario.

Tests of Between-Subjects Effects
Transformed Variable: Average
Source     Type III SS  df  Mean Square  F       Sig.  Partial Eta Squared
Intercept  682.521      1   682.521      56.352  .000  .837
Error      133.229      11  12.112
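The linear-trend sum of squares in the contrasts table can be checked by hand from the cell means. For four equally spaced levels the orthogonal polynomial coefficients for the linear trend are (-3, -1, 1, 3), and SS = n(Σ c_j M_j)² / Σ c_j². A sketch in Python; note that the unrounded means (56/12, 43/12, 46/12, 36/12) are inferred here from the rounded table values and n = 12, which is an assumption:

```python
# Orthogonal polynomial (linear) contrast over 4 equally spaced levels.
# Means are reconstructed from the rounded descriptives (4.67, 3.58,
# 3.83, 3.00 with n = 12 imply totals 56, 43, 46, 36) -- an assumption.
n = 12
means = [56 / 12, 43 / 12, 46 / 12, 36 / 12]
coefs = [-3, -1, 1, 3]

estimate = sum(c * m for c, m in zip(coefs, means))      # -4.75
ss_linear = n * estimate**2 / sum(c**2 for c in coefs)   # ~ 13.538, as in the table
print(round(ss_linear, 3))
```

The match with the table's 13.538 supports the reconstructed totals, but the raw data would be needed to verify the quadratic and cubic error terms the same way.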

The following is an illustration of the profile plot for the four products. It can be used to assist in interpretation of the output.

Profile Plots
[Figure: Estimated Marginal Means of MEASURE_1 — a line plot of the estimated marginal mean rating (y-axis, approximately 3.0 to 5.0) across the four product levels (x-axis, 1 to 4).]

Because we found a significant within-subjects main effect, we will conduct Fisher's Protected t test as our post hoc multiple-comparisons procedure. This test uses a Bonferroni adjustment to control for Type I error, that is, alpha divided by the number of pairwise comparisons.

Fisher's Protected t Test (Dependent t Test) Syntax

T-TEST PAIRS = p1 p1 p1 p2 p2 p3 WITH p2 p3 p4 p3 p4 p4 (PAIRED)
  /CRITERIA = CI(.95)
  /MISSING = ANALYSIS.

T-Test

Descriptive statistics (mean, standard deviation, standard error of the mean, and sample size) for each of the pairs are shown in the next table.

Paired Samples Statistics
                   Mean  N   Std. Deviation  Std. Error Mean
Pair 1  Product 1  4.67  12  1.923           .555
        Product 2  3.58  12  1.929           .557
Pair 2  Product 1  4.67  12  1.923           .555
        Product 3  3.83  12  1.642           .474
Pair 3  Product 1  4.67  12  1.923           .555
        Product 4  3.00  12  1.651           .477
Pair 4  Product 2  3.58  12  1.929           .557
        Product 3  3.83  12  1.642           .474
Pair 5  Product 2  3.58  12  1.929           .557
        Product 4  3.00  12  1.651           .477
Pair 6  Product 3  3.83  12  1.642           .474
        Product 4  3.00  12  1.651           .477

NOTE: The Paired Samples Correlations table (the bivariate correlation for each pair) is not used in interpreting the pairwise comparisons of mean differences, so it has been removed from this handout to save space.

The following table shows the Paired Samples Test output that serves as Fisher's Protected t Test for the pairwise comparisons. We will use the Bonferroni adjustment to control for Type I error; keep in mind that other methods are also available. Since there are six (6) comparisons, we will use an alpha level of .0083 (α/6 = .05/6 = .0083) to determine significance. As we can see, Pair 1 (P1 vs. P2), Pair 2 (P1 vs. P3), Pair 3 (P1 vs. P4), Pair 5 (P2 vs. P4), and Pair 6 (P3 vs. P4) were all significant at the .0083 alpha level, indicating significant pairwise differences. An examination of the means shows which products were rated significantly higher: P1 (M = 4.67) was rated significantly higher than P2 (M = 3.58), P3 (M = 3.83), and P4 (M = 3.00), and P4 was rated significantly lower than P2 and P3. We will also follow up with the calculation of effect size. For the ES, keep in mind that we use the error term (MS_Res) from the repeated-measures ANOVA (MS_Res = .447) in our calculation.

Paired Samples Test
                               Paired Differences
                               Mean    Std. Deviation  Std. Error Mean  95% CI Lower  95% CI Upper  t       df  Sig. (2-tailed)
Pair 1  Product 1 - Product 2  1.083   .669            .193             .659          1.508         5.613   11  .000
Pair 2  Product 1 - Product 3  .833    .835            .241             .303          1.364         3.458   11  .005
Pair 3  Product 1 - Product 4  1.667   .985            .284             1.041         2.292         5.863   11  .000
Pair 4  Product 2 - Product 3  -.250   .622            .179             -.645         .145          -1.393  11  .191
Pair 5  Product 2 - Product 4  .583    .515            .149             .256          .911          3.924   11  .002
Pair 6  Product 3 - Product 4  .833    .389            .112             .586          1.081         7.416   11  .000

ES = (Mean_i - Mean_k) / sqrt(MS_Res) = (Mean difference) / sqrt(.447) = (Mean difference) / .6686

Pair 1 (M_Diff = 1.083): ES = 1.62
Pair 2 (M_Diff = .833):  ES = 1.25
Pair 3 (M_Diff = 1.667): ES = 2.49
Pair 5 (M_Diff = .583):  ES = .87
Pair 6 (M_Diff = .833):  ES = 1.25
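The Bonferroni cutoff and the effect sizes above follow directly from the table values. A sketch in Python (the pair labels are mine; MS_Res = .447 is the Greenhouse-Geisser error mean square from the Tests of Within-Subjects Effects table):

```python
import math

alpha, n_comparisons = 0.05, 6
bonferroni_alpha = alpha / n_comparisons      # .05 / 6 = .0083

ms_res = 0.447                                # RM ANOVA error term (MS_Res)
pairs = {                                     # (mean difference, 2-tailed p)
    "P1-P2": (1.083, 0.000), "P1-P3": (0.833, 0.005), "P1-P4": (1.667, 0.000),
    "P2-P3": (-0.250, 0.191), "P2-P4": (0.583, 0.002), "P3-P4": (0.833, 0.000),
}

for label, (diff, p) in pairs.items():
    es = diff / math.sqrt(ms_res)             # ES = mean difference / sqrt(MS_Res)
    sig = "significant" if p < bonferroni_alpha else "not significant"
    print(f"{label}: ES = {es:+.2f} ({sig} at alpha = {bonferroni_alpha:.4f})")
```

Only P2 vs. P3 (p = .191) fails the .0083 criterion, matching the interpretation above.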