Chapter 14: Repeated Measures Analysis of Variance (ANOVA)




First of all, you need to recognize the difference between a repeated measures (or dependent groups) design and the between groups (or independent groups) design. In an independent groups design, each participant is exposed to only one of the treatment levels and then provides one response on the dependent variable. However, in a repeated measures design, each participant is exposed to every treatment level and provides a response on the dependent variable after each treatment. Thus, if a participant has provided more than one score on the dependent variable, you know that you're dealing with a repeated measures design.

Comparing the Independent Groups ANOVA and the Repeated Measures ANOVA

The fact that the scores in each treatment condition come from the same participants has an important impact on the between-treatment variability found in the MS Between (MS Treatment). In an independent groups design, the variability in the MS Between arises from three sources: treatment effects, individual differences, and random variability. Imagine, for instance, a single-factor independent groups design with three levels of the factor. As seen below, the three group means vary.

          a1      a2      a3
           3       7       9
           5       6       8
           2       9       9
           6       7       7
           4       8       9
           3       7       8
Mean    3.83    7.33    8.33

As you should recall, the variability among the group means determines the MS Between. In this case, MS Between = 33.5, which is the variance of the group means (5.583) times the sample size (6).

Why do the group means differ? One source of variability, individual differences, emerges because the scores in each group come from different people. Thus, even with random assignment to conditions, the group means could differ from one another because of individual differences. And the more variability due to individual differences in the population, the greater the variability both within groups and between groups. Another source of variability, random effects, should play a fairly small role. Nonetheless, because there will be some random variability, it could influence the three group means. Finally, you should imagine that your treatment will have an impact on the means, which is the treatment effect that you set out to examine in your experiment.

Given the sources of variability in the MS Between, you need to construct an MS Error that involves individual differences and random variability. Thus, your F ratio would be:

F = (Treatment Effect + Individual Differences + Random Variability) / (Individual Differences + Random Variability)
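The arithmetic just described (MS Between as the variance of the group means times the sample size) can be checked with a short script. This is an illustrative sketch of my own, not part of the original handout; the variable names are mine.

```python
# Minimal sketch: MS Between equals the variance of the group means times n.
import numpy as np

a1 = [3, 5, 2, 6, 4, 3]
a2 = [7, 6, 9, 7, 8, 7]
a3 = [9, 8, 9, 7, 9, 8]

n = len(a1)                                   # 6 scores per condition
group_means = [np.mean(g) for g in (a1, a2, a3)]

ms_between = np.var(group_means, ddof=1) * n  # 5.583 * 6
print([round(m, 2) for m in group_means])     # [3.83, 7.33, 8.33]
print(round(ms_between, 1))                   # 33.5
```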

When treatment effects are absent, your F ratio would be roughly 1.0. As the treatment effects increased, your F ratio would grow larger than 1.0. In the case of these data, the F ratio would be fairly large, as seen in the StatView source table below:

ANOVA Table for Score
            DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
A            2           67.000        33.500    25.769    <.0001   51.538   1.000
Residual    15           19.500         1.300

Means Table for Score
Effect: A
      Count    Mean   Std. Dev.   Std. Err.
a1        6   3.833       1.472        .601
a2        6   7.333       1.033        .422
a3        6   8.333        .816        .333

Imagine, now, that you have the same three conditions and the same 18 scores, but now presume that they come from only 6 participants in a repeated measures design. Even though the MS Between would be identical, in a repeated measures design that variability is not influenced by individual differences. Thus, the MS Between of 33.5 would come from treatment effects and random effects. In order to construct an appropriate F ratio, you now need to develop an error term that contains only random variability. The logic of the procedure we will use is to take the error term that would be constructed were these data from an independent groups design (which would include individual differences and random variability) and remove the portion due to individual differences, which leaves behind the random variability that we want in our error term. Conceptually, then, our F ratio would be comprised of the components seen below:

F = (Treatment Effect + Random Variability) / Random Variability

Remember, however, that even though the components in the numerator of the F ratio differ in the independent groups and repeated measures ANOVAs, the computations are identical. That is, regardless of the nature of the design, the formula for SS Between is:

SS_Treatment = Σ(T²/n) − G²/N
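As an illustration (a sketch of my own, using the column totals from the example data, not code from the handout), the SS Treatment formula can be computed directly from the treatment totals T and the grand total G:

```python
# Minimal sketch: SS Treatment = Σ(T²/n) − G²/N, using the column totals
# from the example data above.
T = [23, 44, 50]          # treatment (column) totals for a1, a2, a3
n = 6                     # scores per treatment
G = sum(T)                # grand total = 117
N = n * len(T)            # total number of scores = 18

ss_treatment = sum(t**2 / n for t in T) - G**2 / N
df_treatment = len(T) - 1
print(ss_treatment, ss_treatment / df_treatment)   # 67.0 and 33.5
```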

And the formula for df Between is:

df_Treatment = k − 1

Furthermore, you'll still need to compute the SS Error for the independent groups ANOVA (which is just the sum of the SS for each condition) and the df Error for the independent groups ANOVA (which is just n − 1 for each condition times the number of conditions). However, because this old error term contains both individual differences and random variability, we need to estimate and remove the contribution of individual differences.

We estimate the contribution of individual differences using the same logic as we use when computing the variability among treatments. That is, we treat each participant as the level of a factor (think of the factor as "Subject" or "Participant"). If you think of the computation this way, you'll immediately notice that the formulas for SS Between and SS Subject are identical, with the SS Between working on columns while the SS Subject works on rows. The actual formula would be:

SS_Subject = Σ(P²/k) − G²/N

If you'll look at our data again, to complete your computation you would need to sum across each of the participants and then square those sums before adding them and dividing by the number of treatments. Your computation of SS Subject would be:

        a1      a2      a3      P
         3       7       9     19
         5       6       8     19
         2       9       9     20
         6       7       7     20
         4       8       9     21
         3       7       8     18
Mean  3.83    7.33    8.33

SS_Subject = (19² + 19² + 20² + 20² + 21² + 18²)/3 − 117²/18 = 2287/3 − 760.5 = 1.83

You would then enter the SS Subject into the source table and subtract it from the SS Within (which is the error term from the independent groups design). As seen in the source table below, when you subtract that SS Subject, you are left with SS Error = 17.67. The SS in the denominator of the repeated measures design will always be less than that found in an independent groups design for the same scores.

Source             SS    df     MS       F
Between            67     2   33.5   18.93
Within Groups    19.5    15
  Subject        1.83     5
  Error         17.67    10   1.77
Total            86.5    17
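A short sketch (mine, not the handout's) that reproduces the SS Subject and SS Error values in the table above from the participant totals:

```python
# Minimal sketch: SS Subject = Σ(P²/k) − G²/N, and SS Error = SS Within − SS Subject.
P = [19, 19, 20, 20, 21, 18]   # each participant's total across the 3 treatments
k = 3                          # number of treatments
G = sum(P)                     # 117
N = k * len(P)                 # 18

ss_subject = sum(p**2 / k for p in P) - G**2 / N
ss_within = 19.5               # error term from the independent groups analysis
ss_error = ss_within - ss_subject
print(round(ss_subject, 2), round(ss_error, 2))   # 1.83 17.67
```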

Of course, you need to apply the same procedure to the degrees of freedom. The df Within Groups for the independent groups design must be reduced by the df Subject. The df Subject is simply:

df_Subject = n − 1

Just as you should note the parallel between the SS Between and the SS Subject, you should also note the parallel between the df Between and the df Subject. Because you remove the df Subject, the df in the error term for the repeated measures design will always be less than the df in the error term for an independent groups design for the same scores. Furthermore, it will always be true that the df Error in a repeated measures design is the product of the df Between and the df Subject.

For completeness, below is the source table that StatView would generate for these data using a repeated measures ANOVA:

ANOVA Table for A
                            DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
Subject                      5            1.833          .367
Category for A               2           67.000        33.500    18.962     .0004   37.925    .999
Category for A * Subject    10           17.667         1.767

Means Table for A
Effect: Category for A
      Count    Mean   Std. Dev.   Std. Err.
a1        6   3.833       1.472        .601
a2        6   7.333       1.033        .422
a3        6   8.333        .816        .333

You should note the differences between the source tables that you would generate doing the analyses as shown in your Gravetter & Wallnau textbook and that generated by StatView. First of all, the SS and df columns are reversed. But more important, you need to note that the first row is the Subject effect, the second row is the Treatment effect (called Category for A), and the third row is the Error effect (Random), which appears as Category for A * Subject. Thus, the F ratio appears in the second row, but is the expected ratio of the MS Between and the MS Error.
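The degrees-of-freedom bookkeeping described above can be summarized in a few lines (a sketch of my own, using the values from this example):

```python
# Minimal sketch: df bookkeeping for the repeated measures source table.
k, n = 3, 6                          # treatments, participants
df_between = k - 1                   # 2
df_within = k * (n - 1)              # 15 (independent groups error df)
df_subject = n - 1                   # 5
df_error = df_within - df_subject    # 10, which also equals df_between * df_subject

ms_between = 67.0 / df_between       # 33.5
ms_error = 17.67 / df_error          # about 1.77
print(round(ms_between / ms_error, 2))   # F close to 18.96 (18.93 by hand, rounding)
```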

You should also note a perplexing result. Generally speaking, the repeated measures design is more powerful than the independent groups design. Thus, you should expect that the F ratio would be larger for the repeated measures design than it is for the independent groups design. For these data, however, that's not the case. Note that for the independent groups ANOVA, F = 25.8 and for the repeated measures ANOVA, F = 18.9. (For the repeated measures analysis, the difference between the StatView F and the calculator-computed F is due to rounding error.)

What happened? Think, first of all, of the formula for the F ratio. The numerator is identical, whether the analysis is for an independent groups design or a repeated measures design. So for any difference in the F ratio to emerge, it has to come from the denominator. Generally speaking, as seen in the formula below, larger F ratios would come from larger df Error and smaller SS Error.

F = MS_Treatment / (SS_Error / df_Error)

But, for identical data, the df Error will always be smaller for a repeated measures analysis! So, how does the increased power emerge? Again, for identical data, it's also true that the SS Error will always be smaller for a repeated measures analysis. As long as the SS Subject is substantial, the F ratio will be larger for the repeated measures analysis. For these data, however, the SS Subject is actually fairly small, resulting in a smaller F ratio.

Thus, the power of the repeated measures design emerges from the presumption that people will vary. That is, you're betting on substantial individual differences. As you look at the people around you, that presumption is not all that unreasonable.

Use the source table below to determine the break-even point for this data set. What SS Subject would need to be present to give you the exact same F ratio as for the independent groups ANOVA?

Source             SS    df     MS      F
Between            67     2   33.5   25.8
Within Groups    19.5    15
  Subject                 5
  Error                  10
Total            86.5    17

So, as long as you had more than that level of SS Subject, you would achieve a larger F ratio using the repeated measures design.
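To explore the break-even question, you can treat F as a function of SS Subject while holding the other quantities from this example fixed. This is a sketch of my own; the trial SS Subject values in the loop are arbitrary and chosen only for illustration.

```python
# Minimal sketch: F for the repeated measures analysis as a function of SS Subject,
# with SS Within = 19.5, MS Treatment = 33.5, and df Error = 10 held fixed.
def f_repeated(ss_subject, ss_within=19.5, ms_treatment=33.5, df_error=10):
    ms_error = (ss_within - ss_subject) / df_error
    return ms_treatment / ms_error

for ss_s in (1.83, 5.0, 10.0):                 # arbitrary trial values
    print(ss_s, round(f_repeated(ss_s), 1))
# Larger SS Subject -> smaller error term -> larger F.
```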

Testing the Null Hypothesis and Post Hoc Tests for Repeated Measures ANOVAs

You would set up and test the null hypothesis for a repeated measures design just as you would for an independent groups design. That is, for this example, the null and alternative hypotheses would be identical for the two designs:

H0: μ1 = μ2 = μ3
H1: Not H0

To test the null hypothesis for a repeated measures design, you would look up the F Critical with the df Between and the df Error found in your source table. That is, for this example, F_Crit(2,10) = 4.10. If you reject H0, as you would in this case, you would then need to compute a post hoc test to determine exactly which of the conditions differed. Again, the computation of Tukey's HSD would parallel the procedure you used for an independent groups analysis. In this case, for the independent groups design, your Tukey's HSD would be:

HSD = 3.67 × √(1.3 / 6) = 1.71

For the repeated measures design, your Tukey's HSD would be:

HSD = 3.88 × √(1.77 / 6) = 2.1

Ordinarily, of course, your HSD would be smaller for the repeated measures design, due to the typical reduction in the MS Error. For this particular data set, given the lack of individual differences, that's not the case.
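The HSD computations above follow HSD = q × √(MS Error / n). As a quick check (a sketch of my own, using the q values quoted in the text):

```python
# Minimal sketch: Tukey's HSD = q * sqrt(MS_Error / n), with q taken from the
# Studentized range table as quoted above.
import math

def tukey_hsd(q, ms_error, n):
    return q * math.sqrt(ms_error / n)

print(round(tukey_hsd(3.67, 1.30, 6), 2))   # independent groups: 1.71
print(round(tukey_hsd(3.88, 1.77, 6), 2))   # repeated measures: 2.11
```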

A Computational Example

RESEARCH QUESTION: Does behavior modification (response-cost technique) reduce the outbursts of unruly children?

EXPERIMENT: Randomly select 6 participants, who are tested before treatment, then one week, one month, and six months after treatment. The IV is the duration of the treatment. The DV is the number of unruly acts observed.

STATISTICAL HYPOTHESES:
H0: μBefore = μ1Week = μ1Month = μ6Months
H1: Not H0

DECISION RULE: If F_Obt ≥ F_Crit, reject H0. F_Crit(3,15) = 3.29

DATA:

          Before   1 Week   1 Month   6 Months      P
P1             8        2         1          1     12
P2             4        1         1          0      6
P3             6        2         0          2     10
P4             8        3         4          1     16
P5             7        4         3          2     16
P6             6        2         1          1     10
Mean         6.5     2.33      1.67       1.17
T (ΣX)        39       14        10          7     70
ΣX²          265       38        28         11    342
SS          11.5      5.3      11.3        2.8   30.9

SOURCE TABLE:

SOURCE            SS Formula                                   SS    df    MS    F
Between           Σ(T²/n) − G²/N
Within grps       Σ SS in each group
Between subjs     Σ(P²/k) − G²/N
Error             SS Within Groups − SS Between subjects
Total             ΣX² − G²/N

DECISION:

POST HOC TEST:

INTERPRETATION:

EFFECT SIZE:

Suppose that you continued to assess the amount of unruly behavior in the children after the treatment was withdrawn. You assess the number of unruly acts after 12 months, 18 months, 24 months, and 30 months. Suppose that you obtain the following data. What could you conclude? (A software check for both of these exercises is sketched after the source table below.)

          12 Months   18 Months   24 Months   30 Months     P
P1                1           2           2           5    10
P2                2           2           3           4    11
P3                1           3           3           4    11
P4                3           4           4           6    17
P5                2           2           3           5    12
P6                1           2           4           4    11
T (ΣX)           10          15          19          28    72
ΣX²              20          41          63         134

SOURCE            SS Formula                                   SS    df    MS    F
Between           Σ(T²/n) − G²/N
Within grps       Σ SS in each group
Between subjs     Σ(P²/k) − G²/N
Error             SS Within Groups − SS Between subjects
Total             ΣX² − G²/N

DECISION:

POST HOC TEST:

INTERPRETATION:

EFFECT SIZE:
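If you want to check your hand calculations for these two exercises, a small helper like the one below will build the pieces of the repeated measures source table from a participants-by-treatments matrix. This is a sketch of my own; the function name is not from the handout or the textbook.

```python
# Minimal sketch: repeated measures ANOVA components from a matrix whose rows are
# participants and whose columns are treatments.
import numpy as np

def rm_anova_components(data):
    data = np.asarray(data, dtype=float)
    n, k = data.shape                         # participants, treatments
    G, N = data.sum(), data.size
    T = data.sum(axis=0)                      # treatment totals
    P = data.sum(axis=1)                      # participant totals

    ss_total = (data ** 2).sum() - G ** 2 / N
    ss_between = (T ** 2 / n).sum() - G ** 2 / N
    ss_within = ss_total - ss_between
    ss_subject = (P ** 2 / k).sum() - G ** 2 / N
    ss_error = ss_within - ss_subject

    df_between, df_error = k - 1, (k - 1) * (n - 1)
    F = (ss_between / df_between) / (ss_error / df_error)
    return {"SS_Between": ss_between, "SS_Subject": ss_subject,
            "SS_Error": ss_error, "df": (df_between, df_error), "F": F}

# Example: the response-cost data from the first exercise above.
data = [[8, 2, 1, 1], [4, 1, 1, 0], [6, 2, 0, 2],
        [8, 3, 4, 1], [7, 4, 3, 2], [6, 2, 1, 1]]
print(rm_anova_components(data))
```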

An Example to Compare Independent Groups and Repeated Measures ANOVAs

Independent Groups ANOVA

          A1     A2     A3     A4
           1      2      3      4
           1      3      4      5
           2      3      4      6
           4      3      5      6
T (ΣX)     8     11     16     21     56 (G)
ΣX²       22     31     66    113
SS         6    .75      2   2.75     11.5
s²         2    .25    .67    .92

SOURCE            SS    df    MS    F
Between
Error
Total

Repeated Measures ANOVA

A1  A2  A3  A4
Exactly the same as above

SOURCE            SS    df    MS    F
Between
Within Groups
Between Subjs
Error
Total

Repeated Measures Analyses: The Error Term

In a repeated measures analysis, the MS Error is actually the interaction between participants and treatment. However, that won't make much sense to you until we've talked about two-factor ANOVA. For now, we'll simply look at the data that would produce different kinds of error terms in a repeated measures analysis, to give you a clearer understanding of the factors that influence the error term. These examples are derived from the example in your textbook (G&W, p. 464).

Imagine a study in which rats are given each of three types of food rewards (2, 4, or 6 grams) when they complete a maze. The DV is the time to complete the maze. As you can see in the graph below, Participant 1 is the fastest and Participant 6 is the slowest. The differences in average performance represent individual differences. If the 6 lines were absolutely parallel, the MS Error would be 0, so an F ratio could not be computed. So, I've tweaked the data to be sure that the lines were not perfectly parallel. Nonetheless, if performance was as illustrated below, the MS Error would be quite small. The data are seen below in tabular form and then in graphical form.

        2 grams   4 grams   6 grams      P
P1          1.0       1.5       2.0     4.5
P2          2.0       2.5       3.5     8.0
P3          3.0       3.5       5.0    11.5
P4          4.0       5.0       6.0    15.0
P5          5.0       6.5       7.0    18.5
P6          6.0       7.5       9.0    22.5
Mean        3.5      4.42      5.42
s²          3.5      5.44      6.24

[Figure: "Small MS Error." Speed of response (y-axis) plotted against amount of reward in grams (x-axis: 2, 4, 6), one line per participant (Participant 1 through Participant 6).]

The ANOVA on these data would be as seen below. Note that the F-ratio would be significant.

ANOVA Table for Reward
                                 DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
Subject                           5           74.444        14.889
Category for Reward               2           11.028         5.514    37.453    <.0001   74.906   1.000
Category for Reward * Subject    10            1.472          .147

Means Table for Reward
Effect: Category for Reward
             Count    Mean   Std. Dev.   Std. Err.
Reward 2g        6   3.500       1.871        .764
Reward 4g        6   4.417       2.333        .952
Reward 6g        6   5.417       2.498       1.020

Moderate MS Error

Next, keeping all the data the same (so SS Total would be unchanged), and only rearranging data within a treatment (so that the s² for each treatment would be unchanged), I've created greater interaction between participants and treatment. Note that the participant means would now be closer together, which means that the SS Subject is smaller. In the data table below, you'll note that the sums across participants (P) are more similar than in the earlier example.

        2 grams   4 grams   6 grams      P
P1          1.0       1.5       3.5     6.0
P2          2.0       3.5       5.0    10.5
P3          3.0       2.5       2.0     7.5
P4          4.0       6.5       6.0    16.5
P5          5.0       5.0       9.0    19.0
P6          6.0       7.5       7.0    20.5
Mean        3.5      4.42      5.42
s²          3.5      5.44      6.24

[Figure: "Moderate MS Error." Speed of response vs. amount of reward (grams), one line per participant.]

Note that the F-ratio is still significant, though it is much reduced. Note, also, that the MS Treatment is the same as in the earlier example.

ANOVA Table for Reward
                                 DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
Subject                           5           63.111        12.622
Category for Reward               2           11.028         5.514     4.306     .0448    8.612    .606
Category for Reward * Subject    10           12.806         1.281

Means Table for Reward
Effect: Category for Reward
             Count    Mean   Std. Dev.   Std. Err.
Reward 2g        6   3.500       1.871        .764
Reward 4g        6   4.417       2.333        .952
Reward 6g        6   5.417       2.498       1.020

Large MS Error

Next, using the same procedure, I'll rearrange the scores even more, which will produce an even larger MS Error. Note, again, that the SS Subject grows smaller (as the participant means grow closer to one another) and the SS Error grows larger.

        2 grams   4 grams   6 grams      P
P1          1.0       3.5       6.0    10.5
P2          2.0       6.5       9.0    17.5
P3          3.0       7.5       3.5    14.0
P4          4.0       1.5       5.0    10.5
P5          5.0       2.5       7.0    14.5
P6          6.0       5.0       2.0    13.0
Mean        3.5      4.42      5.42
s²          3.5      5.44      6.24

[Figure: "Large MS Error." Speed of response vs. amount of reward (grams), one line per participant.]

ANOVA Table for Reward
                                 DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
Subject                           5           11.778         2.356
Category for Reward               2           11.028         5.514      .860     .4524    1.719    .155
Category for Reward * Subject    10           64.139         6.414

Means Table for Reward
Effect: Category for Reward
             Count    Mean   Std. Dev.   Std. Err.
Reward 2g        6   3.500       1.871        .764
Reward 4g        6   4.417       2.333        .952
Reward 6g        6   5.417       2.498       1.020

Varying Individual Differences

It is possible to keep the MS Error constant, while increasing the MS Subject, as the two examples below illustrate. As you see in the first example, the SS Subject is fairly small and the MS Error is quite small.

[Figure: "Small Individual Differences." Speed of response vs. amount of reward (grams), one line per participant.]

ANOVA Table for Reward
                                 DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
Subject                           5           54.125        10.825
Category for Reward               2           15.250         7.625   305.000    <.0001  610.000   1.000
Category for Reward * Subject    10             .250          .025

Means Table for Reward
Effect: Category for Reward
             Count    Mean   Std. Dev.   Std. Err.
Reward 2g        6   4.500       1.871        .764
Reward 4g        6   5.500       1.871        .764
Reward 6g        6   6.750       1.969        .804

Next, I've decreased the first two participants' scores by a constant amount and increased the last two participants' scores by a constant amount. Because the interaction between participant and treatment is the same, the MS Error is unchanged. However, because the means for the 6 participants are more different than before, the SS Subject increases.

[Figure: "Moderate Individual Differences." Speed of response vs. amount of reward (grams), one line per participant.]

ANOVA Table for Reward
                                 DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
Subject                           5          114.125        22.825
Category for Reward               2           15.250         7.625   305.000    <.0001  610.000   1.000
Category for Reward * Subject    10             .250          .025

Means Table for Reward
Effect: Category for Reward
             Count    Mean   Std. Dev.   Std. Err.
Reward 2g        6   4.500       2.739       1.118
Reward 4g        6   5.500       2.739       1.118
Reward 6g        6   6.750       2.806       1.146
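Because each participant contributes one score per condition, the error term in these tables is literally the Subject x Treatment interaction: SS Total minus SS Treatment minus SS Subject. The sketch below (mine, not the handout's) verifies this for the "small MS Error" data set above.

```python
# Minimal sketch: with one score per cell, SS Error = SS Total - SS Treatment - SS Subject,
# i.e., the Subject x Treatment interaction. Data: the "small MS Error" reward example.
import numpy as np

data = np.array([[1.0, 1.5, 2.0],
                 [2.0, 2.5, 3.5],
                 [3.0, 3.5, 5.0],
                 [4.0, 5.0, 6.0],
                 [5.0, 6.5, 7.0],
                 [6.0, 7.5, 9.0]])   # rows = participants, columns = 2g, 4g, 6g

n, k = data.shape
G, N = data.sum(), data.size
ss_total = (data ** 2).sum() - G ** 2 / N
ss_treatment = (data.sum(axis=0) ** 2 / n).sum() - G ** 2 / N
ss_subject = (data.sum(axis=1) ** 2 / k).sum() - G ** 2 / N
ss_error = ss_total - ss_treatment - ss_subject

print(round(ss_subject, 3), round(ss_treatment, 3), round(ss_error, 3))
# Matches the first StatView table: 74.444, 11.028, 1.472
```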

StatView for Repeated Measures ANOVA: G&W 465

First, enter as many columns (variables) as you have levels of your independent variable. Below left are the data, with each column containing scores for a particular level of the IV. The next step is to highlight all 3 columns and then click on the Compact button. You'll then get the window seen below on the right, which allows you to name the IV (as a compact variable). Note that your data window will now reflect the compacting process, with Stimulus appearing above the 3 columns.

To produce the analysis, choose Repeated Measures ANOVA from the Analyze menu. Move your compacted variable to the Repeated measure box on the left, as seen below left. Then, click on OK to produce the actual analysis, seen below right.

ANOVA Table for Stimulus
                                   DF   Sum of Squares   Mean Square   F-Value   P-Value   Lambda   Power
Subject                             4            6.000         1.500
Category for Stimulus               2           30.000        15.000     4.286     .0543    8.571    .568
Category for Stimulus * Subject     8           28.000         3.500

Means Table for Stimulus
Effect: Category for Stimulus
           Count    Mean   Std. Dev.   Std. Err.
Neutral        5   3.000        .707        .316
Pleasant       5   6.000       2.121        .949
Aversive       5   3.000       1.871        .837

Note that these results are not quite significant, so you would not ordinarily compute a post hoc test. Nonetheless, just to show you how to compute a post hoc test, choose Tukey/Kramer from the Post-hoc tests found on the left, under ANOVA. That will produce the table seen below. It's no surprise that none of the comparisons are significant, given that the overall ANOVA did not produce any significant results.

Tukey/Kramer for Stimulus
Effect: Category for Stimulus
Significance Level: 5%
                       Mean Diff.   Crit. Diff.
Neutral, Pleasant          -3.000         3.380
Neutral, Aversive           0.000         3.380
Pleasant, Aversive          3.000         3.380
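StatView is no longer widely available, so here is a rough equivalent in Python using statsmodels' AnovaRM, run on the "small MS Error" reward data from the error-term section (the data set in this handout for which both the raw scores and the StatView output are shown). This is a sketch of my own, not part of the original handout.

```python
# Minimal sketch: a repeated measures ANOVA in statsmodels, analogous to the
# StatView procedure described above. Data: the "small MS Error" reward example.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

scores = {
    "2g": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "4g": [1.5, 2.5, 3.5, 5.0, 6.5, 7.5],
    "6g": [2.0, 3.5, 5.0, 6.0, 7.0, 9.0],
}
# Long format: one row per participant-by-condition observation.
rows = [{"subject": s, "reward": cond, "speed": val}
        for cond, vals in scores.items()
        for s, val in enumerate(vals)]
df = pd.DataFrame(rows)

result = AnovaRM(df, depvar="speed", subject="subject", within=["reward"]).fit()
print(result.anova_table)   # F(2, 10) of about 37.45, as in the StatView table
```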

Practice Problems

Drs. Dewey, Stink, & Howe were interested in memory for various odors. They conducted a study in which 6 participants were exposed to 10 common food odors (orange, onion, etc.) and 10 common non-food odors (motor oil, skunk, etc.) to see if people are better at identifying one type of odorant or the other. The 20 odors were presented in a random fashion, so that both classes of odors occurred equally often at the beginning of the list, at the end of the list, etc. (Thus, this randomization is a strategy that serves the same function as counterbalancing.) The dependent variable is the number of odors of each class correctly identified by each participant. The data are seen below. Analyze the data and fully interpret the results of this study.

          Food Odors   Non-Food Odors
                   7                4
                   8                6
                   6                4
                   9                7
                   7                5
                   5                3
ΣX (T)            42               29
ΣX²              304              151
SS                10             10.8

Suppose that Dr. Belfry was interested in conducting a study about the auditory capabilities of bats, looking at bats' abilities to avoid wires of varying thickness as they traverse a maze. The DV is the number of times that the bat touches the wires. (Thus, higher numbers indicate an inability to detect the wire.) Complete the source table below and fully interpret the results.

Dr. Richard Noggin is interested in the effect of different types of persuasive messages on a person's willingness to engage in socially conscious behaviors. To that end, he asks his participants to listen to each of four different types of messages (Fear Invoking, Appeal to Conscience, Guilt, and Information Laden). After listening to each message, the participant rates how effective the message was on a scale of 1-7 (1 = very ineffective and 7 = very effective). Complete the source table and analyze the data as completely as you can.

Dr. Beau Peep believes that pupil size increases during emotional arousal. He was interested in testing if the increase in pupil size was a function of the type of arousal (pleasant vs. aversive). A random sample of 5 participants is selected for the study. Each participant views all three stimuli: neutral, pleasant, and aversive photographs. The neutral photograph portrays a plain brick building. The pleasant photograph consists of a young man and woman sharing a large ice cream cone. Finally, the aversive stimulus is a graphic photograph of an automobile accident. Upon viewing each photograph, the pupil size is measured in millimeters. An incomplete source table resulting from analysis of these data is seen below. Complete the source table and analyze the data as completely as possible.

Means Table for Stimulus
Effect: Category for Stimulus
           Count    Mean   Std. Dev.   Std. Err.
Neutral        5   2.600        .548        .245
Pleasant       5   6.400       1.517        .678
Aversive       5   4.400       1.140        .510

Suppose you are interested in studying the impact of duration of exposure to faces on the ability of people to recognize faces. To finesse the issue of the actual durations used, I'll call them Short, Medium, and Long durations. Participants are first exposed to a set of 30 faces for one duration and then tested on their memory for those faces. Then they are exposed to another set of 30 faces for a different duration and then tested. Finally, they are given a final set of 30 faces for the final duration and then tested. The DV for this analysis is the percent Hits (saying "Old" to an Old item). Suppose that the results of the experiment come out as seen below. Complete the analysis and interpret the results as completely as you can. If the results turned out as seen below, what would they mean to you? [15 pts]

Means Table for Duration
Effect: Category for Duration
           Count     Mean   Std. Dev.   Std. Err.
Short         24   43.833       7.257       1.481
Medium        24   47.792       7.342       1.499
Long          24   49.917       6.978       1.424