Chapter 14: Repeated Measures Analysis of Variance (ANOVA)


First of all, you need to recognize the difference between a repeated measures (or dependent groups) design and a between groups (or independent groups) design. In an independent groups design, each participant is exposed to only one of the treatment levels and then provides one response on the dependent variable. In a repeated measures design, however, each participant is exposed to every treatment level and provides a response on the dependent variable after each treatment. Thus, if a participant has provided more than one score on the dependent variable, you know that you're dealing with a repeated measures design.

Comparing the Independent Groups ANOVA and the Repeated Measures ANOVA

The fact that the scores in each treatment condition come from the same participants has an important impact on the between-treatment variability found in the MS_Between (MS_Treatment). In an independent groups design, the variability in the MS_Between arises from three sources: treatment effects, individual differences, and random variability. Imagine, for instance, a single-factor independent groups design with three levels of the factor (a1, a2, a3) and six scores per group. As seen below, the three group means vary.

[Data table of scores and means for groups a1, a2, a3: values lost in transcription.]

As you should recall, the variability among the group means determines the MS_Between. In this case, MS_Between = 33.5, which is the variance of the group means (5.583) times the sample size (6). Why do the group means differ? One source of variability, individual differences, emerges because the scores in each group come from different people. Thus, even with random assignment to conditions, the group means could differ from one another because of individual differences. And the more variability due to individual differences in the population, the greater the variability both within groups and between groups. Another source of variability, random effects, should play a fairly small role.
Nonetheless, because there will be some random variability, it could influence the three group means. Finally, you should imagine that your treatment will have an impact on the means, which is the treatment effect that you set out to examine in your experiment. Given the sources of variability in the MS_Between, you need to construct a MS_Error that involves individual differences and random variability. Thus, your F ratio would be:

F = (Treatment Effect + Individual Differences + Random Variability) / (Individual Differences + Random Variability)
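To make the sources-of-variability story concrete, here is a minimal sketch of an independent groups one-way ANOVA. The scores are made up for illustration (the chapter's actual data did not survive transcription), but the computation mirrors the one described above: MS_Between is the variance of the group means times the sample size.

```python
# Independent groups one-way ANOVA on hypothetical scores (not the chapter's data).
from statistics import mean, variance

groups = [
    [2, 4, 3, 5, 4, 6],    # a1 (made-up scores)
    [5, 7, 6, 8, 7, 9],    # a2
    [8, 9, 10, 11, 9, 13]  # a3
]
n = len(groups[0])   # scores per group
k = len(groups)      # number of groups

# MS_Between: the variance of the group means times the sample size.
group_means = [mean(g) for g in groups]
ms_between = variance(group_means) * n

# MS_Within: pools the sample variance inside each group (equal n per group).
ms_within = mean([variance(g) for g in groups])

F = ms_between / ms_within
print(round(F, 2))
```

With no treatment effect, F hovers around 1.0; here the separated group means drive it well above that.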

When treatment effects are absent, your F ratio would be roughly 1.0. As the treatment effects increase, your F ratio grows larger than 1.0. In the case of these data, the F ratio would be fairly large, as seen in the StatView source table below:

[StatView ANOVA table for Score (rows: A, Residual; columns: DF, Sum of Squares, Mean Square, F-Value, P-Value, Lambda, Power) and means table for a1, a2, a3: values lost in transcription.]

Imagine, now, that you have the same three conditions and the same 18 scores, but now presume that they come from only 6 participants in a repeated measures design. Even though the MS_Between would be identical, in a repeated measures design that variability is not influenced by individual differences. Thus, the MS_Between of 33.5 would come from treatment effects and random effects alone. In order to construct an appropriate F ratio, you now need to develop an error term that contains only random variability. The logic of the procedure we will use is to take the error term that would be constructed were these data from an independent groups design (which would include individual differences and random variability) and remove the portion due to individual differences, leaving behind the random variability that we want in our error term. Conceptually, then, our F ratio would be composed of the components seen below:

F = (Treatment Effect + Random Variability) / (Random Variability)

Remember, however, that even though the components in the numerator of the F ratio differ in the independent groups and repeated measures ANOVAs, the computations are identical. That is, regardless of the nature of the design, the formula for SS_Between is:

SS_Treatment = Σ(T²/n) - G²/N
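The SS_Between formula above works from the treatment totals T, the grand total G, and the counts. A small sketch, using hypothetical treatment totals:

```python
# SS_Between = Σ(T²/n) - G²/N, computed from treatment totals.
def ss_between(totals, n):
    """totals: list of treatment totals T; n: number of scores per treatment."""
    G = sum(totals)        # grand total
    N = n * len(totals)    # total number of scores
    return sum(T**2 for T in totals) / n - G**2 / N

# e.g. hypothetical treatment totals 24, 42, 60 with n = 6 scores each
print(ss_between([24, 42, 60], 6))
```

The same function applies whether the design is independent groups or repeated measures, exactly as the text says.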

And the formula for df_Between is:

df_Treatment = k - 1

Furthermore, you'll still need to compute the SS_Error for the independent groups ANOVA (which is just the sum of the SS for each condition) and the df_Error for the independent groups ANOVA (which is just n - 1 for each condition times the number of conditions). However, because this old error term contains both individual differences and random variability, we need to estimate and remove the contribution of individual differences.

We estimate the contribution of individual differences using the same logic as we use when computing the variability among treatments. That is, we treat each participant as a level of a factor (think of the factor as Subject or Participant). If you think of the computation this way, you'll immediately notice that the formulas for SS_Between and SS_Subject are identical, with the SS_Between working on columns while the SS_Subject works on rows. The actual formula would be:

SS_Subject = Σ(P²/k) - G²/N

If you look at our data again, to complete your computation you would need to sum across each of the participants (yielding each P), square those sums, add them, and divide by the number of treatments.

[Worked computation of SS_Subject from the participant sums: values lost in transcription.]

You would then enter the SS_Subject into the source table and subtract it from the SS_Within (which is the error term from the independent groups design). As seen in the source table below, when you subtract that SS_Subject, you are left with the SS_Error. The SS in the denominator of the repeated measures design will always be less than that found in an independent groups design for the same scores.
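The column/row symmetry described above is easy to see in code. Below, a hypothetical data layout (rows are participants, columns are treatments; these are not the chapter's lost values) is used to compute SS_Within, then SS_Subject, and then the repeated-measures SS_Error by subtraction:

```python
# Removing individual differences from the error term.
# data[i][j] = participant i's score under treatment j (hypothetical values).
data = [
    [2, 5, 8],
    [4, 7, 9],
    [3, 6, 10],
    [5, 8, 11],
    [4, 7, 9],
    [6, 9, 13],
]
n = len(data)       # participants (rows)
k = len(data[0])    # treatments (columns)
N = n * k
G = sum(sum(row) for row in data)   # grand total

# SS_Within: sum of the SS inside each treatment column (Σx² - T²/n per column)
cols = list(zip(*data))
ss_within = sum(sum(x**2 for x in col) - sum(col)**2 / n for col in cols)

# SS_Subject: the SS_Between formula applied to participant rows, Σ(P²/k) - G²/N
ss_subject = sum(sum(row)**2 for row in data) / k - G**2 / N

ss_error = ss_within - ss_subject   # what remains is the random variability
print(ss_within, round(ss_subject, 2), round(ss_error, 2))
```

Because SS_Subject can never be negative, SS_Error can never exceed SS_Within, which is the point made in the text.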

Source          SS    df    MS    F
Between
Within Groups
Subject
Error
Total

Of course, you need to apply the same procedure to the degrees of freedom. The df_WithinGroups for the independent groups design must be reduced by the df_Subject. The df_Subject is simply:

df_Subject = n - 1

Just as you should note the parallel between the SS_Between and the SS_Subject, you should also note the parallel between the df_Between and the df_Subject. Because you remove the df_Subject, the df in the error term for the repeated measures design will always be less than the df in the error term for an independent groups design for the same scores. Furthermore, it will always be true that the df_Error in a repeated measures design is the product of the df_Between and the df_Subject.

For completeness, below is the source table that StatView would generate for these data using a repeated measures ANOVA:

[StatView ANOVA table for A (rows: Subject, Category for A, Category for A * Subject) and means table for a1, a2, a3: values lost in transcription.]

You should note the differences between the source tables that you would generate doing the analyses as shown in your Gravetter & Wallnau textbook and the one generated by StatView. First of all, the SS and df columns are reversed. More important, you need to note that the first row is the Subject effect, the second row is the Treatment effect (called Category for A), and the third row is the Error (random) effect, which appears as Category for A * Subject. Thus, the F ratio appears in the second row, but it is the expected ratio of the MS_Between and the MS_Error.
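The degrees-of-freedom bookkeeping described above can be sketched for generic n participants and k treatments; the identity df_Error = df_Between × df_Subject falls out of the subtraction:

```python
# Degrees of freedom for a repeated-measures design with n participants, k treatments.
def rm_dfs(n, k):
    df_between = k - 1
    df_subject = n - 1
    df_within = k * (n - 1)              # error df for the independent groups design
    df_error = df_within - df_subject    # remove the subject df
    # k(n-1) - (n-1) = (k-1)(n-1): always the product of the other two
    assert df_error == df_between * df_subject
    return df_between, df_subject, df_error

print(rm_dfs(6, 3))
```

For the chapter's running example (6 participants, 3 treatments) this gives df_Between = 2, df_Subject = 5, df_Error = 10.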

You should also note a perplexing result. Generally speaking, the repeated measures design is more powerful than the independent groups design. Thus, you should expect that the F ratio would be larger for the repeated measures design than it is for the independent groups design. For these data, however, that's not the case: for the independent groups ANOVA, F = 25.8, while the repeated measures ANOVA produces a smaller F. (For the repeated measures analysis, the difference between the StatView F and the calculator-computed F is due to rounding error.)

What happened? Think, first of all, of the formula for the F ratio. The numerator is identical whether the analysis is for an independent groups design or a repeated measures design. So any difference in the F ratio has to come from the denominator. Generally speaking, as seen in the formula below, larger F ratios come from larger df_Error and smaller SS_Error:

F = MS_Treatment / (SS_Error / df_Error)

But, for identical data, the df_Error will always be smaller for a repeated measures analysis! So how does the increased power emerge? Again, for identical data, it's also true that the SS_Error will always be smaller for a repeated measures analysis. As long as the SS_Subject is substantial, the F ratio will be larger for the repeated measures analysis. For these data, however, the SS_Subject is actually fairly small, resulting in a smaller F ratio. Thus, the power of the repeated measures design emerges from the presumption that people will vary. That is, you're betting on substantial individual differences. As you look at the people around you, that presumption is not all that unreasonable.

Use the source table below to determine the break-even point for this data set. What SS_Subject would need to be present to give you the exact same F ratio as for the independent groups ANOVA?
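One way to find the break-even point in general is to set the two F ratios equal and solve for SS_Subject. The sketch below does that algebra numerically; the SS_Within value of 36 is made up for illustration, not taken from the chapter's (lost) data.

```python
# Break-even SS_Subject: the value at which the repeated-measures F
# equals the independent-groups F for the same scores.
def breakeven_ss_subject(ss_within, n, k):
    df_within = k * (n - 1)         # independent-groups error df
    df_error = (k - 1) * (n - 1)    # repeated-measures error df
    # F values match when SS_Error/df_error == SS_Within/df_within, i.e.
    # SS_Subject = SS_Within * (1 - df_error/df_within) = SS_Within / k
    return ss_within * (1 - df_error / df_within)

# hypothetical SS_Within = 36 with n = 6 participants, k = 3 treatments
print(breakeven_ss_subject(36, 6, 3))   # one third of SS_Within
```

Any SS_Subject above this value makes the repeated measures F the larger one.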
Source          SS    df    MS    F
Between
Within Groups
Subject                5
Error                  10
Total

So, as long as you had more than that level of SS_Subject, you would achieve a larger F ratio using the repeated measures design.

Testing the Null Hypothesis and Post Hoc Tests for Repeated Measures ANOVAs

You would set up and test the null hypothesis for a repeated measures design just as you would for an independent groups design. That is, for this example, the null and alternative hypotheses would be identical for the two designs:

H0: μ1 = μ2 = μ3
H1: Not H0

To test the null hypothesis for a repeated measures design, you would look up the F_Critical with the df_Between and the df_Error found in your source table. That is, for this example, you would use F_Crit(2, 10). If you reject H0, as you would in this case, you would then need to compute a post hoc test to determine exactly which of the conditions differed. Again, the computation of Tukey's HSD parallels the procedure you used for an independent groups analysis. In this case, for the independent groups design, your Tukey's HSD would be 1.71. For the repeated measures design, your Tukey's HSD would be 2.1. Ordinarily, of course, your HSD would be smaller for the repeated measures design, due to the typical reduction in the MS_Error. For this particular data set, given the lack of individual differences, that's not the case.

A Computational Example

RESEARCH QUESTION: Does behavior modification (a response-cost technique) reduce the outbursts of unruly children?

EXPERIMENT: Randomly select 6 participants, who are tested before treatment, then one week, one month, and six months after treatment. The IV is the duration of the treatment. The DV is the number of unruly acts observed.

STATISTICAL HYPOTHESES:
H0: μBefore = μ1Week = μ1Month = μ6Months
H1: Not H0

DECISION RULE: If F_Obt ≥ F_Crit, reject H0. F_Crit(3, 15) = 3.29

DATA:

[Data table (participants P1-P6 by Before, 1 Week, 1 Month, 6 Months, with column sums T = ΣX, ΣX², and SS): values lost in transcription.]

SOURCE TABLE:

SOURCE           SS Formula                              SS    df    MS    F
Between          Σ(T²/n) - G²/N
Within grps      sum of SS in each group
Between subjs    Σ(P²/k) - G²/N
Error            SS_WithinGroups - SS_BetweenSubjects
Total            ΣX² - G²/N

DECISION:

POST HOC TEST:

INTERPRETATION:

EFFECT SIZE:
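The whole source-table recipe laid out above can be collected into one function. The data layout and scores below are hypothetical stand-ins (rows are participants, columns are treatments), since the handout's data tables were lost:

```python
# Complete repeated-measures ANOVA source table from a scores grid.
# scores[i][j] = participant i's score under treatment j (hypothetical values).
def rm_anova(scores):
    n, k = len(scores), len(scores[0])
    N = n * k
    all_x = [x for row in scores for x in row]
    G = sum(all_x)
    cols = list(zip(*scores))

    ss_total = sum(x**2 for x in all_x) - G**2 / N
    ss_between = sum(sum(c)**2 for c in cols) / n - G**2 / N   # Σ(T²/n) - G²/N
    ss_within = ss_total - ss_between                          # SS inside the groups
    ss_subject = sum(sum(r)**2 for r in scores) / k - G**2 / N # Σ(P²/k) - G²/N
    ss_error = ss_within - ss_subject                          # remove individual differences

    df_between, df_error = k - 1, (k - 1) * (n - 1)
    F = (ss_between / df_between) / (ss_error / df_error)
    return ss_between, ss_error, F

scores = [[2, 5, 8], [4, 7, 9], [3, 6, 10], [5, 8, 11], [4, 7, 9], [6, 9, 13]]
ssb, sse, F = rm_anova(scores)
print(round(ssb, 2), round(sse, 2), round(F, 2))
```

The obtained F would then be compared against F_Crit(df_Between, df_Error), just as the decision rule above describes.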

Suppose that you continued to assess the amount of unruly behavior in the children after the treatment was withdrawn. You assess the number of unruly acts after 12 months, 18 months, 24 months, and 30 months. Suppose that you obtain the following data. What could you conclude?

[Data table (participants P1-P6 by 12, 18, 24, and 30 Months, with column sums T = ΣX and ΣX²): values lost in transcription.]

SOURCE           SS Formula                              SS    df    MS    F
Between          Σ(T²/n) - G²/N
Within grps      sum of SS in each group
Between subjs    Σ(P²/k) - G²/N
Error            SS_WithinGroups - SS_BetweenSubjects
Total            ΣX² - G²/N

DECISION:

POST HOC TEST:

INTERPRETATION:

EFFECT SIZE:

An Example to Compare Independent Groups and Repeated Measures ANOVAs

Independent Groups ANOVA

[Data table (A1, A2, A3, A4, with T = ΣX, G, ΣX², and s²): values lost in transcription.]

SOURCE           SS    df    MS    F
Between
Error
Total

Repeated Measures ANOVA

A1  A2  A3  A4 (exactly the same data as above)

SOURCE           SS    df    MS    F
Between
Within Groups
Between Subjs
Error
Total

Repeated Measures Analyses: The Error Term

In a repeated measures analysis, the MS_Error is actually the interaction between participants and treatment. However, that won't make much sense to you until we've talked about two-factor ANOVA. For now, we'll simply look at data that would produce different kinds of error terms in a repeated measures analysis, to give you a clearer understanding of the factors that influence the error term. These examples are derived from the example in your textbook (G&W, p. 464).

Imagine a study in which rats are given each of three types of food reward (2, 4, or 6 grams) when they complete a maze. The DV is the time to complete the maze. As you can see in the graph below, Participant 1 is the fastest and Participant 6 is the slowest. The differences in average performance represent individual differences. If the 6 lines were absolutely parallel, the MS_Error would be 0, so an F ratio could not be computed. So, I've tweaked the data to be sure that the lines were not perfectly parallel. Nonetheless, if performance were as illustrated below, the MS_Error would be quite small. The data are seen below in tabular form and then in graphical form.

[Data table (P1-P6 by 2, 4, and 6 grams, with means and s) and figure "Small MS_Error" (Speed of Response vs. Amount of Reward in grams, one line per participant): values lost in transcription.]

The ANOVA on these data would be as seen below. Note that the F ratio would be significant.
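The claim that perfectly parallel participant profiles force MS_Error to 0 can be checked numerically. Below, each (hypothetical) participant's scores are a constant shift of the same pattern, so the subject-by-treatment interaction, and with it SS_Error, vanishes:

```python
# Perfectly parallel profiles: each row is the base pattern (1, 3, 5)
# shifted by a per-participant constant, so SS_Error = SS_Within - SS_Subject = 0.
parallel = [[score + shift for score in (1, 3, 5)] for shift in (0, 1, 2, 3)]
n, k = len(parallel), len(parallel[0])
N = n * k
G = sum(sum(row) for row in parallel)

cols = list(zip(*parallel))
ss_within = sum(sum(x**2 for x in c) - sum(c)**2 / n for c in cols)
ss_subject = sum(sum(row)**2 for row in parallel) / k - G**2 / N

# All within-treatment variability is individual differences; nothing is left over.
print(round(ss_within - ss_subject, 10))
```

With SS_Error = 0 the denominator of F is 0, which is why an F ratio could not be computed for literally parallel lines.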

[StatView ANOVA table for Reward (rows: Subject, Category for Reward, Category for Reward * Subject) and means table (Reward 2g, 4g, 6g): values lost in transcription.]

Moderate MS_Error

Next, keeping all the data the same (so SS_Total would be unchanged) and only rearranging scores within a treatment (so that the s² for each treatment would be unchanged), I've created a greater interaction between participants and treatment. Note that the participant means would now be closer together, which means that the SS_Subject is smaller. In the data table below, you'll note that the sums across participants (P) are more similar than in the earlier example.

[Data table (P1-P6 by 2, 4, and 6 grams) and figure "Moderate MS_Error" (Speed of Response vs. Amount of Reward): values lost in transcription.]

Note that the F ratio is still significant, though it is much reduced. Note, also, that the MS_Treatment is the same as in the earlier example.

[StatView ANOVA table for Reward and means table (Reward 2g, 4g, 6g): values lost in transcription.]

Large MS_Error

Next, using the same procedure, I'll rearrange the scores even more, which will produce an even larger MS_Error. Note, again, that the SS_Subject grows smaller (as the participant means grow closer to one another) and the SS_Error grows larger.

[Data table (P1-P6 by 2, 4, and 6 grams) and figure "Large MS_Error" (Speed of Response vs. Amount of Reward): values lost in transcription.]

[StatView ANOVA table for Reward and means table (Reward 2g, 4g, 6g): values lost in transcription.]

Varying Individual Differences

It is possible to keep the MS_Error constant while increasing the MS_Subject, as the two examples below illustrate. As you see in the first example, the SS_Subject is fairly small and the MS_Error is quite small.

[Figure "Small Individual Differences" (Speed of Response vs. Amount of Reward in grams, one line per participant) with its StatView ANOVA and means tables: values lost in transcription.]

Next, I've decreased the first two participants' scores by a constant amount and increased the last two participants' scores by a constant amount. Because the interaction between participant and treatment is the same, the MS_Error is unchanged. However, because the means for the 6 participants are more different than before, the SS_Subject increases.

[Figure "Moderate Individual Differences" (Speed of Response vs. Amount of Reward in grams, one line per participant) with its StatView ANOVA and means tables: values lost in transcription.]
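The manipulation just described, shifting whole participant rows by constants, can be verified numerically: the shift changes SS_Subject but leaves SS_Error untouched, because a constant added to a row moves that participant's mean without changing the shape of their profile. The scores and shifts below are hypothetical:

```python
# Shifting participant rows by constants: SS_Subject changes, SS_Error does not.
def ss_subject_and_error(scores):
    """scores[i][j] = participant i under treatment j; returns (SS_Subject, SS_Error)."""
    n, k = len(scores), len(scores[0])
    G, N = sum(sum(r) for r in scores), n * k
    cols = list(zip(*scores))
    ss_within = sum(sum(x**2 for x in c) - sum(c)**2 / n for c in cols)
    ss_subj = sum(sum(r)**2 for r in scores) / k - G**2 / N
    return ss_subj, ss_within - ss_subj

base = [[3, 5, 4], [4, 7, 6], [5, 6, 8], [6, 8, 9]]
# spread the participants apart: lower the first two rows, raise the last two
shifted = [[x + d for x in row] for row, d in zip(base, (-2, -1, 1, 2))]

s0, e0 = ss_subject_and_error(base)
s1, e1 = ss_subject_and_error(shifted)
print(round(s1 - s0, 2), round(e1 - e0, 10))  # SS_Subject grows; SS_Error unchanged
```

This is exactly why bigger individual differences raise the F ratio in a repeated measures analysis: they inflate SS_Subject, which is removed from the error term, while the interaction-based MS_Error stays put.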

StatView for Repeated Measures ANOVA: G&W p. 465

First, enter as many columns (variables) as you have levels of your independent variable. Below left are the data, with each column containing scores for a particular level of the IV. The next step is to highlight all 3 columns and then click on the Compact button. You'll then get the window seen below on the right, which allows you to name the IV (as a compact variable). Note that your data window will now reflect the compacting process, with Stimulus appearing above the 3 columns.

To produce the analysis, choose Repeated Measures ANOVA from the Analyze menu. Move your compacted variable to the Repeated measure box on the left, as seen below left. Then click on OK to produce the actual analysis, seen below right.

[StatView ANOVA table for Stimulus (rows: Subject, Category for Stimulus, Category for Stimulus * Subject) and means table (Neutral, Pleasant, Aversive): values lost in transcription.]

Note that these results are not quite significant, so you would not ordinarily compute a post hoc test. Nonetheless, just to show you how to compute a post hoc test, choose Tukey/Kramer from the Post-hoc tests found on the left, under ANOVA. That will produce the table seen below. It's no surprise that none of the comparisons is significant, given that the overall ANOVA did not produce any significant results.

[Tukey/Kramer table for Stimulus (significance level 5%; comparisons Neutral vs. Pleasant, Neutral vs. Aversive, Pleasant vs. Aversive; Mean Diff. and Crit. Diff. columns): values lost in transcription.]
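For equal group sizes, the Tukey critical difference that a package like StatView reports reduces to HSD = q · sqrt(MS_Error / n), with q the Studentized range statistic looked up for (number of treatments, df_Error). A hedged sketch; the q and MS_Error values below are placeholders, not numbers from the chapter or its tables:

```python
from math import sqrt

# Tukey's HSD for a repeated-measures design (equal n per condition):
# HSD = q * sqrt(MS_Error / n). q must be looked up in a Studentized
# range table for (k treatments, df_error); 3.88 here is a placeholder.
def tukey_hsd(q, ms_error, n):
    return q * sqrt(ms_error / n)

print(round(tukey_hsd(3.88, 0.27, 6), 2))
```

Any pair of condition means that differ by more than the HSD would be declared significantly different, which is how the Mean Diff. and Crit. Diff. columns in the Tukey/Kramer table are read.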

Practice Problems

Drs. Dewey, Stink, & Howe were interested in memory for various odors. They conducted a study in which 6 participants were exposed to 10 common food odors (orange, onion, etc.) and 10 common non-food odors (motor oil, skunk, etc.) to see if people are better at identifying one type of odorant or the other. The 20 odors were presented in a random fashion, so that both classes of odors occurred equally often at the beginning of the list, at the end of the list, etc. (Thus, this randomization is a strategy that serves the same function as counterbalancing.) The dependent variable is the number of odors of each class correctly identified by each participant. The data are seen below. Analyze the data and fully interpret the results of this study.

[Data table (Food Odors, Non-Food Odors, with ΣX = T, ΣX², and SS): values lost in transcription.]

Suppose that Dr. Belfry was interested in conducting a study of the auditory capabilities of bats, looking at bats' abilities to avoid wires of varying thickness as they traverse a maze. The DV is the number of times that a bat touches the wires. (Thus, higher numbers indicate an inability to detect the wire.) Complete the source table below and fully interpret the results.

[Source table: values lost in transcription.]

Dr. Richard Noggin is interested in the effect of different types of persuasive messages on a person's willingness to engage in socially conscious behaviors. To that end, he asks his participants to listen to each of four different types of messages (Fear Invoking, Appeal to Conscience, Guilt, and Information Laden). After listening to each message, the participant rates how effective the message was on a scale of 1-7 (1 = very ineffective and 7 = very effective). Complete the source table and analyze the data as completely as you can.

[Source table: values lost in transcription.]

Dr. Beau Peep believes that pupil size increases during emotional arousal. He was interested in testing whether the increase in pupil size is a function of the type of arousal (pleasant vs. aversive). A random sample of 5 participants is selected for the study. Each participant views all three stimuli: neutral, pleasant, and aversive photographs. The neutral photograph portrays a plain brick building. The pleasant photograph shows a young man and woman sharing a large ice cream cone. Finally, the aversive stimulus is a graphic photograph of an automobile accident. Upon viewing each photograph, pupil size is measured in millimeters. An incomplete source table resulting from the analysis of these data is seen below. Complete the source table and analyze the data as completely as possible.

[Incomplete source table and means table for Stimulus (Neutral, Pleasant, Aversive): values lost in transcription.]

Suppose you are interested in studying the impact of duration of exposure to faces on people's ability to recognize faces. To finesse the issue of the actual durations used, I'll call them Short, Medium, and Long durations. Participants are first exposed to a set of 30 faces for one duration and then tested on their memory for those faces. Then they are exposed to another set of 30 faces for a different duration and then tested. Finally, they are given a final set of 30 faces for the final duration and then tested. The DV for this analysis is the percent Hits (saying "Old" to an Old item). Suppose that the results of the experiment come out as seen below. Complete the analysis and interpret the results as completely as you can. If the results turned out as seen below, what would they mean to you? [15 pts]

[Data and means table for Duration (Short, Medium, Long): values lost in transcription.]


More information

Recall this chart that showed how most of our course would be organized:

Recall this chart that showed how most of our course would be organized: Chapter 4 One-Way ANOVA Recall this chart that showed how most of our course would be organized: Explanatory Variable(s) Response Variable Methods Categorical Categorical Contingency Tables Categorical

More information

The ANOVA for 2x2 Independent Groups Factorial Design

The ANOVA for 2x2 Independent Groups Factorial Design The ANOVA for 2x2 Independent Groups Factorial Design Please Note: In the analyses above I have tried to avoid using the terms "Independent Variable" and "Dependent Variable" (IV and DV) in order to emphasize

More information

UNDERSTANDING ANALYSIS OF COVARIANCE (ANCOVA)

UNDERSTANDING ANALYSIS OF COVARIANCE (ANCOVA) UNDERSTANDING ANALYSIS OF COVARIANCE () In general, research is conducted for the purpose of explaining the effects of the independent variable on the dependent variable, and the purpose of research design

More information

Data Analysis Tools. Tools for Summarizing Data

Data Analysis Tools. Tools for Summarizing Data Data Analysis Tools This section of the notes is meant to introduce you to many of the tools that are provided by Excel under the Tools/Data Analysis menu item. If your computer does not have that tool

More information

Introduction to Analysis of Variance (ANOVA) Limitations of the t-test

Introduction to Analysis of Variance (ANOVA) Limitations of the t-test Introduction to Analysis of Variance (ANOVA) The Structural Model, The Summary Table, and the One- Way ANOVA Limitations of the t-test Although the t-test is commonly used, it has limitations Can only

More information

" Y. Notation and Equations for Regression Lecture 11/4. Notation:

 Y. Notation and Equations for Regression Lecture 11/4. Notation: Notation: Notation and Equations for Regression Lecture 11/4 m: The number of predictor variables in a regression Xi: One of multiple predictor variables. The subscript i represents any number from 1 through

More information

Comparing Means in Two Populations

Comparing Means in Two Populations Comparing Means in Two Populations Overview The previous section discussed hypothesis testing when sampling from a single population (either a single mean or two means from the same population). Now we

More information

Lecture Notes Module 1

Lecture Notes Module 1 Lecture Notes Module 1 Study Populations A study population is a clearly defined collection of people, animals, plants, or objects. In psychological research, a study population usually consists of a specific

More information

Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures

Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures Jamie DeCoster Department of Psychology University of Alabama 348 Gordon Palmer Hall Box 870348 Tuscaloosa, AL 35487-0348 Phone:

More information

15. Analysis of Variance

15. Analysis of Variance 15. Analysis of Variance A. Introduction B. ANOVA Designs C. One-Factor ANOVA (Between-Subjects) D. Multi-Factor ANOVA (Between-Subjects) E. Unequal Sample Sizes F. Tests Supplementing ANOVA G. Within-Subjects

More information

Chapter 7. Comparing Means in SPSS (t-tests) Compare Means analyses. Specifically, we demonstrate procedures for running Dependent-Sample (or

Chapter 7. Comparing Means in SPSS (t-tests) Compare Means analyses. Specifically, we demonstrate procedures for running Dependent-Sample (or 1 Chapter 7 Comparing Means in SPSS (t-tests) This section covers procedures for testing the differences between two means using the SPSS Compare Means analyses. Specifically, we demonstrate procedures

More information

Introduction to Quantitative Methods

Introduction to Quantitative Methods Introduction to Quantitative Methods October 15, 2009 Contents 1 Definition of Key Terms 2 2 Descriptive Statistics 3 2.1 Frequency Tables......................... 4 2.2 Measures of Central Tendencies.................

More information

This chapter will demonstrate how to perform multiple linear regression with IBM SPSS

This chapter will demonstrate how to perform multiple linear regression with IBM SPSS CHAPTER 7B Multiple Regression: Statistical Methods Using IBM SPSS This chapter will demonstrate how to perform multiple linear regression with IBM SPSS first using the standard method and then using the

More information

Non-Inferiority Tests for Two Proportions

Non-Inferiority Tests for Two Proportions Chapter 0 Non-Inferiority Tests for Two Proportions Introduction This module provides power analysis and sample size calculation for non-inferiority and superiority tests in twosample designs in which

More information

12: Analysis of Variance. Introduction

12: Analysis of Variance. Introduction 1: Analysis of Variance Introduction EDA Hypothesis Test Introduction In Chapter 8 and again in Chapter 11 we compared means from two independent groups. In this chapter we extend the procedure to consider

More information

One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups

One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups In analysis of variance, the main research question is whether the sample means are from different populations. The

More information

research/scientific includes the following: statistical hypotheses: you have a null and alternative you accept one and reject the other

research/scientific includes the following: statistical hypotheses: you have a null and alternative you accept one and reject the other 1 Hypothesis Testing Richard S. Balkin, Ph.D., LPC-S, NCC 2 Overview When we have questions about the effect of a treatment or intervention or wish to compare groups, we use hypothesis testing Parametric

More information

CALCULATIONS & STATISTICS

CALCULATIONS & STATISTICS CALCULATIONS & STATISTICS CALCULATION OF SCORES Conversion of 1-5 scale to 0-100 scores When you look at your report, you will notice that the scores are reported on a 0-100 scale, even though respondents

More information

An analysis method for a quantitative outcome and two categorical explanatory variables.

An analysis method for a quantitative outcome and two categorical explanatory variables. Chapter 11 Two-Way ANOVA An analysis method for a quantitative outcome and two categorical explanatory variables. If an experiment has a quantitative outcome and two categorical explanatory variables that

More information

Testing Research and Statistical Hypotheses

Testing Research and Statistical Hypotheses Testing Research and Statistical Hypotheses Introduction In the last lab we analyzed metric artifact attributes such as thickness or width/thickness ratio. Those were continuous variables, which as you

More information

Part 3. Comparing Groups. Chapter 7 Comparing Paired Groups 189. Chapter 8 Comparing Two Independent Groups 217

Part 3. Comparing Groups. Chapter 7 Comparing Paired Groups 189. Chapter 8 Comparing Two Independent Groups 217 Part 3 Comparing Groups Chapter 7 Comparing Paired Groups 189 Chapter 8 Comparing Two Independent Groups 217 Chapter 9 Comparing More Than Two Groups 257 188 Elementary Statistics Using SAS Chapter 7 Comparing

More information

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( )

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( ) Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates

More information

Mixed 2 x 3 ANOVA. Notes

Mixed 2 x 3 ANOVA. Notes Mixed 2 x 3 ANOVA This section explains how to perform an ANOVA when one of the variables takes the form of repeated measures and the other variable is between-subjects that is, independent groups of participants

More information

Friedman's Two-way Analysis of Variance by Ranks -- Analysis of k-within-group Data with a Quantitative Response Variable

Friedman's Two-way Analysis of Variance by Ranks -- Analysis of k-within-group Data with a Quantitative Response Variable Friedman's Two-way Analysis of Variance by Ranks -- Analysis of k-within-group Data with a Quantitative Response Variable Application: This statistic has two applications that can appear very different,

More information

Two-Way Analysis of Variance (ANOVA)

Two-Way Analysis of Variance (ANOVA) Two-Way Analysis of Variance (ANOVA) An understanding of the one-way ANOVA is crucial to understanding the two-way ANOVA, so be sure that the concepts involved in the one-way ANOVA are clear. Important

More information

Calculating P-Values. Parkland College. Isela Guerra Parkland College. Recommended Citation

Calculating P-Values. Parkland College. Isela Guerra Parkland College. Recommended Citation Parkland College A with Honors Projects Honors Program 2014 Calculating P-Values Isela Guerra Parkland College Recommended Citation Guerra, Isela, "Calculating P-Values" (2014). A with Honors Projects.

More information

Session 7 Bivariate Data and Analysis

Session 7 Bivariate Data and Analysis Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares

More information

Playing with Numbers

Playing with Numbers PLAYING WITH NUMBERS 249 Playing with Numbers CHAPTER 16 16.1 Introduction You have studied various types of numbers such as natural numbers, whole numbers, integers and rational numbers. You have also

More information

KSTAT MINI-MANUAL. Decision Sciences 434 Kellogg Graduate School of Management

KSTAT MINI-MANUAL. Decision Sciences 434 Kellogg Graduate School of Management KSTAT MINI-MANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To

More information

NCSS Statistical Software

NCSS Statistical Software Chapter 06 Introduction This procedure provides several reports for the comparison of two distributions, including confidence intervals for the difference in means, two-sample t-tests, the z-test, the

More information

Permutation Tests for Comparing Two Populations

Permutation Tests for Comparing Two Populations Permutation Tests for Comparing Two Populations Ferry Butar Butar, Ph.D. Jae-Wan Park Abstract Permutation tests for comparing two populations could be widely used in practice because of flexibility of

More information

Guide to Microsoft Excel for calculations, statistics, and plotting data

Guide to Microsoft Excel for calculations, statistics, and plotting data Page 1/47 Guide to Microsoft Excel for calculations, statistics, and plotting data Topic Page A. Writing equations and text 2 1. Writing equations with mathematical operations 2 2. Writing equations with

More information

6.4 Normal Distribution

6.4 Normal Distribution Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under

More information

LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING

LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in engineering setting.

More information

Multivariate Analysis of Variance (MANOVA)

Multivariate Analysis of Variance (MANOVA) Multivariate Analysis of Variance (MANOVA) Aaron French, Marcelo Macedo, John Poulsen, Tyler Waterson and Angela Yu Keywords: MANCOVA, special cases, assumptions, further reading, computations Introduction

More information

Two-sample hypothesis testing, II 9.07 3/16/2004

Two-sample hypothesis testing, II 9.07 3/16/2004 Two-sample hypothesis testing, II 9.07 3/16/004 Small sample tests for the difference between two independent means For two-sample tests of the difference in mean, things get a little confusing, here,

More information

The F distribution and the basic principle behind ANOVAs. Situating ANOVAs in the world of statistical tests

The F distribution and the basic principle behind ANOVAs. Situating ANOVAs in the world of statistical tests Tutorial The F distribution and the basic principle behind ANOVAs Bodo Winter 1 Updates: September 21, 2011; January 23, 2014; April 24, 2014; March 2, 2015 This tutorial focuses on understanding rather

More information

Experimental Designs (revisited)

Experimental Designs (revisited) Introduction to ANOVA Copyright 2000, 2011, J. Toby Mordkoff Probably, the best way to start thinking about ANOVA is in terms of factors with levels. (I say this because this is how they are described

More information

General Regression Formulae ) (N-2) (1 - r 2 YX

General Regression Formulae ) (N-2) (1 - r 2 YX General Regression Formulae Single Predictor Standardized Parameter Model: Z Yi = β Z Xi + ε i Single Predictor Standardized Statistical Model: Z Yi = β Z Xi Estimate of Beta (Beta-hat: β = r YX (1 Standard

More information

Solutions to Homework 10 Statistics 302 Professor Larget

Solutions to Homework 10 Statistics 302 Professor Larget s to Homework 10 Statistics 302 Professor Larget Textbook Exercises 7.14 Rock-Paper-Scissors (Graded for Accurateness) In Data 6.1 on page 367 we see a table, reproduced in the table below that shows the

More information

Outline. Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test

Outline. Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test The t-test Outline Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test - Dependent (related) groups t-test - Independent (unrelated) groups t-test Comparing means Correlation

More information

Unit 26 Estimation with Confidence Intervals

Unit 26 Estimation with Confidence Intervals Unit 26 Estimation with Confidence Intervals Objectives: To see how confidence intervals are used to estimate a population proportion, a population mean, a difference in population proportions, or a difference

More information

2 Sample t-test (unequal sample sizes and unequal variances)

2 Sample t-test (unequal sample sizes and unequal variances) Variations of the t-test: Sample tail Sample t-test (unequal sample sizes and unequal variances) Like the last example, below we have ceramic sherd thickness measurements (in cm) of two samples representing

More information

ANALYSIS OF TREND CHAPTER 5

ANALYSIS OF TREND CHAPTER 5 ANALYSIS OF TREND CHAPTER 5 ERSH 8310 Lecture 7 September 13, 2007 Today s Class Analysis of trends Using contrasts to do something a bit more practical. Linear trends. Quadratic trends. Trends in SPSS.

More information

MULTIPLE REGRESSION WITH CATEGORICAL DATA

MULTIPLE REGRESSION WITH CATEGORICAL DATA DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 86 MULTIPLE REGRESSION WITH CATEGORICAL DATA I. AGENDA: A. Multiple regression with categorical variables. Coding schemes. Interpreting

More information

Analysis of Data. Organizing Data Files in SPSS. Descriptive Statistics

Analysis of Data. Organizing Data Files in SPSS. Descriptive Statistics Analysis of Data Claudia J. Stanny PSY 67 Research Design Organizing Data Files in SPSS All data for one subject entered on the same line Identification data Between-subjects manipulations: variable to

More information

AP: LAB 8: THE CHI-SQUARE TEST. Probability, Random Chance, and Genetics

AP: LAB 8: THE CHI-SQUARE TEST. Probability, Random Chance, and Genetics Ms. Foglia Date AP: LAB 8: THE CHI-SQUARE TEST Probability, Random Chance, and Genetics Why do we study random chance and probability at the beginning of a unit on genetics? Genetics is the study of inheritance,

More information

Hypothesis testing - Steps

Hypothesis testing - Steps Hypothesis testing - Steps Steps to do a two-tailed test of the hypothesis that β 1 0: 1. Set up the hypotheses: H 0 : β 1 = 0 H a : β 1 0. 2. Compute the test statistic: t = b 1 0 Std. error of b 1 =

More information

MATH 140 Lab 4: Probability and the Standard Normal Distribution

MATH 140 Lab 4: Probability and the Standard Normal Distribution MATH 140 Lab 4: Probability and the Standard Normal Distribution Problem 1. Flipping a Coin Problem In this problem, we want to simualte the process of flipping a fair coin 1000 times. Note that the outcomes

More information

1 Basic ANOVA concepts

1 Basic ANOVA concepts Math 143 ANOVA 1 Analysis of Variance (ANOVA) Recall, when we wanted to compare two population means, we used the 2-sample t procedures. Now let s expand this to compare k 3 population means. As with the

More information

DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9

DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9 DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9 Analysis of covariance and multiple regression So far in this course,

More information

Statistiek II. John Nerbonne. October 1, 2010. Dept of Information Science j.nerbonne@rug.nl

Statistiek II. John Nerbonne. October 1, 2010. Dept of Information Science j.nerbonne@rug.nl Dept of Information Science j.nerbonne@rug.nl October 1, 2010 Course outline 1 One-way ANOVA. 2 Factorial ANOVA. 3 Repeated measures ANOVA. 4 Correlation and regression. 5 Multiple regression. 6 Logistic

More information

Final Exam Practice Problem Answers

Final Exam Practice Problem Answers Final Exam Practice Problem Answers The following data set consists of data gathered from 77 popular breakfast cereals. The variables in the data set are as follows: Brand: The brand name of the cereal

More information

Bivariate Statistics Session 2: Measuring Associations Chi-Square Test

Bivariate Statistics Session 2: Measuring Associations Chi-Square Test Bivariate Statistics Session 2: Measuring Associations Chi-Square Test Features Of The Chi-Square Statistic The chi-square test is non-parametric. That is, it makes no assumptions about the distribution

More information

Introduction to Hypothesis Testing. Hypothesis Testing. Step 1: State the Hypotheses

Introduction to Hypothesis Testing. Hypothesis Testing. Step 1: State the Hypotheses Introduction to Hypothesis Testing 1 Hypothesis Testing A hypothesis test is a statistical procedure that uses sample data to evaluate a hypothesis about a population Hypothesis is stated in terms of the

More information

How to calculate an ANOVA table

How to calculate an ANOVA table How to calculate an ANOVA table Calculations by Hand We look at the following example: Let us say we measure the height of some plants under the effect of different fertilizers. Treatment Measures Mean

More information

Using Microsoft Excel to Analyze Data

Using Microsoft Excel to Analyze Data Entering and Formatting Data Using Microsoft Excel to Analyze Data Open Excel. Set up the spreadsheet page (Sheet 1) so that anyone who reads it will understand the page. For the comparison of pipets:

More information

Multiple-Comparison Procedures

Multiple-Comparison Procedures Multiple-Comparison Procedures References A good review of many methods for both parametric and nonparametric multiple comparisons, planned and unplanned, and with some discussion of the philosophical

More information

A Statistical Analysis of Popular Lottery Winning Strategies

A Statistical Analysis of Popular Lottery Winning Strategies CS-BIGS 4(1): 66-72 2010 CS-BIGS http://www.bentley.edu/csbigs/vol4-1/chen.pdf A Statistical Analysis of Popular Lottery Winning Strategies Albert C. Chen Torrey Pines High School, USA Y. Helio Yang San

More information

The Analysis of Variance ANOVA

The Analysis of Variance ANOVA -3σ -σ -σ +σ +σ +3σ The Analysis of Variance ANOVA Lecture 0909.400.0 / 0909.400.0 Dr. P. s Clinic Consultant Module in Probability & Statistics in Engineering Today in P&S -3σ -σ -σ +σ +σ +3σ Analysis

More information

SPSS Explore procedure

SPSS Explore procedure SPSS Explore procedure One useful function in SPSS is the Explore procedure, which will produce histograms, boxplots, stem-and-leaf plots and extensive descriptive statistics. To run the Explore procedure,

More information

Independent samples t-test. Dr. Tom Pierce Radford University

Independent samples t-test. Dr. Tom Pierce Radford University Independent samples t-test Dr. Tom Pierce Radford University The logic behind drawing causal conclusions from experiments The sampling distribution of the difference between means The standard error of

More information

A STUDY OF WHETHER HAVING A PROFESSIONAL STAFF WITH ADVANCED DEGREES INCREASES STUDENT ACHIEVEMENT MEGAN M. MOSSER. Submitted to

A STUDY OF WHETHER HAVING A PROFESSIONAL STAFF WITH ADVANCED DEGREES INCREASES STUDENT ACHIEVEMENT MEGAN M. MOSSER. Submitted to Advanced Degrees and Student Achievement-1 Running Head: Advanced Degrees and Student Achievement A STUDY OF WHETHER HAVING A PROFESSIONAL STAFF WITH ADVANCED DEGREES INCREASES STUDENT ACHIEVEMENT By MEGAN

More information

Statistics Review PSY379

Statistics Review PSY379 Statistics Review PSY379 Basic concepts Measurement scales Populations vs. samples Continuous vs. discrete variable Independent vs. dependent variable Descriptive vs. inferential stats Common analyses

More information

t Tests in Excel The Excel Statistical Master By Mark Harmon Copyright 2011 Mark Harmon

t Tests in Excel The Excel Statistical Master By Mark Harmon Copyright 2011 Mark Harmon t-tests in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. mark@excelmasterseries.com www.excelmasterseries.com

More information

Two-sample inference: Continuous data

Two-sample inference: Continuous data Two-sample inference: Continuous data Patrick Breheny April 5 Patrick Breheny STA 580: Biostatistics I 1/32 Introduction Our next two lectures will deal with two-sample inference for continuous data As

More information

Using Excel for inferential statistics

Using Excel for inferential statistics FACT SHEET Using Excel for inferential statistics Introduction When you collect data, you expect a certain amount of variation, just caused by chance. A wide variety of statistical tests can be applied

More information

PSYC 381 Statistics Arlo Clark-Foos, Ph.D.

PSYC 381 Statistics Arlo Clark-Foos, Ph.D. One-Way Within- Groups ANOVA PSYC 381 Statistics Arlo Clark-Foos, Ph.D. Comparing Designs Pros of Between No order or carryover effects Between-Groups Design Pros of Within More costly Higher variability

More information