# 4.4. Further Analysis within ANOVA


## 1) Estimation of the effects

Fixed effects model: $\alpha_i = \mu_i - \mu$ is estimated by $a_i = \bar{x}_i - \bar{x}$ if $H_0: \mu_1 = \mu_2 = \dots = \mu_k$ is rejected.

Random effects model: If $H_0: \sigma_A^2 = 0$ is rejected, then we estimate the variability $\sigma_A^2$ among the population means by

$$s_A^2 = \frac{MSB - MSW}{n_0} \quad\text{with}\quad n_0 = \frac{N - \sum_{i=1}^k n_i^2/N}{k-1},$$

where $N = n_1 + n_2 + \dots + n_k$, with $n_0 = n$ in the special case of equal sample sizes $n_i = n$ in all groups.
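As a quick sketch (not part of the original notes), the estimator above is easy to compute. The mean squares used in the last line are hypothetical values, not taken from the example data:

```python
def n0(sizes):
    """Effective group size n0 = (N - sum(n_i^2)/N) / (k - 1)."""
    N, k = sum(sizes), len(sizes)
    return (N - sum(n * n for n in sizes) / N) / (k - 1)

def var_between(msb, msw, sizes):
    """Random effects model: estimate sigma_A^2 = (MSB - MSW) / n0."""
    return (msb - msw) / n0(sizes)

# With equal group sizes n_i = n, n0 reduces to n:
assert abs(n0([8, 8, 8, 8]) - 8.0) < 1e-9

# Hypothetical mean squares (MSB = 30, MSW = 5, assumed for illustration):
assert abs(var_between(30.0, 5.0, [8, 8, 8, 8]) - 3.125) < 1e-9
```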

## 2) Planned Comparisons: Contrasts

The hypothetical data below show the errors made in a test by subjects under the influence of two drugs, in 4 groups of 8 subjects each. Group A1 is a control group, not given any drug. The other groups are experimental groups: group A2 gets drug A, group A3 gets drug B, and group A4 gets both drugs.

Suppose we are really interested in answering specific questions such as:

1. On average, do drugs have any effect on learning at all?
2. Do subjects make more errors if given both drugs than if given only one?
3. Do the two drugs differ in the number of errors they produce?

All these questions can be formulated as null hypotheses of the form

$$H_0: \lambda_1\mu_1 + \lambda_2\mu_2 + \dots + \lambda_k\mu_k = 0, \quad\text{where}\quad \lambda_1 + \lambda_2 + \dots + \lambda_k = 0.$$

For example, the first question asks whether the mean of group A1, $\mu_1$, differs from the average of the means for the groups A2, A3, and A4, $(\mu_2 + \mu_3 + \mu_4)/3$. That is, we wish to test the null hypothesis

$$H_0(1):\; \mu_1 - \tfrac{1}{3}\mu_2 - \tfrac{1}{3}\mu_3 - \tfrac{1}{3}\mu_4 = 0.$$

A **contrast** is a combination of population means of the form $\psi = \sum \lambda_i \mu_i$, where $\sum \lambda_i = 0$. The corresponding sample contrast is $L = \sum \lambda_i \bar{x}_i$. In our example:

$$L_1 = \bar{X}_1 - \tfrac{1}{3}\bar{X}_2 - \tfrac{1}{3}\bar{X}_3 - \tfrac{1}{3}\bar{X}_4.$$

Now, since each individual observation $X_{ij}$ is distributed as $N(\mu_i, \sigma^2)$ and all observations are independent of each other, each sample mean $\bar{X}_i$ is distributed as $N(\mu_i, \sigma^2/n_i)$, such that

$$L \sim N\left(\sum \lambda_i \mu_i,\; \sigma^2 \sum \frac{\lambda_i^2}{n_i}\right),$$

which implies that under the null hypothesis

$$\frac{L}{\sigma\sqrt{\sum \lambda_i^2/n_i}} \sim N(0, 1).$$

Now, recalling that the square of a standard normally distributed random variable is always $\chi^2$-distributed with $df = 1$, we obtain, denoting $Q = L^2/(\sum \lambda_i^2/n_i)$:

$$Q/\sigma^2 \sim \chi^2(1).$$

Furthermore, using $SSW/\sigma^2 \sim \chi^2(N-k)$, we obtain that

$$Q/MSW \sim F(1, N-k),$$

the square root of which has a t-distribution, that is:

$$t = \frac{L}{s(L)} \sim t(N-k), \quad\text{where}\quad s(L) = s\sqrt{\sum \frac{\lambda_i^2}{n_i}} \quad\text{and}\quad s = \sqrt{MSW},$$

which simplifies to $s(L) = s\sqrt{\sum \lambda_i^2/n}$ for equal sample sizes $n$. $s(L)$ is called the **standard error of the contrast**.

This suggests testing the null hypothesis $H_0: \lambda_1\mu_1 + \lambda_2\mu_2 + \dots + \lambda_k\mu_k = 0$ with a conventional t-test of the form

$$t = \frac{L}{s(L)} \sim t(N-k).$$

Example (continued):

$$L = \bar{X}_1 - \tfrac{1}{3}(\bar{X}_2 + \bar{X}_3 + \bar{X}_4) = 4.167, \qquad s(L) = \sqrt{\frac{MSW \cdot (1 + 3/9)}{8}} = 1.109,$$

such that $t = 4.167/1.109 = 3.76$. The associated p-value in a two-sided test is TDIST(3.76;28;2) = 0.08%, and in a one-sided test against $H_1: \mu_1 < (\mu_2 + \mu_3 + \mu_4)/3$ it is 0.04%, which we regard as clear statistical evidence that the drugs considered increase error rates.
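The arithmetic of this example can be replicated in a few lines of Python (a sketch added here, not part of the original notes). The individual group means are not legible in the transcription, so the sketch starts from the reported contrast value $L = 4.167$; the value $MSW \approx 7.39$ is backed out of $Q/F = 104.2/14.1$ reported for the same example:

```python
from math import sqrt

# Coefficients for H0(1): mu1 - (mu2 + mu3 + mu4)/3 = 0
lams = [1, -1/3, -1/3, -1/3]
n = 8            # subjects per group
L = 4.167        # sample contrast, from the text
msw = 7.39       # derived: MSW = Q / F = 104.2 / 14.1

sum_lam2 = sum(l * l for l in lams)       # = 1 + 3*(1/9) = 4/3
sL = sqrt(msw * sum_lam2 / n)             # standard error of the contrast
t = L / sL

assert abs(sL - 1.109) < 0.005            # matches s(L) = 1.109 in the text
assert abs(t - 3.76) < 0.01               # matches t = 3.76 in the text
```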

Alternatively, we may test the null hypothesis $H_0: \lambda_1\mu_1 + \lambda_2\mu_2 + \dots + \lambda_k\mu_k = 0$ with an F-test of the form

$$F = \frac{Q}{MSW} = t^2 \sim F(1, N-k), \quad\text{where}\quad Q = \frac{L^2}{\sum \lambda_i^2/n_i},$$

which simplifies to $Q = nL^2/\sum \lambda_i^2$ in the case of identical sample sizes $n_i = n$.

Example (continued):

$$Q = \frac{nL^2}{\sum \lambda_i^2} = \frac{8 \cdot 4.167^2}{4/3} = 104.2, \qquad F = \frac{Q}{MSW} = 14.1.$$

The associated p-value may be found in Excel as FDIST(14.1;1;28) = 0.08%, the same as before.
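The equivalence $t^2 = F$ is easy to verify numerically for this example (a sketch, using the same $L = 4.167$ from the text and $MSW \approx 7.39$ backed out of $Q/F = 104.2/14.1$):

```python
from math import sqrt

lams = [1, -1/3, -1/3, -1/3]
n, L = 8, 4.167
msw = 7.39                                # derived: MSW = Q / F

Q = n * L**2 / sum(l * l for l in lams)   # contrast sum of squares
F = Q / msw
t = sqrt(F)                               # square root of F(1, N-k) is t(N-k)

assert abs(Q - 104.2) < 0.1
assert abs(F - 14.1) < 0.1
assert abs(t * t - F) < 1e-9              # two-sided t-test and F-test agree
```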

The remaining questions may be tackled in an analogous way:

2. Do subjects make more errors if given both drugs than if given only one?
   $H_0(2):\; \mu_4 - \tfrac{1}{2}(\mu_2 + \mu_3) = 0$
3. Do the two drugs differ in the number of errors they produce?
   $H_0(3):\; \mu_2 - \mu_3 = 0$

The contrast coefficients used in conducting the tests are summarized in the table below:

|                | A1 | A2   | A3   | A4   | $\sum \lambda_i^2$ |
|----------------|----|------|------|------|--------------------|
| $\lambda_i(1)$ | 1  | -1/3 | -1/3 | -1/3 | 4/3                |
| $\lambda_i(2)$ | 0  | -1/2 | -1/2 | 1    | 3/2                |
| $\lambda_i(3)$ | 0  | 1    | -1   | 0    | 2                  |

Note that the F-test $Q/MSW \sim F(1, N-k)$ results in the same p-value as the two-sided t-test $L/s(L) \sim t(N-k)$, and that $t^2 = F$.

### Orthogonal hypotheses

The 3 hypotheses in the preceding section were special in the sense that the truth of each null hypothesis is unrelated to the truth of any of the others. If $H_0(1)$ is false, we know that drugs have some effect upon errors, although $H_0(1)$ says nothing about whether the effect of taking both drugs simultaneously is different from the average of the effects of the drugs taken separately. Similarly, if $H_0(2)$ is false, we know that the effect of taking both drugs is different from taking only one, but we don't know which of the two drugs taken individually is more effective.

A set of contrasts such that the truth of any one of them is unrelated to the truth of any other is called **orthogonal**. For equal sample sizes in each group, the orthogonality of two contrasts $L_1 = \sum \lambda_{1i}\bar{x}_i$ and $L_2 = \sum \lambda_{2i}\bar{x}_i$ may be assessed by checking the orthogonality condition

$$\sum \lambda_{1i}\lambda_{2i} = 0 \qquad \left(\sum \frac{\lambda_{1i}\lambda_{2i}}{n_i} = 0 \;\text{ if the } n_i \text{ are not constant}\right).$$
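The orthogonality condition is mechanical to check. A small sketch (not part of the original notes) verifies it for the three contrasts of the example, and for the non-orthogonal contrast $H_0(4): \mu_4 - \mu_1 = 0$ discussed in the following section:

```python
def orthogonal(l1, l2, sizes=None):
    """Check the orthogonality condition for two contrasts.

    Equal group sizes: sum(l1*l2) == 0.
    Unequal group sizes: sum(l1*l2/n_i) == 0.
    """
    if sizes is None:
        return abs(sum(a * b for a, b in zip(l1, l2))) < 1e-12
    return abs(sum(a * b / n for a, b, n in zip(l1, l2, sizes))) < 1e-12

lam1 = [1, -1/3, -1/3, -1/3]   # H0(1): control vs. other groups
lam2 = [0, -1/2, -1/2, 1]      # H0(2): both drugs vs. one drug
lam3 = [0, 1, -1, 0]           # H0(3): drug A vs. drug B
lam4 = [-1, 0, 0, 1]           # H0(4): mu4 - mu1 = 0

assert orthogonal(lam1, lam2) and orthogonal(lam1, lam3) and orthogonal(lam2, lam3)
assert not orthogonal(lam1, lam4)   # H0(1) and H0(4) are not orthogonal
```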

In an ANOVA with $k$ groups, no more than $k-1$ orthogonal contrasts can be tested. Such a set of $k-1$ orthogonal contrasts is, however, not unique. In practice the researcher will select a set of orthogonal contrasts that includes those contrasts of particular interest to the research question. Such a set exploits all available information that can be extracted from the data, answering the maximum number of $k-1$ independent questions that can be asked in the form of contrasts. It also has the property that the sums of squares for the individual contrasts, $Q = L^2/(\sum \lambda_i^2/n_i)$, add up to the sum of squares for the model:

$$Q_1 + Q_2 + \dots + Q_{k-1} = SSB.$$

In our example, the three contrast sums of squares add up to SSB from the ANOVA table; the small difference is only due to rounding error.

To illustrate the importance of orthogonal hypotheses, assume that instead of testing the orthogonal hypotheses $H_0(1)$ through $H_0(3)$ we had tested the original hypothesis

$$H_0(1):\; \mu_1 - \tfrac{1}{3}\mu_2 - \tfrac{1}{3}\mu_3 - \tfrac{1}{3}\mu_4 = 0$$

together with

$$H_0(4):\; \mu_4 - \mu_1 = 0,$$

that is, that error rates are the same when taking both drugs as when taking no drugs, upon the data below:

The two hypotheses are not orthogonal:

|                | A1 | A2   | A3   | A4   |
|----------------|----|------|------|------|
| $\lambda_i(1)$ | 1  | -1/3 | -1/3 | -1/3 |
| $\lambda_i(4)$ | -1 | 0    | 0    | 1    |

We may therefore get conflicting results from the two hypothesis tests. In this example, we safely accept $H_0(1)$ that taking drugs has no impact on error rates, while we strongly reject $H_0(4)$ that taking both drugs has no impact.

### Constructing Orthogonal Tests

If all sample sizes are equal ($n_i = n = $ const.), orthogonal tests may be constructed based on the principle that a test on the differences among a given set of means is orthogonal to any test involving their average. Consider our original set of hypotheses:

$$H_0(1):\; \mu_1 - \tfrac{1}{3}(\mu_2 + \mu_3 + \mu_4) = 0$$
$$H_0(2):\; \mu_4 - \tfrac{1}{2}(\mu_2 + \mu_3) = 0$$
$$H_0(3):\; \mu_2 - \mu_3 = 0$$

$H_0(1)$ involves the average of $\mu_2$, $\mu_3$, and $\mu_4$, while $H_0(2)$ and $H_0(3)$ involve differences among them. Similarly, $H_0(2)$ involves the average of $\mu_2$ and $\mu_3$, while $H_0(3)$ involves the difference between them.

### A Word of Caution

Contrasts are a special form of **planned comparisons**; that is, the hypotheses with the contrasts to be tested must be set up before taking a look at the data, otherwise the associated p-values will not be valid. The situation is similar to the choice between a one-sided and a two-sided test. Assume you want to test whether the means in two samples are the same using the conventional significance level of $\alpha = 5\%$. You apply a two-sided test and get a p-value of 8%, so you may not reject. Say that, as a step in your calculations, you figured out that $\bar{x}_1 > \bar{x}_2$. You may be tempted then to replace your original two-sided test against $H_1: \mu_1 \neq \mu_2$ with a one-sided test against $H_1: \mu_1 > \mu_2$, which, technically, would allow you to divide your p-value by 2 and get a significant result at 4%. But that p-value of 4% is a fraud, because the reason that it is only one half of the two-sided p-value is exactly that, without looking at the data, you also had a 50% chance that the sample means would have come out as $\bar{x}_1 < \bar{x}_2$.

## 3) Multiple Comparisons

As seen above, contrasts must be formulated in advance of the analysis in order for the p-values to be valid. Multiple-comparison procedures, on the other hand, are designed for testing effects suggested by the data after the ANOVA F-test has led to a rejection of the hypothesis that all population means are equal. All multiple-comparison methods we will discuss consist of building t-ratios of the form

$$t_{ij} = \frac{\bar{x}_i - \bar{x}_j}{SE_{ij}},$$

or, equivalently, setting up confidence intervals of the form

$$[(\bar{x}_i - \bar{x}_j) \pm t^* \cdot SE_{ij}]$$

for all $\binom{k}{2} = \frac{k(k-1)}{2}$ pairs that can be built within the $k$ groups. The methods differ in the calculation of the standard errors $SE_{ij}$ and in the distributions and critical levels used in the determination of $t^*$.
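For instance (a sketch added here, not part of the original notes), the pairs to be compared in our 4-group example can be enumerated directly:

```python
from itertools import combinations

groups = ["A1", "A2", "A3", "A4"]        # the k = 4 groups of the example
pairs = list(combinations(groups, 2))    # all unordered pairs (i, j)

k = len(groups)
assert len(pairs) == k * (k - 1) // 2 == 6
```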

### Fisher's LSD (least significant difference)

One obvious choice is to extend the two-independent-sample t-test to $k$ independent samples with test statistic

$$t_{ij} = \frac{\bar{x}_i - \bar{x}_j}{s\sqrt{\frac{1}{n_i} + \frac{1}{n_j}}}, \quad\text{where}\quad s = \sqrt{MSW},$$

and to declare $\mu_i$ and $\mu_j$ different whenever $|t_{ij}| > t_{\alpha/2}(N-k)$, or, equivalently, whenever

$$0 \notin \left[(\bar{x}_i - \bar{x}_j) \pm t_{\alpha/2}(N-k) \cdot s\sqrt{\tfrac{1}{n_i} + \tfrac{1}{n_j}}\right].$$

This procedure fixes the probability of a false rejection for each single pair of means being compared at $\alpha$. This is a problem if the number of means being compared is large. For example, if we use LSD with a significance level of $\alpha = 5\%$ to compare $k = 20$ means, then there are $\frac{k(k-1)}{2} = 190$ pairs of means and we expect $5\% \cdot 190 = 9.5$ false rejections!
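The expected number of false rejections quoted above can be checked in one line (a sketch, not part of the original notes; it assumes all null hypotheses are true, so every rejection is a false one):

```python
from math import comb

def expected_false_rejections(k, alpha=0.05):
    """Expected false rejections when comparing k means pairwise at level alpha,
    assuming all population means are in fact equal."""
    return alpha * comb(k, 2)

assert comb(20, 2) == 190                              # number of pairs
assert abs(expected_false_rejections(20) - 9.5) < 1e-9 # 5% of 190
```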

### Bonferroni Method

The Bonferroni method uses the same test statistic as Fisher's LSD, but replaces the significance level $\alpha$ for each single pair with $\alpha^* = \alpha / \binom{k}{2}$, or equivalently the original p-value with $p^* = \binom{k}{2} \cdot p$, as a conservative estimate of the probability that any false rejection among all $\frac{k(k-1)}{2}$ comparisons will occur. An obvious disadvantage is that the test becomes weak when $k$ is large.

Example: Comparison of A1 and A2 (original data):

$$t_{ij} = \frac{\bar{x}_i - \bar{x}_j}{s\sqrt{\frac{1}{n_i} + \frac{1}{n_j}}} = 2.668.$$

LSD: $p_{LSD} = \text{TDIST}(2.668; 28; 2) = 1.25\%$.
Bonferroni: $p_B = \binom{4}{2} \cdot p_{LSD} = 6 \cdot 1.25\% = 7.5\%$.
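The Bonferroni adjustment itself is a one-liner (a sketch, not part of the original notes; the cap at 1 is needed because an adjusted p-value cannot exceed 1):

```python
from math import comb

def bonferroni_p(p, k):
    """Bonferroni-adjusted p-value for one of the C(k,2) pairwise comparisons."""
    return min(1.0, p * comb(k, 2))

p_lsd = 0.0125                                   # TDIST(2.668; 28; 2) from the text
assert abs(bonferroni_p(p_lsd, 4) - 0.075) < 1e-9  # 6 * 1.25% = 7.5%
```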

### Tukey's HSD (honestly significant difference)

Tukey's method for multiple comparisons delivers the most precise estimate of the probability that any false rejection among all paired comparisons will occur. In its original form it is only applicable in the case of equal sample sizes in each group ($n_i = n$), but corrections for unequal sample sizes have been suggested and are implemented in SAS EG. Tukey's honestly significant difference is

$$HSD = q_\alpha(k, N-k) \cdot \frac{s}{\sqrt{n}},$$

where $q_\alpha$ denotes the $\alpha$-critical value of the so-called Studentized range distribution, tabulated e.g. in Table 6 of Aczel, and $s$ is estimated by $\sqrt{MSW}$, as usual. Two group means $\mu_i$ and $\mu_j$ are declared different at significance level $\alpha$ if

$$0 \notin [(\bar{x}_i - \bar{x}_j) \pm HSD],$$

or, equivalently, if

$$\frac{|\bar{x}_i - \bar{x}_j|}{s/\sqrt{n}} > q_\alpha(k, N-k).$$

Example: Comparison of A1 and A2 (continued). LSD and Bonferroni gave conflicting results as to whether the difference between $\mu_1$ and $\mu_2$ is significant at 5% or not (recall $p_{LSD} = 1.25\%$ and $p_B = 7.5\%$), because LSD underestimates the probability of any false rejection among all paired comparisons, whereas Bonferroni overestimates it. Applying Tukey's procedure yields

$$\frac{|\bar{x}_1 - \bar{x}_2|}{s/\sqrt{8}} = 3.77.$$

Looking up a table of critical values for the Studentized range distribution reveals that $q_{0.05}(k, N-k) = q_{0.05}(4; 28) = 3.86$, which is larger than the statistic calculated above. The difference between $\mu_1$ and $\mu_2$ is therefore not significant at the 5% level if the test was first suggested by the data.

Note: A planned comparison in the form of a contrast, set up before looking at the data, would have found a significant difference with a p-value of 1.25% ($= p_{LSD}$).
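For equal group sizes the Tukey statistic differs from the LSD t-statistic only by a factor of $\sqrt{2}$, since $s\sqrt{2/n} = \sqrt{2}\cdot s/\sqrt{n}$. A sketch (not part of the original notes) using the LSD statistic $t = 2.668$ and the tabulated critical value $q_{0.05}(4, 28) = 3.86$ from the text:

```python
from math import sqrt

t_lsd = 2.668               # LSD t-statistic for A1 vs A2, from the text
q_stat = t_lsd * sqrt(2)    # studentized-range form: |xbar_i - xbar_j| / (s / sqrt(n))
q_crit = 3.86               # q_0.05(4, 28), tabulated critical value

assert abs(q_stat - 3.77) < 0.01   # matches the statistic in the text
assert q_stat < q_crit             # not significant under Tukey's HSD
```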

### Simultaneous Confidence Intervals

The idea with multiple comparisons in post-hoc tests is usually to determine which groups may be combined into larger groups that are homogeneous in the sense that their group means are not significantly different. For that purpose, one constructs confidence intervals of the form

$$[(\bar{x}_i - \bar{x}_j) \pm \text{smallest significant difference}],$$

where the smallest significant differences are:

- LSD: $\;t_{\alpha/2}(N-k) \cdot s\sqrt{\frac{1}{n_i} + \frac{1}{n_j}}$
- Bonferroni: $\;t_{\alpha/(2\binom{k}{2})}(N-k) \cdot s\sqrt{\frac{1}{n_i} + \frac{1}{n_j}}$
- HSD: $\;q_\alpha(k, N-k) \cdot \frac{s}{\sqrt{n}}$

Combinations of groups are considered homogeneous at confidence level $(1-\alpha)$ if none of their paired comparisons lies outside the corresponding confidence intervals; that is, pairs of means whose confidence intervals include the value 0 will not be declared significantly different, and vice versa.
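The LSD and HSD half-widths can be compared numerically for our example (a sketch, not part of the original notes; $t_{0.025}(28) \approx 2.048$ is a standard table value, $q_{0.05}(4, 28) = 3.86$ is quoted in the text, and $MSW \approx 7.39$ is backed out of $Q/F = 104.2/14.1$ from the contrast example):

```python
from math import sqrt

msw, n = 7.39, 8            # derived MSW and group size of the example
s = sqrt(msw)

t_crit = 2.048              # t_{0.025}(28), two-sided 5% (table value)
q_crit = 3.86               # q_{0.05}(4, 28), studentized range (from the text)

lsd = t_crit * s * sqrt(1/n + 1/n)   # Fisher's smallest significant difference
hsd = q_crit * s / sqrt(n)           # Tukey's honestly significant difference

# Tukey's simultaneous intervals are wider than Fisher's per-pair intervals:
assert lsd < hsd
assert abs(lsd - 2.78) < 0.01
assert abs(hsd - 3.71) < 0.01
```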

### Homogeneous Subgroups in SAS EG

If your only reason for doing multiple comparisons is to identify homogeneous groups, in the sense that their group means are not significantly different, then the easiest way to do so is in the Means/Comparison tab of the One-Way ANOVA procedure. This will not provide you with p-values for the differences in means, though, and it will not allow you to define your own contrasts. Below is some output for Tukey's HSD and Fisher's LSD:

```
One-Way Analysis of Variance Results

Tukey's Studentized Range (HSD) Test for errors
NOTE: This test controls the Type I experimentwise error rate, but it
      generally has a higher Type II error rate than REGWQ.
Alpha                      0.05
Error Degrees of Freedom   28
Means with the same letter are not significantly different.
Tukey Grouping   Mean   N   drug

t Tests (LSD) for errors
NOTE: This test controls the Type I comparisonwise error rate, not the
      experimentwise error rate.
Alpha                      0.05
Error Degrees of Freedom   28
Means with the same letter are not significantly different.
t Grouping   Mean   N   drug
```

### Multiple Comparisons in SAS EG

Post-hoc tests in SAS EG may be found under Analyze/ANOVA/Linear Models. For the overall ANOVA F-test (the same as under Analyze/ANOVA/One-Way ANOVA), select the dependent variable (e.g. errors) and the classification variable (e.g. drug) under Task Roles. You will also have to reselect your classification variable as Main Effect under Model. To perform the multiple comparisons, push Add under "Effects to estimate" within Post Hoc Tests/Least Squares. This will open four drop-down menus under "Options for means tests". Select Yes under "Class effects to use", "All pairwise differences" for "Show p-values for differences", and True for "Show confidence limits". The option "No adjustment" under "Adjustment method for comparison" stands for LSD.

Homogeneous subgroups in our example are (A1, A3) and (A2, A3) under LSD, and (A1, A2, A3), (A2, A4) under Tukey and Bonferroni.

Analyze => ANOVA => Linear Models


```
Linear Models: Least Squares Means
Dependent Variable: errors
(errors LSMEAN and 95% confidence limits per drug group; pairwise
p-values Pr > |t| for H0: LSMean(i)=LSMean(j); differences between
means with 95% confidence limits for LSMean(i)-LSMean(j))
```

```
Linear Models: Least Squares Means
Adjustment for Multiple Comparisons: Bonferroni
(errors LSMEAN per drug group; adjusted pairwise p-values; differences
between means with simultaneous 95% confidence limits)
```

```
Linear Models: Least Squares Means
Adjustment for Multiple Comparisons: Tukey
(errors LSMEAN per drug group; adjusted pairwise p-values; differences
between means with simultaneous 95% confidence limits)
```

### Contrasts in SAS EG

Calculating contrasts in SAS EG requires a (tiny) little bit of programming. Proceed as if you were going to perform some multiple comparisons. Then, after having made the same choices as described on the preceding pages, press the Preview code button in the lower left corner of whichever tab you are in. Next, push the Insert Code button in the upper left corner of the "Code preview for Task" window. This will bring up the program code generated by SAS EG in such a way that it indicates spaces where the user's own code may be inserted. Double-click on the space immediately above the line beginning with the command LSMEANS <independent variable's name>. This will bring up a window where you can enter your own code. The syntax to be entered is:

    CONTRAST '<table text>' <indep. variable> λ1 λ2 ... λk;

The coefficients λ1, λ2, ..., λk must be entered as real numbers separated by spaces (no commas); fractions (such as 1/2) are not allowed.

Analyze => ANOVA => Linear Models... Preview Code => Insert Code => OK => OK => Run

```
Linear Models
Dependent Variable: errors

Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model
Error
Corrected Total

R-Square   Coeff Var   Root MSE   errors Mean

Source            DF   Type III SS   Mean Square   F Value   Pr > F
drug

Contrast                 DF   Contrast SS   Mean Square   F Value   Pr > F
H1: control vs other
H2: 2 drugs vs 1 drug
H3: drug A vs drug B

Parameter    Estimate   Standard Error   t Value   Pr > |t|
Intercept                                                        B
drug                                                             B

NOTE: The X'X matrix has been found to be singular, and a generalized
inverse was used to solve the normal equations. Terms whose estimates
are followed by the letter 'B' are not uniquely estimable.
```


### T adult = 96 T child = 114.

Homework Solutions Do all tests at the 5% level and quote p-values when possible. When answering each question uses sentences and include the relevant JMP output and plots (do not include the data in your

### KSTAT MINI-MANUAL. Decision Sciences 434 Kellogg Graduate School of Management

KSTAT MINI-MANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To

### Simple Tricks for Using SPSS for Windows

Simple Tricks for Using SPSS for Windows Chapter 14. Follow-up Tests for the Two-Way Factorial ANOVA The Interaction is Not Significant If you have performed a two-way ANOVA using the General Linear Model,

### Notes on Maxwell & Delaney

Notes on Maxwell & Delaney PSY710 5 Chapter 5 - Multiple Comparisons of Means 5.1 Inflation of Type I Error Rate When conducting a statistical test, we typically set α =.05 or α =.01 so that the probability

### LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING

LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in engineering setting.

### ANOVA Analysis of Variance

ANOVA Analysis of Variance What is ANOVA and why do we use it? Can test hypotheses about mean differences between more than 2 samples. Can also make inferences about the effects of several different IVs,

### AP Statistics 2007 Scoring Guidelines

AP Statistics 2007 Scoring Guidelines The College Board: Connecting Students to College Success The College Board is a not-for-profit membership association whose mission is to connect students to college

### Multiple Comparisons. Cohen Chpt 13

Multiple Comparisons Cohen Chpt 13 How many t-tests? We do an experiment, 1 factor, 3 levels (= 3 groups). The ANOVA gives us a significant F-value. What now? 4 levels, 1 factor: how many independent comparisons?

### Introduction to Stata

Introduction to Stata September 23, 2014 Stata is one of a few statistical analysis programs that social scientists use. Stata is in the mid-range of how easy it is to use. Other options include SPSS,

### A) to D) to B) to E) to C) to Rate of hay fever per 1000 population for people under 25

Unit 0 Review #3 Name: Date:. Suppose a random sample of 380 married couples found that 54 had two or more personality preferences in common. In another random sample of 573 married couples, it was found

### Business Statistics. Lecture 8: More Hypothesis Testing

Business Statistics Lecture 8: More Hypothesis Testing 1 Goals for this Lecture Review of t-tests Additional hypothesis tests Two-sample tests Paired tests 2 The Basic Idea of Hypothesis Testing Start

### STT 430/630/ES 760 Lecture Notes: Chapter 6: Hypothesis Testing 1. February 23, 2009 Chapter 6: Introduction to Hypothesis Testing

STT 430/630/ES 760 Lecture Notes: Chapter 6: Hypothesis Testing 1 February 23, 2009 Chapter 6: Introduction to Hypothesis Testing One of the primary uses of statistics is to use data to infer something

### An example ANOVA situation. 1-Way ANOVA. Some notation for ANOVA. Are these differences significant? Example (Treating Blisters)

An example ANOVA situation Example (Treating Blisters) 1-Way ANOVA MATH 143 Department of Mathematics and Statistics Calvin College Subjects: 25 patients with blisters Treatments: Treatment A, Treatment

### SELF-TEST: SIMPLE REGRESSION

ECO 22000 McRAE SELF-TEST: SIMPLE REGRESSION Note: Those questions indicated with an (N) are unlikely to appear in this form on an in-class examination, but you should be able to describe the procedures

### Multivariate ANOVA. Lecture 5 July 26, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2. Lecture #5-7/26/2011 Slide 1 of 47

Multivariate ANOVA Lecture 5 July 26, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Lecture #5-7/26/2011 Slide 1 of 47 Today s Lecture Univariate ANOVA Example Overview Today s

### The t Test. April 3-8, exp (2πσ 2 ) 2σ 2. µ ln L(µ, σ2 x) = 1 σ 2. i=1. 2σ (σ 2 ) 2. ˆσ 2 0 = 1 n. (x i µ 0 ) 2 = i=1

The t Test April 3-8, 008 Again, we begin with independent normal observations X, X n with unknown mean µ and unknown variance σ The likelihood function Lµ, σ x) exp πσ ) n/ σ x i µ) Thus, ˆµ x ln Lµ,

### Two-Sample T-Tests Assuming Equal Variance (Enter Means)

Chapter 4 Two-Sample T-Tests Assuming Equal Variance (Enter Means) Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample t-tests when the variances of

### Linear Regression with One Regressor

Linear Regression with One Regressor Michael Ash Lecture 10 Analogy to the Mean True parameter µ Y β 0 and β 1 Meaning Central tendency Intercept and slope E(Y ) E(Y X ) = β 0 + β 1 X Data Y i (X i, Y

### MANOVA. Lecture 7. October 5, 2005 Multivariate Analysis. Lecture #7-10/5/2005 Slide 1 of 55

Lecture 7 October 5, 2005 Multivariate Analysis Lecture #7-10/5/2005 Slide 1 of 55 Today s Lecture» Today s Lecture Two-Way Two-Way Example One-way. Two-way. Note: Homework #5 due Thursday 10/13 at 5pm.

### Math 62 Statistics Sample Exam Questions

Math 62 Statistics Sample Exam Questions 1. (10) Explain the difference between the distribution of a population and the sampling distribution of a statistic, such as the mean, of a sample randomly selected

### Fisher's least significant difference (LSD) 2. If outcome is do not reject H, then! stop. Otherwise continue to #3.

Fisher's least significant difference (LSD) Procedure: 1. Perform overall test of H : vs. H a :. Á. Á â Á. " # >. œ. œ â œ.! " # > 2. If outcome is do not reject H, then! stop. Otherwise continue to #3.

### MAT X Hypothesis Testing - Part I

MAT 2379 3X Hypothesis Testing - Part I Definition : A hypothesis is a conjecture concerning a value of a population parameter (or the shape of the population). The hypothesis will be tested by evaluating

### Simple Linear Regression Chapter 11

Simple Linear Regression Chapter 11 Rationale Frequently decision-making situations require modeling of relationships among business variables. For instance, the amount of sale of a product may be related

### Two-Sample T-Tests Allowing Unequal Variance (Enter Difference)

Chapter 45 Two-Sample T-Tests Allowing Unequal Variance (Enter Difference) Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample t-tests when no assumption

### Linear constraints in multiple linear regression. Analysis of variance.

Section 16 Linear constraints in multiple linear regression. Analysis of variance. Multiple linear regression with general linear constraints. Let us consider a multiple linear regression Y = X + β and

### The calculations lead to the following values: d 2 = 46, n = 8, s d 2 = 4, s d = 2, SEof d = s d n s d n

EXAMPLE 1: Paired t-test and t-interval DBP Readings by Two Devices The diastolic blood pressures (DBP) of 8 patients were determined using two techniques: the standard method used by medical personnel

### Module 10: Mixed model theory II: Tests and confidence intervals

St@tmaster 02429/MIXED LINEAR MODELS PREPARED BY THE STATISTICS GROUPS AT IMM, DTU AND KU-LIFE Module 10: Mixed model theory II: Tests and confidence intervals 10.1 Notes..................................

### Background to ANOVA. One-Way Analysis of Variance: ANOVA. The One-Way ANOVA Summary Table. Hypothesis Test in One-Way ANOVA

Background to ANOVA One-Way Analysis of Variance: ANOVA Dr. J. Kyle Roberts Southern Methodist University Simmons School of Education and Human Development Department of Teaching and Learning Recall from

### Chapter 16 - Analyses of Variance and Covariance as General Linear Models Eye fixations per line of text for poor, average, and good readers:

Chapter 6 - Analyses of Variance and Covariance as General Linear Models 6. Eye fixations per line of text for poor, average, and good readers: a. Design matrix, using only the first subject in each group:

### Statistical Consulting Topics. MANOVA: Multivariate ANOVA

Statistical Consulting Topics MANOVA: Multivariate ANOVA Predictors are still factors, but we have more than one continuous-variable response on each experimental unit. For example, y i = (y i1, y i2 ).

### Power and Sample Size Determination

Power and Sample Size Determination Bret Hanlon and Bret Larget Department of Statistics University of Wisconsin Madison November 3 8, 2011 Power 1 / 31 Experimental Design To this point in the semester,

### Multiple Hypothesis Testing: The F-test

Multiple Hypothesis Testing: The F-test Matt Blackwell December 3, 2008 1 A bit of review When moving into the matrix version of linear regression, it is easy to lose sight of the big picture and get lost

### Hypothesis Testing. Dr. Bob Gee Dean Scott Bonney Professor William G. Journigan American Meridian University

Hypothesis Testing Dr. Bob Gee Dean Scott Bonney Professor William G. Journigan American Meridian University 1 AMU / Bon-Tech, LLC, Journi-Tech Corporation Copyright 2015 Learning Objectives Upon successful

### Descriptive Statistics

Descriptive Statistics Primer Descriptive statistics Central tendency Variation Relative position Relationships Calculating descriptive statistics Descriptive Statistics Purpose to describe or summarize

Chapter 12 Sample Size and Power Calculations Chapter Table of Contents Introduction...253 Hypothesis Testing...255 Confidence Intervals...260 Equivalence Tests...264 One-Way ANOVA...269 Power Computation

### One-Way Analysis of Variance (ANOVA) Example Problem

One-Way Analysis of Variance (ANOVA) Example Problem Introduction Analysis of Variance (ANOVA) is a hypothesis-testing technique used to test the equality of two or more population (or treatment) means

### Multiple Regression Analysis in Minitab 1

Multiple Regression Analysis in Minitab 1 Suppose we are interested in how the exercise and body mass index affect the blood pressure. A random sample of 10 males 50 years of age is selected and their
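The contrast test described above can be sketched numerically. This is a minimal illustration in Python with NumPy (not part of the lecture); the error counts below are invented stand-ins, since the lecture's data table is not reproduced here. Because σ² is estimated by MSW rather than known, the standardized contrast follows a t distribution with N − k degrees of freedom instead of being exactly N(0, 1).

```python
# Hypothetical illustration of the planned contrast
# H0(1): mu1 - (mu2 + mu3 + mu4)/3 = 0, i.e. lambda = (1, -1/3, -1/3, -1/3).
# The error counts are made up for demonstration purposes only.
import numpy as np

groups = [
    np.array([1, 2, 2, 3, 1, 2, 3, 2]),  # A1: control, no drug
    np.array([4, 5, 4, 6, 5, 4, 5, 6]),  # A2: drug A
    np.array([5, 4, 6, 5, 6, 5, 4, 5]),  # A3: drug B
    np.array([8, 9, 7, 8, 9, 8, 7, 9]),  # A4: both drugs
]
lam = np.array([1.0, -1/3, -1/3, -1/3])  # contrast coefficients, sum to 0

means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
N, k = ns.sum(), len(groups)

# Sample contrast L = sum(lambda_i * xbar_i)
L = lam @ means

# Estimate sigma^2 by the pooled within-group variance MSW
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
msw = ssw / (N - k)

# Standard error of L: sqrt(MSW * sum(lambda_i^2 / n_i));
# L / se is t-distributed with N - k degrees of freedom under H0
se = np.sqrt(msw * (lam ** 2 / ns).sum())
t_stat = L / se

print(f"L = {L:.3f}, SE = {se:.3f}, t = {t_stat:.3f}, df = {N - k}")
```

With these invented numbers the control mean is well below the average of the three drug-group means, so L is strongly negative and H0(1) would be rejected; with real data the same code applies once the group arrays are replaced.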