4.4. Further Analysis within ANOVA


1) Estimation of the effects

Fixed effects model: the effect α_i = µ_i − µ is estimated by a_i = x̄_i − x̄ if H_0: µ_1 = µ_2 = ... = µ_k is rejected.

Random effects model: if H_0: σ_a² = 0 is rejected, then we estimate the variability σ_a² among the population means by

    s_A² = (MSB − MSW) / n_0   with   n_0 = (N² − Σ_{i=1}^k n_i²) / (N(k − 1)),

where N = n_1 + n_2 + ... + n_k, and n_0 = n in the special case of equal sample sizes n_i = n in all groups.
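The variance-component estimate above can be sketched in a few lines of Python; the MSB/MSW values used in the demonstration are hypothetical illustration numbers, not taken from the course data:

```python
# Sketch: estimating the between-groups variance component s_A^2
# in a random effects one-way ANOVA.

def n0(sizes):
    """Effective group size n_0 = (N^2 - sum n_i^2) / (N (k - 1))."""
    N = sum(sizes)
    k = len(sizes)
    return (N**2 - sum(n * n for n in sizes)) / (N * (k - 1))

def var_between(msb, msw, sizes):
    """Variance component estimate s_A^2 = (MSB - MSW) / n_0."""
    return (msb - msw) / n0(sizes)

# For equal group sizes, n_0 reduces to the common group size n:
print(n0([8, 8, 8, 8]))                       # -> 8.0
# Hypothetical mean squares: s_A^2 = (40 - 10) / 8 = 3.75
print(var_between(40.0, 10.0, [8, 8, 8, 8]))  # -> 3.75
```

Note that with unequal group sizes n_0 falls strictly between the smallest and largest n_i.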
2) Planned Comparisons: Contrasts

The hypothetical data below show errors in a test made by subjects under the influence of two drugs, in 4 groups of 8 subjects each. Group A1 is a control group, not given any drug. The other groups are experimental groups: group A2 gets drug A, group A3 gets drug B, and group A4 gets both drugs.

(The data table, shown as an image in the original slides, is not reproduced in this transcription.)
Suppose we are really interested in answering specific questions such as:

1. On the average, do drugs have any effect on learning at all?
2. Do subjects make more errors if given both drugs than if given only one?
3. Do the two drugs differ in the number of errors they produce?

All these questions can be formulated as null hypotheses of the form

    H_0: λ_1 µ_1 + λ_2 µ_2 + ... + λ_k µ_k = 0,   where   λ_1 + λ_2 + ... + λ_k = 0.

For example, the first question asks whether the mean of group A1, µ_1, differs from the average of the means for the groups A2, A3, and A4, (µ_2 + µ_3 + µ_4)/3. That is, we wish to test the null hypothesis

    H_0(1): µ_1 − (1/3)µ_2 − (1/3)µ_3 − (1/3)µ_4 = 0.
A contrast is a combination of population means of the form ψ = Σ λ_i µ_i, where Σ λ_i = 0. The corresponding sample contrast is

    L = Σ λ_i x̄_i.

In our example:

    L_1 = X̄_1 − (1/3)X̄_2 − (1/3)X̄_3 − (1/3)X̄_4.

Now, since each individual observation X_ij is distributed as N(µ_i, σ²) and all observations are independent of each other, each sample mean X̄_i is distributed as N(µ_i, σ²/n_i), such that

    L ~ N( Σ λ_i µ_i , σ² Σ λ_i²/n_i ),

which implies that under the null hypothesis

    L / ( σ √(Σ λ_i²/n_i) ) ~ N(0, 1).
Now, recalling that the square of a standard normally distributed random variable is always χ²-distributed with df = 1, we obtain, denoting Q = L²/(Σ λ_i²/n_i):

    Q/σ² ~ χ²(1).

Furthermore, using SSW/σ² ~ χ²(N − k), we obtain that

    Q/MSW ~ F(1, N − k),

the square root of which has a t-distribution, that is:

    t = L/s(L) ~ t(N − k),

where

    s(L) = s √(Σ λ_i²/n_i)   with   s = √MSW,

which simplifies to

    s(L) = s √(Σ λ_i²/n)   for equal sample sizes n.

s(L) is called the standard error of the contrast.
This suggests testing the null hypothesis H_0: λ_1 µ_1 + λ_2 µ_2 + ... + λ_k µ_k = 0 with a conventional t-test of the form

    t = L/s(L) ~ t(N − k).

Example: (continued.)

    L = X̄_1 − (1/3)X̄_2 − (1/3)X̄_3 − (1/3)X̄_4 = −4.167,

    s(L) = √( MSW (1 + 3/9) / 8 ) = 1.109,

such that t = −4.167/1.109 = −3.76. The associated p-value in a two-sided test is TDIST(3.76;28;2) = 0.08%, and in a one-sided test against H_1: µ_1 < (µ_2 + µ_3 + µ_4)/3 it is 0.04%, which we regard as clear statistical evidence that the drugs considered increase error rates.
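The contrast t-statistic can be computed directly from the group means, MSW, and the coefficients. A minimal sketch, with made-up group means and MSW rather than the course data:

```python
import math

def contrast_t(lams, means, msw, sizes):
    """Return (L, s(L), t) for a contrast with coefficients lams."""
    L = sum(l * m for l, m in zip(lams, means))
    sL = math.sqrt(msw * sum(l * l / n for l, n in zip(lams, sizes)))
    return L, sL, L / sL

# H0(1): mu_1 - (mu_2 + mu_3 + mu_4)/3 = 0; hypothetical means and MSW.
lams = [1, -1/3, -1/3, -1/3]
means = [10.0, 14.0, 15.0, 13.0]   # hypothetical illustration values
L, sL, t = contrast_t(lams, means, msw=8.0, sizes=[8, 8, 8, 8])
print(L)    # -> -4.0 (control mean minus the average of the others)
print(t)    # L / s(L), to be compared against t(N - k)
```

The p-value would then come from a t-distribution with N − k degrees of freedom, exactly as the slide's TDIST call does.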
Alternatively, we may test the null hypothesis H_0: λ_1 µ_1 + λ_2 µ_2 + ... + λ_k µ_k = 0 with an F-test of the form

    F = Q/MSW = t² ~ F(1, N − k),   where   Q = L²/(Σ λ_i²/n_i),

which simplifies to

    Q = n L²/Σ λ_i²

in the case of identical sample sizes n_i = n.

Example: (continued.)

    Q = n L²/Σ λ_i² = 8 · (4.167)²/(4/3) = 104.2,   F = Q/MSW = 14.1.

The associated p-value may be found in Excel as FDIST(14.1;1;28) = 0.08%, the same as before.
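The equivalence F = t² can be checked against the example's surviving numbers (|L| = 4.167, Σλ_i² = 4/3, n = 8, t = 3.76 in absolute value):

```python
# Check the contrast F-statistic against the worked example:
# Q = n L^2 / sum(lambda_i^2), and F = t^2.
n = 8
L = 4.167                       # |L| from the worked example
sum_lam2 = 1 + 3 * (1 / 3)**2   # = 4/3 for (1, -1/3, -1/3, -1/3)
Q = n * L**2 / sum_lam2
print(Q)          # close to the 104.2 reported on the slide
print(3.76**2)    # close to the F value of 14.1 reported on the slide
```

The tiny discrepancies come only from the rounding of L and t to three and two decimals.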
The remaining questions may be tackled in an analogous way:

2. Do subjects make more errors if given both drugs than if given only one?
   H_0(2): µ_4 − (1/2)(µ_2 + µ_3) = 0

3. Do the two drugs differ in the number of errors they produce?
   H_0(3): µ_2 − µ_3 = 0

The computational steps in conducting the tests are summarized in the tables below (the group means X̄_i and the numerical test results were lost in this transcription):

              A1      A2      A3      A4     Σ λ_i²
    λ_i(1)     1    −1/3    −1/3    −1/3      4/3
    λ_i(2)     0    −1/2    −1/2      1       3/2
    λ_i(3)     0      1      −1       0        2

              L    s(L)    t    Q    F    p
    H_0(1)
    H_0(2)
    H_0(3)

Note that the F-test Q/MSW ~ F(1, N − k) results in the same p-value as the two-sided t-test L/s(L) ~ t(N − k), and that t² = F.
Orthogonal hypotheses

The 3 hypotheses in the preceding section were special in the sense that the truth of each null hypothesis is unrelated to the truth of any other. If H_0(1) is false, we know that drugs have some effect upon errors, although H_0(1) says nothing about whether the effect of taking both drugs simultaneously is different from the average of the effects of the drugs taken separately. Similarly, if H_0(2) is false, we know that the effect of taking both drugs is different from taking only one, but we don't know which of the two drugs taken individually is more effective.

A set of contrasts such that the truth of any one of them is unrelated to the truth of any other is called orthogonal. For equal sample sizes in each group, the orthogonality of two contrasts L_1 = Σ λ_1i x̄_i and L_2 = Σ λ_2i x̄_i may be assessed by checking the orthogonality condition

    Σ λ_1i λ_2i = 0.

(For unequal sample sizes the condition is Σ λ_1i λ_2i / n_i = 0, which reduces to the above when the n_i are constant.)
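The orthogonality condition is easy to check mechanically; a small sketch using the contrast coefficients from this example:

```python
def orthogonal(l1, l2, sizes=None):
    """Check the orthogonality condition for two contrasts.
    Equal group sizes: sum(l1*l2) = 0; unequal: sum(l1*l2/n_i) = 0."""
    if sizes is None:
        return abs(sum(a * b for a, b in zip(l1, l2))) < 1e-12
    return abs(sum(a * b / n for a, b, n in zip(l1, l2, sizes))) < 1e-12

lam1 = [1, -1/3, -1/3, -1/3]   # H0(1)
lam2 = [0, -1/2, -1/2, 1]      # H0(2)
lam3 = [0, 1, -1, 0]           # H0(3)
lam4 = [-1, 0, 0, 1]           # H0(4): mu_4 - mu_1 = 0

# The planned set is pairwise orthogonal:
print(orthogonal(lam1, lam2), orthogonal(lam1, lam3), orthogonal(lam2, lam3))
# The pair H0(1), H0(4) discussed later is not:
print(orthogonal(lam1, lam4))   # -> False
```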
In an ANOVA with k groups, no more than k − 1 orthogonal contrasts can be tested. Such a set of k − 1 orthogonal contrasts is, however, not unique. In practice the researcher will select a set of orthogonal contrasts such that those contrasts of particular interest in the research question are included. Such a set exploits all available information that can be extracted for answering the maximum number of k − 1 independent questions that can be asked in the form of contrasts. Once such a set is found, it has the property that the sums of squares for the individual contrasts, Q = L²/(Σ λ_i²/n_i), add up to the sum of squares for the model:

    Q_1 + Q_2 + ... + Q_{k−1} = SSB.

In our example, Q_1 + Q_2 + Q_3 reproduces SSB from the ANOVA table; any difference is only due to rounding error.
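The decomposition Q_1 + ... + Q_{k−1} = SSB can be verified numerically on a small synthetic data set (invented numbers, equal group sizes, using the orthogonal contrast set from this section):

```python
# Verify that the contrast sums of squares of a full orthogonal set
# add up to the between-groups sum of squares SSB.
groups = [[1, 3], [2, 6], [5, 7], [8, 10]]   # invented data, n = 2 per group
n = len(groups[0])
means = [sum(g) / n for g in groups]
grand = sum(means) / len(means)

ssb = n * sum((m - grand) ** 2 for m in means)

def Q(lams):
    """Contrast sum of squares Q = L^2 / (sum lambda_i^2 / n)."""
    L = sum(l * m for l, m in zip(lams, means))
    return L ** 2 / sum(l * l / n for l in lams)

contrasts = [[1, -1/3, -1/3, -1/3], [0, -1/2, -1/2, 1], [0, 1, -1, 0]]
total = sum(Q(lams) for lams in contrasts)
print(ssb, total)   # -> 53.5 53.5 (up to floating-point noise)
```

The identity is exact for equal group sizes and any full orthogonal set; with rounding to a few decimals, as on the slides, small discrepancies appear.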
To illustrate the importance of orthogonal hypotheses, assume that instead of testing the orthogonal hypotheses H_0(1)–H_0(3) we would have tested the original hypothesis

    H_0(1): µ_1 − (1/3)µ_2 − (1/3)µ_3 − (1/3)µ_4 = 0

together with

    H_0(4): µ_4 − µ_1 = 0,

that is, that error rates are the same when taking both drugs or taking no drugs, upon the data below (not reproduced in this transcription):
The two hypotheses are not orthogonal:

              A1      A2      A3      A4
    λ_i(1)     1    −1/3    −1/3    −1/3
    λ_i(4)    −1      0       0       1

(Σ λ_i(1) λ_i(4) = −1 − 1/3 = −4/3 ≠ 0.)

We may therefore get conflicting results from the two hypothesis tests (the numerical entries for L, Q, F, and p were lost in this transcription). In this example, we safely accept H_0(1) that taking drugs has no impact on error rates, while we strongly reject H_0(4) that taking both drugs has no impact.
Constructing Orthogonal Tests

If all sample sizes are equal (n_i = n = const.), orthogonal tests may be constructed based on the principle that a test on the differences among a given set of means is orthogonal to any test involving their average. Consider our original set of hypotheses:

    H_0(1): µ_1 − (1/3)(µ_2 + µ_3 + µ_4) = 0
    H_0(2): µ_4 − (1/2)(µ_2 + µ_3) = 0
    H_0(3): µ_2 − µ_3 = 0

H_0(1) involves the average of µ_2, µ_3, and µ_4, while H_0(2) and H_0(3) involve differences among them. Similarly, H_0(2) involves the average of µ_2 and µ_3, while H_0(3) involves the difference between them.
A Word of Caution

Contrasts are a special form of planned comparisons; that is, the hypotheses with the contrasts to be tested must be set up before taking a look at the data, otherwise the associated p-values will not be valid.

The situation is similar to the choice between applying a one-sided or a two-sided test. Assume you want to test whether the means in two samples are the same using the conventional significance level of α = 5%. You apply a two-sided test and get a p-value of 8%, so you may not reject. Say that, as a step in your calculations, you figured out that x̄_1 > x̄_2. You may be tempted then to replace your original two-sided test by a one-sided test against H_1: µ_1 > µ_2, which, technically, would allow you to divide your p-value by 2 and get a significant result at 4%. But that p-value of 4% is fraud, because the reason that it is only one half of the two-sided p-value is exactly that, without looking at the data, you also had a 50% chance that the sample means would have come out as x̄_1 < x̄_2.
3) Multiple Comparisons

As seen above, contrasts must be formulated in advance of the analysis in order for the p-values to be valid. Multiple-comparisons procedures, on the other hand, are designed for testing effects suggested by the data after the ANOVA F-test has led to a rejection of the hypothesis that all population means are equal.

All multiple-comparisons methods we will discuss consist of building t-ratios of the form

    t_ij = (x̄_i − x̄_j) / SE_ij,

or, equivalently, setting up confidence intervals of the form

    [ (x̄_i − x̄_j) ± t* · SE_ij ]

for all C(k,2) = k(k−1)/2 pairs that can be built within the k groups. The methods differ in the calculation of the standard errors SE_ij and in the distributions and critical levels used in the determination of t*.
Fisher's LSD (least significant difference)

One obvious choice is to extend the 2-independent-samples t-test to k independent samples with test statistic

    t_ij = (x̄_i − x̄_j) / ( s √(1/n_i + 1/n_j) ),   where   s = √MSW,

and to declare µ_i and µ_j different whenever |t_ij| > t_{α/2}(N − k), or, equivalently, whenever

    0 ∉ [ (x̄_i − x̄_j) ± t_{α/2}(N − k) · s √(1/n_i + 1/n_j) ].

This procedure fixes the probability of a false rejection for each single pair of means being compared at α. This is a problem if the number of means being compared is large. For example, if we use LSD with a significance level of α = 5% to compare k = 20 means, then there are k(k−1)/2 = 190 pairs of means and we expect 5% · 190 = 9.5 false rejections!
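The expected number of false rejections scales with the number of pairs, as a quick computation shows:

```python
# Expected number of false rejections when applying unadjusted pairwise
# LSD tests to k group means at significance level alpha, all nulls true.
k, alpha = 20, 0.05
pairs = k * (k - 1) // 2
print(pairs)           # -> 190 pairwise comparisons
print(alpha * pairs)   # -> 9.5 expected false rejections
```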
Bonferroni Method

The Bonferroni method uses the same test statistic as Fisher's LSD, but replaces the significance level α for each single pair with

    α' = α / C(k,2),

or, equivalently, the original p-value with

    p' = C(k,2) · p,

as a conservative estimate of the probability that any false rejection among all k(k−1)/2 comparisons will occur. An obvious disadvantage is that the test becomes weak when k is large.

Example: Comparison of A1 and A2 (original data):

    t_ij = (x̄_1 − x̄_2) / ( s √(1/n_1 + 1/n_2) ) = 2.668,   p_LSD = TDIST(2.668; 28; 2) = 1.25%.

Bonferroni: p_B = C(4,2) · p_LSD = 6 · 1.25% = 7.5%.
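The adjustment for the A1 vs A2 comparison is a one-liner (capping the adjusted p-value at 1, as is conventional):

```python
# Bonferroni adjustment for the A1 vs A2 comparison (k = 4 groups).
from math import comb

k = 4
p_lsd = 0.0125                       # unadjusted p-value from the slide
p_bonf = min(1.0, comb(k, 2) * p_lsd)
print(comb(k, 2))   # -> 6 pairwise comparisons
print(p_bonf)       # -> 0.075, i.e. 7.5%: no longer significant at 5%
```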
Tukey's HSD (honestly significant difference)

Tukey's method for multiple comparisons delivers the most precise estimate of the probability that any false rejection among all paired comparisons will occur. In its original form it is only applicable in the case of equal sample sizes in each group (n_i = n), but corrections for unequal sample sizes have been suggested and are implemented in SAS EG. Tukey's honestly significant difference is

    HSD = q_α(k, N − k) · s/√n,

where q_α denotes the α-critical value of the so-called Studentized range distribution, tabulated e.g. in table 6 of Aczel, and s = √MSW, as usual. Two group means µ_i and µ_j are declared different at significance level α if

    0 ∉ [ (x̄_i − x̄_j) ± HSD ],

or, equivalently, if

    |x̄_i − x̄_j| / (s/√n) > q_α(k, N − k).
Example: Comparison of A1 and A2 (continued).

LSD and Bonferroni gave conflicting results as to whether the difference between µ_1 and µ_2 is significant at 5% or not (recall p_LSD = 1.25% and p_B = 7.5%), because LSD underestimates the probability of any false rejection among all paired comparisons, whereas Bonferroni overestimates it. Applying Tukey's procedure yields

    |x̄_1 − x̄_2| / (s/√8) = 3.77.

Looking up a table of critical values for the Studentized range distribution reveals that q_0.05(k, N − k) = q_0.05(4; 28) = 3.86, which is larger than the statistic calculated above. The difference between µ_1 and µ_2 is therefore not significant at the 5% level if the test was first suggested by the data.

Note: A planned comparison set up before looking at the data in the form of a contrast would have found a significant difference with a p-value of 1.25% (= p_LSD).
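Instead of a printed table, the Studentized-range critical value can be obtained from SciPy (assuming SciPy ≥ 1.7, which provides scipy.stats.studentized_range). Note also that, for equal group sizes, the Tukey statistic is simply the LSD t-statistic scaled by √2:

```python
import math
from scipy.stats import studentized_range

k, df, alpha = 4, 28, 0.05
q_crit = studentized_range.ppf(1 - alpha, k, df)
print(round(q_crit, 2))   # -> 3.86, matching the table value

# (x_i - x_j)/(s/sqrt(n)) = t_ij * sqrt(2) for equal group sizes:
t_lsd = 2.668
q_stat = t_lsd * math.sqrt(2)
print(round(q_stat, 2))   # -> 3.77, below 3.86: not significant
```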
Simultaneous Confidence Intervals

The idea with multiple comparisons in post-hoc tests is usually to determine which groups may be combined into larger groups that are homogeneous in the sense that their group means are not significantly different. For that purpose, one constructs confidence intervals of the form

    [ (x̄_i − x̄_j) ± smallest significant difference ],

where the smallest significant differences are:

    LSD:         t_{α/2}(N − k) · s √(1/n_i + 1/n_j)
    Bonferroni:  t_{α/(2·C(k,2))}(N − k) · s √(1/n_i + 1/n_j)
    HSD:         q_α(k, N − k) · s/√n

Combinations of groups are considered homogeneous at confidence level (1 − α) if none of their paired comparisons lies outside the corresponding confidence intervals; that is, pairs of means whose confidence intervals include the value 0 will not be declared significantly different, and vice versa.
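For the example's layout (k = 4, N − k = 28, n = 8) the three smallest significant differences can be compared directly; s is set to 1 so that only the multipliers are shown (SciPy is assumed for the t and Studentized-range quantiles):

```python
import math
from math import comb
from scipy.stats import t, studentized_range

k, df, n, alpha, s = 4, 28, 8, 0.05, 1.0   # s = sqrt(MSW), set to 1 here

lsd  = t.ppf(1 - alpha / 2, df) * s * math.sqrt(1/n + 1/n)
bonf = t.ppf(1 - alpha / (2 * comb(k, 2)), df) * s * math.sqrt(1/n + 1/n)
hsd  = studentized_range.ppf(1 - alpha, k, df) * s / math.sqrt(n)

print(lsd, hsd, bonf)
# LSD is the most liberal, Bonferroni the most conservative,
# with Tukey's HSD in between:
print(lsd < hsd < bonf)   # -> True
```

This ordering is the numerical counterpart of the conflict seen in the example: LSD rejects, Bonferroni does not, and Tukey sits between the two.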
Homogeneous Subgroups in SAS EG

If your only reason for doing multiple comparisons is to identify homogeneous groups in the sense that their group means are not significantly different, then the easiest way to do so is in the Means/Comparison tab of the One-Way ANOVA procedure. This will not provide you with p-values for the differences in means, though, and it will not allow you to define your own contrasts. Below is some output for Tukey's HSD and Fisher's LSD (numerical values lost in this transcription):

One-Way Analysis of Variance Results: Tukey's Studentized Range (HSD) Test for errors.
NOTE: This test controls the Type I experimentwise error rate, but it generally has a higher Type II error rate than REGWQ.
Reported: Alpha 0.05, Error Degrees of Freedom 28, Error Mean Square, Critical Value of Studentized Range, Minimum Significant Difference; Tukey Grouping of the drug means by letter, where means with the same letter are not significantly different.

One-Way Analysis of Variance Results: t Tests (LSD) for errors.
NOTE: This test controls the Type I comparisonwise error rate, not the experimentwise error rate.
Reported: Alpha 0.05, Error Degrees of Freedom 28, Error Mean Square, Critical Value of t, Least Significant Difference; t Grouping of the drug means by letter, where means with the same letter are not significantly different.
Multiple Comparisons in SAS EG

Post-hoc tests in SAS EG may be found in Analyze/ANOVA/Linear Models. For the overall ANOVA F-test (the same as under Analyze/ANOVA/One-Way ANOVA), select the dependent variable (e.g. errors) and the classification variable (e.g. drug) under Task Roles. You will also have to reselect your classification variable as Main Effect under Model. To perform the multiple comparisons, push Add under Effects to estimate within Post Hoc Tests/Least Squares. This will open four drop-down menus under Options for means tests. Select Yes under Class effects to use, All pairwise differences for Show p-values for differences, and True for Show confidence limits. The option No adjustment under Adjustment method for comparison stands for LSD. Homogeneous subgroups in our example are (A1,A3) and (A2,A3) under LSD, and (A1,A2,A3), (A2,A4) under Tukey and Bonferroni.
Analyze => ANOVA => Linear Models
Linear Models: Least Squares Means output (unadjusted, i.e. LSD): a table of errors LSMEANs by drug with LSMEAN numbers, the matrix of Pr > |t| for H_0: LSMean(i) = LSMean(j), 95% confidence limits for each LSMEAN, and the differences between means with 95% confidence limits for LSMean(i) − LSMean(j). (Numerical values lost in this transcription.)

Linear Models: Least Squares Means, Adjustment for Multiple Comparisons: Bonferroni — the same tables with Bonferroni-adjusted p-values and simultaneous 95% confidence limits for LSMean(i) − LSMean(j).

Linear Models: Least Squares Means, Adjustment for Multiple Comparisons: Tukey — the same tables with Tukey-adjusted p-values and simultaneous 95% confidence limits for LSMean(i) − LSMean(j).
Contrasts in SAS EG

Calculating contrasts in SAS EG requires a (tiny) little bit of programming. Proceed as if you were going to perform some multiple comparisons. Then, after having made the same choices as described on the preceding pages, press the Preview code button in the lower left corner of whichever tab you are in. Next, push the Insert Code button in the upper left corner of the Code preview for Task window. This will bring up the program code generated by SAS EG in such a way that it indicates places to insert the user's own code. Double-click on the space immediately above the line beginning with the command LSMEANS <independent variable's name>. This will bring up a window where you can enter your own code. The syntax to be entered is:

    Contrast '<table text>' <indep. variable> λ_1 λ_2 ... λ_k;

The coefficients λ_1, λ_2, ..., λ_k must be entered as real numbers separated by spaces (no commas); fractions (such as 1/2) are not allowed.
Analyze => ANOVA => Linear Models... Preview Code => Insert Code => OK => OK => Run
Linear Models output (dependent variable: errors), numerical values lost in this transcription:

    Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
    Model
    Error
    Corrected Total

    R-Square   Coeff Var   Root MSE   errors Mean

    Source   DF   Type III SS   Mean Square   F Value   Pr > F
    drug

    Contrast                 DF   Contrast SS   Mean Square   F Value   Pr > F
    H1: control vs other
    H2: 2 drugs vs 1 drug
    H3: drug A vs drug B

    Parameter   Estimate   Standard Error   t Value   Pr > |t|
    Intercept, drug (one row per level)

NOTE: The X'X matrix has been found to be singular, and a generalized inverse was used to solve the normal equations. Terms whose estimates are followed by the letter 'B' are not uniquely estimable.
More informationANOVA MULTIPLE CHOICE QUESTIONS. In the following multiplechoice questions, select the best answer.
ANOVA MULTIPLE CHOICE QUESTIONS In the following multiplechoice questions, select the best answer. 1. Analysis of variance is a statistical method of comparing the of several populations. a. standard
More informationHypothesis testing  Steps
Hypothesis testing  Steps Steps to do a twotailed test of the hypothesis that β 1 0: 1. Set up the hypotheses: H 0 : β 1 = 0 H a : β 1 0. 2. Compute the test statistic: t = b 1 0 Std. error of b 1 =
More informationT adult = 96 T child = 114.
Homework Solutions Do all tests at the 5% level and quote pvalues when possible. When answering each question uses sentences and include the relevant JMP output and plots (do not include the data in your
More informationKSTAT MINIMANUAL. Decision Sciences 434 Kellogg Graduate School of Management
KSTAT MINIMANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To
More informationSimple Tricks for Using SPSS for Windows
Simple Tricks for Using SPSS for Windows Chapter 14. Followup Tests for the TwoWay Factorial ANOVA The Interaction is Not Significant If you have performed a twoway ANOVA using the General Linear Model,
More informationNotes on Maxwell & Delaney
Notes on Maxwell & Delaney PSY710 5 Chapter 5  Multiple Comparisons of Means 5.1 Inflation of Type I Error Rate When conducting a statistical test, we typically set α =.05 or α =.01 so that the probability
More informationLAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING
LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in engineering setting.
More informationANOVA Analysis of Variance
ANOVA Analysis of Variance What is ANOVA and why do we use it? Can test hypotheses about mean differences between more than 2 samples. Can also make inferences about the effects of several different IVs,
More informationAP Statistics 2007 Scoring Guidelines
AP Statistics 2007 Scoring Guidelines The College Board: Connecting Students to College Success The College Board is a notforprofit membership association whose mission is to connect students to college
More informationMultiple Comparisons. Cohen Chpt 13
Multiple Comparisons Cohen Chpt 13 How many ttests? We do an experiment, 1 factor, 3 levels (= 3 groups). The ANOVA gives us a significant Fvalue. What now? 4 levels, 1 factor: how many independent comparisons?
More informationIntroduction to Stata
Introduction to Stata September 23, 2014 Stata is one of a few statistical analysis programs that social scientists use. Stata is in the midrange of how easy it is to use. Other options include SPSS,
More informationA) to D) to B) to E) to C) to Rate of hay fever per 1000 population for people under 25
Unit 0 Review #3 Name: Date:. Suppose a random sample of 380 married couples found that 54 had two or more personality preferences in common. In another random sample of 573 married couples, it was found
More informationBusiness Statistics. Lecture 8: More Hypothesis Testing
Business Statistics Lecture 8: More Hypothesis Testing 1 Goals for this Lecture Review of ttests Additional hypothesis tests Twosample tests Paired tests 2 The Basic Idea of Hypothesis Testing Start
More informationSTT 430/630/ES 760 Lecture Notes: Chapter 6: Hypothesis Testing 1. February 23, 2009 Chapter 6: Introduction to Hypothesis Testing
STT 430/630/ES 760 Lecture Notes: Chapter 6: Hypothesis Testing 1 February 23, 2009 Chapter 6: Introduction to Hypothesis Testing One of the primary uses of statistics is to use data to infer something
More informationAn example ANOVA situation. 1Way ANOVA. Some notation for ANOVA. Are these differences significant? Example (Treating Blisters)
An example ANOVA situation Example (Treating Blisters) 1Way ANOVA MATH 143 Department of Mathematics and Statistics Calvin College Subjects: 25 patients with blisters Treatments: Treatment A, Treatment
More informationSELFTEST: SIMPLE REGRESSION
ECO 22000 McRAE SELFTEST: SIMPLE REGRESSION Note: Those questions indicated with an (N) are unlikely to appear in this form on an inclass examination, but you should be able to describe the procedures
More informationMultivariate ANOVA. Lecture 5 July 26, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2. Lecture #57/26/2011 Slide 1 of 47
Multivariate ANOVA Lecture 5 July 26, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Lecture #57/26/2011 Slide 1 of 47 Today s Lecture Univariate ANOVA Example Overview Today s
More informationThe t Test. April 38, exp (2πσ 2 ) 2σ 2. µ ln L(µ, σ2 x) = 1 σ 2. i=1. 2σ (σ 2 ) 2. ˆσ 2 0 = 1 n. (x i µ 0 ) 2 = i=1
The t Test April 38, 008 Again, we begin with independent normal observations X, X n with unknown mean µ and unknown variance σ The likelihood function Lµ, σ x) exp πσ ) n/ σ x i µ) Thus, ˆµ x ln Lµ,
More informationTwoSample TTests Assuming Equal Variance (Enter Means)
Chapter 4 TwoSample TTests Assuming Equal Variance (Enter Means) Introduction This procedure provides sample size and power calculations for one or twosided twosample ttests when the variances of
More informationLinear Regression with One Regressor
Linear Regression with One Regressor Michael Ash Lecture 10 Analogy to the Mean True parameter µ Y β 0 and β 1 Meaning Central tendency Intercept and slope E(Y ) E(Y X ) = β 0 + β 1 X Data Y i (X i, Y
More informationMANOVA. Lecture 7. October 5, 2005 Multivariate Analysis. Lecture #710/5/2005 Slide 1 of 55
Lecture 7 October 5, 2005 Multivariate Analysis Lecture #710/5/2005 Slide 1 of 55 Today s Lecture» Today s Lecture TwoWay TwoWay Example Oneway. Twoway. Note: Homework #5 due Thursday 10/13 at 5pm.
More informationMath 62 Statistics Sample Exam Questions
Math 62 Statistics Sample Exam Questions 1. (10) Explain the difference between the distribution of a population and the sampling distribution of a statistic, such as the mean, of a sample randomly selected
More informationBASICS: USING THE TI83 CALCULATOR KENKEL NOTES:
BASICS: USING THE TI83 CALCULATOR KENKEL NOTES: LINK FOR VARIOUS TI CALCULATOR HANDBOOKS: http://education.ti.com/educationportal/appsdelivery/download/ download_select_product.jsp?cid=us&displaymode=g&contentpaneid=17
More informationFisher's least significant difference (LSD) 2. If outcome is do not reject H, then! stop. Otherwise continue to #3.
Fisher's least significant difference (LSD) Procedure: 1. Perform overall test of H : vs. H a :. Á. Á â Á. " # >. œ. œ â œ.! " # > 2. If outcome is do not reject H, then! stop. Otherwise continue to #3.
More informationMAT X Hypothesis Testing  Part I
MAT 2379 3X Hypothesis Testing  Part I Definition : A hypothesis is a conjecture concerning a value of a population parameter (or the shape of the population). The hypothesis will be tested by evaluating
More informationSimple Linear Regression Chapter 11
Simple Linear Regression Chapter 11 Rationale Frequently decisionmaking situations require modeling of relationships among business variables. For instance, the amount of sale of a product may be related
More informationTwoSample TTests Allowing Unequal Variance (Enter Difference)
Chapter 45 TwoSample TTests Allowing Unequal Variance (Enter Difference) Introduction This procedure provides sample size and power calculations for one or twosided twosample ttests when no assumption
More informationLinear constraints in multiple linear regression. Analysis of variance.
Section 16 Linear constraints in multiple linear regression. Analysis of variance. Multiple linear regression with general linear constraints. Let us consider a multiple linear regression Y = X + β and
More informationThe calculations lead to the following values: d 2 = 46, n = 8, s d 2 = 4, s d = 2, SEof d = s d n s d n
EXAMPLE 1: Paired ttest and tinterval DBP Readings by Two Devices The diastolic blood pressures (DBP) of 8 patients were determined using two techniques: the standard method used by medical personnel
More informationModule 10: Mixed model theory II: Tests and confidence intervals
St@tmaster 02429/MIXED LINEAR MODELS PREPARED BY THE STATISTICS GROUPS AT IMM, DTU AND KULIFE Module 10: Mixed model theory II: Tests and confidence intervals 10.1 Notes..................................
More informationBackground to ANOVA. OneWay Analysis of Variance: ANOVA. The OneWay ANOVA Summary Table. Hypothesis Test in OneWay ANOVA
Background to ANOVA OneWay Analysis of Variance: ANOVA Dr. J. Kyle Roberts Southern Methodist University Simmons School of Education and Human Development Department of Teaching and Learning Recall from
More informationChapter 16  Analyses of Variance and Covariance as General Linear Models Eye fixations per line of text for poor, average, and good readers:
Chapter 6  Analyses of Variance and Covariance as General Linear Models 6. Eye fixations per line of text for poor, average, and good readers: a. Design matrix, using only the first subject in each group:
More informationStatistical Consulting Topics. MANOVA: Multivariate ANOVA
Statistical Consulting Topics MANOVA: Multivariate ANOVA Predictors are still factors, but we have more than one continuousvariable response on each experimental unit. For example, y i = (y i1, y i2 ).
More informationPower and Sample Size Determination
Power and Sample Size Determination Bret Hanlon and Bret Larget Department of Statistics University of Wisconsin Madison November 3 8, 2011 Power 1 / 31 Experimental Design To this point in the semester,
More informationMultiple Hypothesis Testing: The Ftest
Multiple Hypothesis Testing: The Ftest Matt Blackwell December 3, 2008 1 A bit of review When moving into the matrix version of linear regression, it is easy to lose sight of the big picture and get lost
More informationHypothesis Testing. Dr. Bob Gee Dean Scott Bonney Professor William G. Journigan American Meridian University
Hypothesis Testing Dr. Bob Gee Dean Scott Bonney Professor William G. Journigan American Meridian University 1 AMU / BonTech, LLC, JourniTech Corporation Copyright 2015 Learning Objectives Upon successful
More informationDescriptive Statistics
Descriptive Statistics Primer Descriptive statistics Central tendency Variation Relative position Relationships Calculating descriptive statistics Descriptive Statistics Purpose to describe or summarize
More informationChapter 12 Sample Size and Power Calculations. Chapter Table of Contents
Chapter 12 Sample Size and Power Calculations Chapter Table of Contents Introduction...253 Hypothesis Testing...255 Confidence Intervals...260 Equivalence Tests...264 OneWay ANOVA...269 Power Computation
More informationOneWay Analysis of Variance (ANOVA) Example Problem
OneWay Analysis of Variance (ANOVA) Example Problem Introduction Analysis of Variance (ANOVA) is a hypothesistesting technique used to test the equality of two or more population (or treatment) means
More informationMultiple Regression Analysis in Minitab 1
Multiple Regression Analysis in Minitab 1 Suppose we are interested in how the exercise and body mass index affect the blood pressure. A random sample of 10 males 50 years of age is selected and their
More informationStatistiek II. John Nerbonne. October 1, 2010. Dept of Information Science j.nerbonne@rug.nl
Dept of Information Science j.nerbonne@rug.nl October 1, 2010 Course outline 1 Oneway ANOVA. 2 Factorial ANOVA. 3 Repeated measures ANOVA. 4 Correlation and regression. 5 Multiple regression. 6 Logistic
More informationInferential Statistics
Inferential Statistics Sampling and the normal distribution Zscores Confidence levels and intervals Hypothesis testing Commonly used statistical methods Inferential Statistics Descriptive statistics are
More information