Simple Analysis of Variance (ANOVA)

Oftentimes we have more than two groups that we want to compare. The purpose of ANOVA is to allow us to compare group means from several independent samples. In general, ANOVA procedures are generalizations of the t-test, and it can be shown that, if one is only interested in the difference between two groups on one independent categorical (i.e., grouping) variable, the independent-samples t-test is a special case of ANOVA. A one-way ANOVA refers to having only one grouping variable or factor, which is the independent variable. It is possible to have more than one grouping variable, but we will start with the simplest case. If one has only two levels of the grouping variable, then one can simply conduct an independent-samples t-test, but if one has more than two levels of the grouping variable, then one needs to conduct an ANOVA.

Since we have more than two groups in ANOVA, we need a way to describe the differences among all the means. One way to do this is to compute the variance between the sample means, because a large variance implies that the sample means differ a lot, whereas a small variance implies that the sample means are not that different. This gives us a single numeric value for the differences among all the sample means. The statistic used in ANOVA partitions the variance into two components: (1) the between-treatment variability and (2) the within-treatment variability. Whenever means from different samples are compared, there are three sources that can cause differences to be observed between the sample means:

1. Differences due to Treatment
2. Individual Differences
3. Differences due to Experimental Error

These are the three sources of variability that can cause one to observe differences between treatment groups, so together these sources are referred to as the between-treatment variability.
Only two of these sources of variability can be observed within a treatment group, specifically individual differences and experimental error, and these are referred to as the within-treatment variability. The statistic used in ANOVA, the F statistic, uses a ratio of between-treatment variability to within-treatment variability to test whether or not there is a difference among treatments. Specifically:

F = between-treatment variability / within-treatment variability
  = (treatment effect + individual differences + experimental error)
    / (individual differences + experimental error)

Note that groups do not always represent treatments; oftentimes ANOVA is used to determine differences in intact groups, such as those that differ by ethnicity or gender. It should also be noted that your book, and many statistical software packages, refer to the within-treatment variability as the error variability.
If the treatment effect is small, then the ratio will be close to one. Therefore, an F statistic close to one would be expected if the null hypothesis were true and there were no treatment differences. If the treatment effect is large, then the ratio will be much greater than one, because the between-treatment variability will be much larger than the within-treatment variability. The hypotheses tested in ANOVA are:

H0: µ1 = µ2 = µ3 = ... = µK
H1: at least one mean is different from the rest

where K = the total number of groups or sample means being compared. In the population, group j has mean µj and variance σ²j. In the sample, group j has mean X̄j and variance s²j. The sample size for each group is nj, and the total number of observations is N = n1 + n2 + n3 + ... + nK. The grand mean of all observations is X̄.

The assumptions underlying the test are the same as the assumptions underlying the t-test for independent samples. Specifically:

1. Each group j in the population is normally distributed with mean µj.
2. The variance in each group is the same, so that σ²1 = σ²2 = ... = σ²K = σ², otherwise known as the homogeneity of variance assumption.
3. Each observation is independent of every other observation.

The computations underlying a simple one-way ANOVA are pretty straightforward if you remember that a variance is composed of two parts: (1) the sum of squared deviations from the mean (SS) and (2) the degrees of freedom (df), which can be thought of as the number of potentially different values used to compute the SS, minus 1. Therefore the total variance, across all groups, is computed using SS_total = Σ(X − X̄)² and df_total = N − 1. We partition this variance into two parts, the within-treatment variance and the between-treatment variance. Note that the total SS is simply the sum of the within-treatment SS and the between-treatment SS, and the df for the total variance is simply the sum of the df associated with the within-treatment variance and the between-treatment variance.
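The partition described above can be checked numerically. A minimal sketch in Python, using made-up data (the three groups and their values below are hypothetical, not the example from these notes):

```python
# Verify that SS_total = SS_between + SS_within for made-up data.
groups = [
    [5, 4, 6, 5],   # hypothetical group 1
    [7, 6, 8, 7],   # hypothetical group 2
    [4, 3, 5, 4],   # hypothetical group 3
]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

# Total SS: squared deviations of every observation from the grand mean.
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)

# Within SS: squared deviations of each observation from its group mean.
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

# Between SS: weighted squared deviations of group means from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

print(round(ss_total, 6) == round(ss_within + ss_between, 6))  # True
```

The same partition holds for the degrees of freedom: df_total = N − 1 splits into df_between = K − 1 and df_within = N − K.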
The within-treatment (error) SS is computed as SS_within = Σ(X − X̄j)², which represents the sum of the squared deviations of each observation from its group mean, and

df_within = (n1 − 1) + (n2 − 1) + (n3 − 1) + ... + (nK − 1)
          = (total number of observations) − (number of groups) = N − K.

The ratio of SS_within to df_within is known as the Mean Square within groups (MS_within) or Mean Square Error (MS_error).

The between-treatment SS is computed as SS_between = Σ nj(X̄j − X̄)², which represents the weighted sum of the squared deviations of all group means from the grand (overall) mean, and df_between = K − 1, or the number of groups minus one. The ratio of SS_between to df_between is known as the Mean Square between groups (MS_between) or Mean Square Treatment (MS_treatment).

The F statistic is calculated by computing the ratio of the Mean Square between groups (MS_between or MS_treatment) to the Mean Square within groups (MS_within or MS_error). Specifically,

F = MS_between / MS_within

This ratio follows a sampling distribution known as the F distribution, which is a family of distributions based on the df of the numerator and the df of the denominator.

Example: A psychologist is interested in determining the extent to which physical attractiveness may influence a person's judgment of other personal characteristics, such as intelligence or ability. So he selects three groups of subjects and asks them to pretend to be a company personnel manager, and he gives them all a stack of identical job applications, each of which includes a picture of the applicant. One group of subjects is given only pictures of very attractive people, another group is given only pictures of average-looking people, and a third group is given only pictures of unattractive people. Subjects are asked to rate the quality of each applicant on a scale of 0 (which represents very poor qualities) to 10 (which represents excellent qualities). Ratings are obtained for the Attractive, Average, and Unattractive groups. What should he conclude? Well, we first need to calculate the grand mean and the mean for each of the three groups.
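An analysis of this kind takes only a few lines with SciPy. The ratings below are invented for illustration; they are not the data from the attractiveness example:

```python
# One-way ANOVA with SciPy on invented ratings data (not the notes' data).
from scipy import stats

attractive   = [7, 8, 6, 7, 9, 8]
average      = [5, 6, 5, 7, 6, 5]
unattractive = [4, 3, 5, 4, 4, 3]

# f_oneway computes F = MS_between / MS_within and its p-value.
f_stat, p_value = stats.f_oneway(attractive, average, unattractive)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: at least one group mean differs.")
```

The p-value returned is the probability of an F at least this large under the null hypothesis, using the F distribution with K − 1 and N − K df.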
Now we can calculate

MS_within = Σ(X − X̄j)² / (N − K)
          = [(5 − 4.55)² + (4 − 4.55)² + ... + (6 − 5.9)² + (5 − 5.9)² + ...] / (34 − 3)
          ≈ 1.94

and

MS_between = Σ nj(X̄j − X̄)² / (K − 1)
           = [11(...)² + 12(...)² + 11(...)²] / (3 − 1)
           ≈ 36.63

So the F statistic = 36.63 / 1.94 ≈ 18.9, but how likely is it to have obtained this value if the null hypothesis is true? With 2 and 31 df, the critical F at α = .05 is approximately 3.3. So the psychologist can reject the null hypothesis and conclude that a person's judgment of the job qualifications of prospective applicants appears to be influenced by how attractive the prospective applicant is.

The ANOVA procedure is robust to violations of the assumptions, especially the assumption of normality. Violating the assumption of homogeneity of variance is especially problematic if the groups have different sample sizes. Levene's test, which we talked about before in terms of the t-test, can be used to test whether the homogeneity of variance assumption has been violated. If it has, then the Welch procedure can be used to adjust the df used in ANOVA, similar to what we talked about for the t-test. If the normality assumption is violated, then the data can be transformed (because this won't change the results of the statistical test; it will just rescale things) to be more normally distributed. Common transformations include:

1. Taking the square root of each observation, which is beneficial if the data are very skewed.
2. Taking the log of each observation, which is beneficial if the data are very positively skewed.
3. Taking the reciprocal of each observation (i.e., 1/observation), which is beneficial if there are very large values in the positive tail of the distribution.

Another approach to dealing with a violation of the normality assumption is to use a trimmed sample, which removes a fixed percentage of the extreme values in each tail of the distribution, or a Winsorized sample, which replaces the trimmed values with the most extreme observations remaining in each tail. In the latter case the df need to be adjusted by the number of values that are replaced.
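The remedies just described are all available in SciPy/NumPy. A sketch, using randomly generated skewed data (the samples here are hypothetical):

```python
# Remedies for assumption violations, sketched with SciPy/NumPy.
import numpy as np
from scipy import stats
from scipy.stats import mstats

rng = np.random.default_rng(0)
g1 = rng.exponential(2.0, 30)          # positively skewed sample
g2 = rng.exponential(2.0, 30) + 0.5

# Levene's test for homogeneity of variance.
w, p = stats.levene(g1, g2)

# Common transformations toward normality.
sqrt_g1  = np.sqrt(g1)        # for skewed data
log_g1   = np.log(g1)         # for strong positive skew
recip_g1 = 1.0 / g1           # for long positive tails

# Trimmed mean (drop 10% in each tail) and Winsorized sample
# (replace the trimmed values with the most extreme remaining values).
trimmed_mean = stats.trim_mean(g1, 0.10)
winsorized   = mstats.winsorize(g1, limits=(0.10, 0.10))
```

If Levene's test rejects homogeneity of variance, `scipy.stats` does not adjust the ANOVA df automatically; the Welch correction would have to be applied separately.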
As we explore more complicated ANOVA models (models with more than one grouping variable), it will become important to be able to differentiate between fixed factors (or groups) and random factors.

Note: Answers obtained by hand, from Excel, or from a statistical software package will most likely vary slightly due to rounding error.
A fixed factor is one in which the researcher is interested only in the particular levels of the grouping variable that are being studied. These levels are not assumed to be representative of, nor generalizable to, other levels of the factor. A random factor is one in which the researcher considers the levels of the grouping variable to be a random sample from all possible levels. In this situation the results of the statistical test may be generalized to other levels of the factor.

It should be noted that there is a direct relationship between the t-test for independent samples and ANOVA when K = 2. Specifically, it can be shown mathematically that the F statistic equals the t statistic squared (i.e., F = t²).

Power and Effect Size

As with the t-test, finding statistical significance does not tell us whether the differences are important from a practical perspective. Several measures of effect size have been proposed, all of which differ in terms of how biased they are. η² (eta-squared), or the correlation ratio, is one of the oldest measures of effect size. It represents the percentage of total variability that can be accounted for by differences in the grouping variable, or the percentage by which the error variability (i.e., the within-treatment variability) is reduced by considering group membership. It is calculated as the ratio of SS_between to SS_total. Specifically:

η² = SS_between / SS_total

For our previous example, η² = 73.25 / 133.39 ≈ .55, meaning 55% of the variation in ratings can be accounted for by differences in the independent variable (i.e., the groups). This effect size measure is biased upwards, meaning it is larger than would be expected if it had been calculated from the population rather than estimated from the sample. An alternative effect size measure to η² is ω² (omega-squared).
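The η² calculation is a one-liner once the sums of squares are in hand. A sketch using the (rounded) quantities from the worked example, with SS_within reconstructed as df_within × MS_within:

```python
# Eta-squared from the sums of squares of the worked example.
ss_between = 73.25
ss_within = 31 * 1.94          # df_within * MS_within (rounded values)
ss_total = ss_between + ss_within

eta_sq = ss_between / ss_total
print(round(eta_sq, 2))  # 0.55
```

Because η² is just a ratio of sums of squares, it can be read directly off any ANOVA summary table.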
ω² also measures the percentage of total variability that can be accounted for by between-group variability, but it does so using the mean squares as well as the sums of squares, thereby making use of sample size information. Specifically, for a fixed-effects ANOVA (this measure of effect size is computed slightly differently for a random-effects ANOVA model; that formula is not presented here):

ω² = [SS_between − (K − 1)·MS_within] / (SS_total + MS_within)

For our previous example, ω² = [73.25 − (3 − 1)(1.94)] / (133.39 + 1.94) ≈ .51. This measure of effect size has been found to be less biased than η². Note that it is smaller than what we obtained for η².
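The ω² formula above can be computed directly from the same example quantities (rounded values, as in the η² sketch):

```python
# Omega-squared for the fixed-effects worked example.
k = 3
ms_within = 1.94
ss_between = 73.25
ss_total = ss_between + 31 * ms_within   # SS_within = df_within * MS_within

omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)
print(round(omega_sq, 2))  # 0.51
```

Subtracting (K − 1)·MS_within from the numerator and adding MS_within to the denominator is what shrinks ω² relative to η², correcting its upward bias.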
Estimating power for ANOVA is a straightforward extension of how power was estimated for the t-test; we simply use different notation and different tables. Moreover, we assume equal sample sizes in each group, which is the optimal situation. In an ANOVA context, φ′ is comparable to d in the independent t-test context, in that it separates the effect size from the sample size. However, we need to incorporate the fact that we are using variance estimates in the ANOVA context. Specifically,

φ′ = sqrt( [Σ(µj − µ)² / K] / σ² )

So, if we were to assume that the population values correspond exactly to what we obtained in our example (unlikely as this may be), then φ′ is obtained by substituting the three group means, the grand mean, and the error variance into this formula.

Furthermore, in an ANOVA context, φ is comparable to δ in the independent t-test context, in that it incorporates sample size to allow us to determine how large a sample we need to detect meaningful differences from a practical perspective. However, even though we may wind up with unequal sample sizes in our groups, we calculate power based on the assumption of equal sample sizes. Specifically,

φ = φ′ · sqrt(n), where n = the number of subjects in each group

So, if we were to assume that we expected 12 subjects in each of our groups in our example, we would multiply φ′ by sqrt(12).

In an ANOVA context we use φ to enter tables of the noncentral F distribution, which describes the distribution of the F statistic when the null hypothesis is false, with K − 1 and N − K df for the numerator and denominator, respectively. For our example, we will use an estimate corresponding to φ = 3.0, because the table in the book does not go any higher, and we will compare it to the noncentrality parameter with 2 df for the numerator and 30 df for the denominator (because the book does not have very fine gradations for df in the denominator). Using the table in the book, we find that β ≈ .03 if we want to conduct our test at α = .01. Therefore, since Power = 1 − β, the power of the experiment we ran was approximately .97.
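Instead of power charts, the same calculation can be done in software. In statsmodels, Cohen's f plays the role of φ′ above (f = sqrt(Σ(µj − µ)²/K)/σ), and the total sample size replaces the φ = φ′√n step. The effect size 0.87 below is a hypothetical value chosen so that φ = 0.87·√12 ≈ 3.0, roughly matching the table lookup in the example:

```python
# Power for a one-way ANOVA via statsmodels' noncentral-F calculation.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
power = analysis.solve_power(effect_size=0.87,   # hypothetical phi'-style value
                             nobs=36,            # 12 subjects x 3 groups
                             alpha=0.01,
                             k_groups=3)
print(f"power = {power:.3f}")
```

Leaving a different argument as None (e.g., nobs) makes solve_power return that quantity instead, which is how required sample sizes are planned.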
More informationUNDERSTANDING THE INDEPENDENTSAMPLES t TEST
UNDERSTANDING The independentsamples t test evaluates the difference between the means of two independent or unrelated groups. That is, we evaluate whether the means for two independent groups are significantly
More informationCS 147: Computer Systems Performance Analysis
CS 147: Computer Systems Performance Analysis OneFactor Experiments CS 147: Computer Systems Performance Analysis OneFactor Experiments 1 / 42 Overview Introduction Overview Overview Introduction Finding
More informationStatistics. Onetwo sided test, Parametric and nonparametric test statistics: one group, two groups, and more than two groups samples
Statistics Onetwo sided test, Parametric and nonparametric test statistics: one group, two groups, and more than two groups samples February 3, 00 Jobayer Hossain, Ph.D. & Tim Bunnell, Ph.D. Nemours
More informationExamining Differences (Comparing Groups) using SPSS Inferential statistics (Part I) Dwayne Devonish
Examining Differences (Comparing Groups) using SPSS Inferential statistics (Part I) Dwayne Devonish Statistics Statistics are quantitative methods of describing, analysing, and drawing inferences (conclusions)
More informationRockefeller College University at Albany
Rockefeller College University at Albany PAD 705 Handout: Hypothesis Testing on Multiple Parameters In many cases we may wish to know whether two or more variables are jointly significant in a regression.
More informationBivariate Statistics Session 2: Measuring Associations ChiSquare Test
Bivariate Statistics Session 2: Measuring Associations ChiSquare Test Features Of The ChiSquare Statistic The chisquare test is nonparametric. That is, it makes no assumptions about the distribution
More informationCrosstabulation & Chi Square
Crosstabulation & Chi Square Robert S Michael Chisquare as an Index of Association After examining the distribution of each of the variables, the researcher s next task is to look for relationships among
More informationConcepts of Experimental Design
Design Institute for Six Sigma A SAS White Paper Table of Contents Introduction...1 Basic Concepts... 1 Designing an Experiment... 2 Write Down Research Problem and Questions... 2 Define Population...
More information1 Basic ANOVA concepts
Math 143 ANOVA 1 Analysis of Variance (ANOVA) Recall, when we wanted to compare two population means, we used the 2sample t procedures. Now let s expand this to compare k 3 population means. As with the
More informationPSYCHOLOGY 320L Problem Set #3: OneWay ANOVA and Analytical Comparisons
PSYCHOLOGY 30L Problem Set #3: OneWay ANOVA and Analytical Comparisons Name: Score:. You and Dr. Exercise have decided to conduct a study on exercise and its effects on mood ratings. Many studies (Babyak
More informationNormality Testing in Excel
Normality Testing in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. mark@excelmasterseries.com
More informationSCHOOL OF HEALTH AND HUMAN SCIENCES DON T FORGET TO RECODE YOUR MISSING VALUES
SCHOOL OF HEALTH AND HUMAN SCIENCES Using SPSS Topics addressed today: 1. Differences between groups 2. Graphing Use the s4data.sav file for the first part of this session. DON T FORGET TO RECODE YOUR
More informationKSTAT MINIMANUAL. Decision Sciences 434 Kellogg Graduate School of Management
KSTAT MINIMANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To
More informationTwosample ttests.  Independent samples  Pooled standard devation  The equal variance assumption
Twosample ttests.  Independent samples  Pooled standard devation  The equal variance assumption Last time, we used the mean of one sample to test against the hypothesis that the true mean was a particular
More informationFinal Exam Practice Problem Answers
Final Exam Practice Problem Answers The following data set consists of data gathered from 77 popular breakfast cereals. The variables in the data set are as follows: Brand: The brand name of the cereal
More informationMULTIPLE REGRESSION WITH CATEGORICAL DATA
DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 86 MULTIPLE REGRESSION WITH CATEGORICAL DATA I. AGENDA: A. Multiple regression with categorical variables. Coding schemes. Interpreting
More informationCHAPTER 14 NONPARAMETRIC TESTS
CHAPTER 14 NONPARAMETRIC TESTS Everything that we have done up until now in statistics has relied heavily on one major fact: that our data is normally distributed. We have been able to make inferences
More informationModule 5: Multiple Regression Analysis
Using Statistical Data Using to Make Statistical Decisions: Data Multiple to Make Regression Decisions Analysis Page 1 Module 5: Multiple Regression Analysis Tom Ilvento, University of Delaware, College
More informationThe ttest and Basic Inference Principles
Chapter 6 The ttest and Basic Inference Principles The ttest is used as an example of the basic principles of statistical inference. One of the simplest situations for which we might design an experiment
More informationCALCULATIONS & STATISTICS
CALCULATIONS & STATISTICS CALCULATION OF SCORES Conversion of 15 scale to 0100 scores When you look at your report, you will notice that the scores are reported on a 0100 scale, even though respondents
More informationChapter 8 Hypothesis Testing Chapter 8 Hypothesis Testing 81 Overview 82 Basics of Hypothesis Testing
Chapter 8 Hypothesis Testing 1 Chapter 8 Hypothesis Testing 81 Overview 82 Basics of Hypothesis Testing 83 Testing a Claim About a Proportion 85 Testing a Claim About a Mean: s Not Known 86 Testing
More informationSTA201TE. 5. Measures of relationship: correlation (5%) Correlation coefficient; Pearson r; correlation and causation; proportion of common variance
Principles of Statistics STA201TE This TECEP is an introduction to descriptive and inferential statistics. Topics include: measures of central tendency, variability, correlation, regression, hypothesis
More informationConfidence Intervals for the Difference Between Two Means
Chapter 47 Confidence Intervals for the Difference Between Two Means Introduction This procedure calculates the sample size necessary to achieve a specified distance from the difference in sample means
More information13: Additional ANOVA Topics. Post hoc Comparisons
13: Additional ANOVA Topics Post hoc Comparisons ANOVA Assumptions Assessing Group Variances When Distributional Assumptions are Severely Violated KruskalWallis Test Post hoc Comparisons In the prior
More informationExperimental Designs (revisited)
Introduction to ANOVA Copyright 2000, 2011, J. Toby Mordkoff Probably, the best way to start thinking about ANOVA is in terms of factors with levels. (I say this because this is how they are described
More information
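The F ratio described above, (treatment effect + individual differences + experimental error) / (individual differences + experimental error), can be made concrete with a small worked example. The sketch below uses made-up scores for three hypothetical treatment groups and computes the between-treatment and within-treatment mean squares by hand; the data and group labels are invented purely for illustration.

```python
# Minimal one-way ANOVA sketch (hypothetical data for three treatment groups).
groups = [
    [4, 5, 6, 5],    # treatment A
    [7, 8, 6, 7],    # treatment B
    [10, 9, 11, 10], # treatment C
]

# Grand mean over all observations
all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-treatment sum of squares: variability of the group means
# around the grand mean, weighted by group size
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within-treatment sum of squares: variability of scores around
# their own group mean (individual differences + experimental error)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
df_within = len(all_scores) - len(groups)

# F is the ratio of the two variance estimates (mean squares)
ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within
print(F)
```

If there is no treatment effect, the numerator and denominator both estimate the same quantity (individual differences plus experimental error), so F should be near 1; a large F suggests the treatment effect is contributing variability beyond what chance alone would produce. In practice one would compare F against the F distribution with (df_between, df_within) degrees of freedom, e.g. via `scipy.stats.f_oneway`.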