Simple Analysis of Variance (ANOVA)

Oftentimes we have more than two groups that we want to compare. The purpose of ANOVA is to allow us to compare group means from several independent samples. In general, ANOVA procedures are generalizations of the t-test, and it can be shown that, if one is only interested in the difference between two groups on one independent categorical (i.e., grouping) variable, the independent-samples t-test is a special case of ANOVA. A one-way ANOVA refers to having only one grouping variable, or factor, which is the independent variable. It is possible to have more than one grouping variable, but we will start with the simplest case. If one has only two levels of the grouping variable, then one can simply conduct an independent-samples t-test; if one has more than two levels of the grouping variable, then one needs to conduct an ANOVA.

Since we have more than two groups in ANOVA, we need a way to describe the difference among all the means. One way to do this is to compute the variance of the sample means, because a large variance implies that the sample means differ a lot, whereas a small variance implies that the sample means are not very different. This gives us a single numeric value for the difference among all the sample means. The statistic used in ANOVA partitions the variance into two components: (1) the between-treatment variability[1] and (2) the within-treatment variability. Whenever means from different samples are compared, there are three sources that can cause differences to be observed between the sample means:

1. Differences due to Treatment
2. Individual Differences
3. Differences due to Experimental Error

These are the three sources of variability that can cause one to observe differences between treatment groups, and so together they are referred to as the between-treatment variability. Only two of these sources of variability can be observed within a treatment group, specifically individual differences and experimental error, and these are referred to as the within-treatment variability.[2] The statistic used in ANOVA, the F statistic, uses the ratio of between-treatment variability to within-treatment variability to test whether or not there is a difference among treatments. Specifically:

    F = between-treatment variability / within-treatment variability
      = (treatment effect + individual differences + experimental error) / (individual differences + experimental error)

[1] Note that groups do not always represent treatments. Oftentimes ANOVA is used to determine differences among intact groups, such as those that differ by ethnicity or gender.
[2] It should be noted that your book, and many statistical software packages, refer to the within-treatment variability as the error variability.
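Because the two-group case is a special case of ANOVA, the claim that the independent-samples t-test and the one-way ANOVA agree when there are only two groups can be checked numerically. Below is a minimal sketch, assuming Python with NumPy and SciPy and two hypothetical groups of scores (not data from these notes), showing that F equals t squared and that the two tests give identical p-values.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    group_a = rng.normal(loc=5.0, scale=1.0, size=12)   # hypothetical scores
    group_b = rng.normal(loc=6.0, scale=1.0, size=12)   # hypothetical scores

    t_res = stats.ttest_ind(group_a, group_b)   # pooled-variance independent-samples t-test
    f_res = stats.f_oneway(group_a, group_b)    # one-way ANOVA on the same two groups

    print(np.isclose(t_res.statistic ** 2, f_res.statistic))   # True: F equals t squared
    print(np.isclose(t_res.pvalue, f_res.pvalue))              # True: identical p-values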

If the treatment effect is small, then the ratio will be close to one. Therefore, an F-statistic close to one would be expected if the null hypothesis were true and there were no treatment differences. If the treatment effect is large, then the ratio will be much greater than one, because the between-treatment variability will be much larger than the within-treatment variability.

The hypotheses tested in ANOVA are:

    H0: µ_1 = µ_2 = µ_3 = ... = µ_K
    H1: at least one mean is different from the rest

where K = the total number of groups (or sample means) being compared.

In the population, group j has mean µ_j and variance σ_j². In the sample, group j has mean X̄_j and variance s_j². The sample size for group j is n_j, and the total number of observations is N = n_1 + n_2 + n_3 + ... + n_K. The grand mean of all observations is X̄_G.

The assumptions underlying the test are the same as the assumptions underlying the t-test for independent samples. Specifically:

1. Each group j in the population is normally distributed with mean µ_j.
2. The variance in each group is the same, so that σ_1² = σ_2² = ... = σ_K² = σ², otherwise known as the homogeneity of variance assumption.
3. Each observation is independent of every other observation.

The computations underlying a simple one-way ANOVA are pretty straightforward if you remember that a variance is composed of two parts: (1) the sum of squared deviations from the mean (SS) and (2) the degrees of freedom (df), which can be thought of as the number of potentially different values used to compute the SS, minus 1. Therefore the total variance, across all groups, is computed using SS_total = Σ(X − X̄_G)² and df_total = N − 1. We partition this variance into two parts, the within-treatment variance and the between-treatment variance. Note that SS_total is simply the sum of SS_within and SS_between, and the total df is simply the sum of the df associated with the within-treatment variance and the between-treatment variance.

The within-treatment, or within-group, variability is computed using SS_within = SS_error = Σ(X − X̄_j)², which represents the sum of the squared deviations of each observation from its own group mean, and df_within = df_error = (n_1 − 1) + (n_2 − 1) + (n_3 − 1) + ... + (n_K − 1) = (total number of observations) − (number of groups) = N − K. The ratio of SS_within to df_within is known as the Mean Square within groups (MS_within) or the Mean Square Error (MS_error).

The between-treatment variability is computed using SS_between = SS_treatment = Σ n_j (X̄_j − X̄_G)², which represents the sum of the squared deviations of the group means from the grand (overall) mean, each weighted by its group's sample size, and df_between = df_treatment = K − 1, or the number of groups minus one.
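The partitioning just described is easy to carry out directly. The following is a minimal sketch, assuming Python with NumPy and three small hypothetical groups (not the data from these notes), of computing SS_between, SS_within, SS_total, the corresponding df, the mean squares, and F, and of verifying that SS_total = SS_between + SS_within.

    import numpy as np

    groups = [np.array([5.0, 4.0, 6.0, 5.0]),    # hypothetical group 1
              np.array([6.0, 7.0, 5.0, 6.0]),    # hypothetical group 2
              np.array([3.0, 4.0, 2.0, 3.0])]    # hypothetical group 3

    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    N, K = all_obs.size, len(groups)

    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)             # Σ(X − X̄_j)²
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)  # Σ n_j(X̄_j − X̄_G)²
    ss_total = ((all_obs - grand_mean) ** 2).sum()                           # Σ(X − X̄_G)²

    df_between, df_within = K - 1, N - K
    ms_between = ss_between / df_between    # Mean Square between groups (treatment)
    ms_within = ss_within / df_within       # Mean Square within groups (error)
    F = ms_between / ms_within

    print(np.isclose(ss_total, ss_between + ss_within))   # True: the SS partition holds
    print(F)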

The ratio of SS_between to df_between is known as the Mean Square between groups (MS_between) or the Mean Square Treatment (MS_treatment).

The F-statistic is calculated by computing the ratio of the Mean Square between groups (MS_between, or MS_treatment) to the Mean Square within groups (MS_within, or MS_error). Specifically:

    F = MS_between / MS_within

This ratio follows a sampling distribution known as the F distribution, which is a family of distributions indexed by the df of the numerator and the df of the denominator.

Example

A psychologist is interested in determining the extent to which physical attractiveness may influence a person's judgment of other personal characteristics, such as intelligence or ability. So he selects three groups of subjects and asks them to pretend to be a company personnel manager, and he gives them all a stack of identical job applications, each of which includes a picture of the applicant. One group of subjects is given only pictures of very attractive people, another group is given only pictures of average-looking people, and a third group is given only pictures of unattractive people. Subjects are asked to rate the quality of each applicant on a scale of 0 (which represents very poor qualities) to 10 (which represents excellent qualities). The following data are obtained:

    Attractive    Average    Unattractive
    (individual ratings for each of the three groups)

What should he conclude? Well, we first need to calculate the grand mean, X̄_G, and the means of the three groups, X̄_1, X̄_2, and X̄_3.

Now we can calculate[3]

    MS_within = SS_within / (N − K) = [(5 − 4.55)² + (4 − 4.55)² + ... + (6 − 5.9)² + (5 − 5.9)² + ...] / (34 − 3)

and

    MS_between = SS_between / (K − 1) = [Σ n_j (X̄_j − X̄_G)²] / (3 − 1) = 36.63

So the F-statistic = MS_between / MS_within = 36.63 / MS_within. But how likely is it that we would have obtained this value if the null hypothesis were true? With 2 and 31 df, the critical F at α = .05 is approximately 3.3. So the psychologist can reject the null hypothesis and conclude that a person's judgment of the job qualifications of prospective applicants appears to be influenced by how attractive the prospective applicant is.

The ANOVA procedure is robust to violations of the assumptions, especially the assumption of normality. Violating the assumption of homogeneity of variance is especially problematic if the groups have different sample sizes. Levene's test, which we talked about before in terms of the t-test, can be used to test whether the homogeneity of variance assumption has been violated. If it has, then the Welch procedure can be used to adjust the df used in ANOVA, similar to what we talked about for the t-test. If the normality assumption is violated, then the data can be transformed (because this won't change the results of the statistical test; it will just re-scale things) to be more normally distributed. Common transformations include:

1. Taking the square root of each observation, which is beneficial if the data are very skewed.
2. Taking the log of each observation, which is beneficial if the data are very positively skewed.
3. Taking the reciprocal of each observation (i.e., 1/observation), which is beneficial if there are very large values in the positive tail of the distribution.

Another approach to dealing with a violation of the normality assumption is to use a trimmed sample, which removes a fixed percentage of the extreme values in each tail of the distribution, or a Winsorized sample, which replaces the trimmed values with the most extreme observations that remain in each tail. In the latter case the df need to be adjusted by the number of values that are replaced.

As we explore more complicated ANOVA models (models with more than one grouping variable), it will become important to be able to differentiate between fixed factors (or groups) and random factors.

[3] Note: Answers obtained by hand, from Excel, or from a statistical software package will most likely vary slightly from one another due to rounding error.
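For readers who prefer software to tables, here is a minimal sketch, assuming Python with SciPy and NumPy, of the pieces used above: looking up the critical F for 2 and 31 df at α = .05, running Levene's test on three hypothetical groups (placeholder ratings, not the data from these notes), and applying the three common transformations.

    import numpy as np
    from scipy import stats

    # Critical F at alpha = .05 with 2 and 31 df; prints roughly 3.3,
    # the cutoff used above to reject the null hypothesis.
    print(stats.f.ppf(0.95, dfn=2, dfd=31))

    # Levene's test for homogeneity of variance on three hypothetical groups.
    g1 = np.array([5.0, 4.0, 6.0, 5.0])
    g2 = np.array([6.0, 7.0, 5.0, 6.0])
    g3 = np.array([3.0, 4.0, 2.0, 3.0])
    print(stats.levene(g1, g2, g3))

    # The three common transformations for skewed (positive-valued) data:
    x = np.array([1.0, 2.0, 4.0, 9.0, 25.0])
    print(np.sqrt(x))   # square root: for skewed data
    print(np.log(x))    # log: for strong positive skew
    print(1.0 / x)      # reciprocal: when the positive tail has very large values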

A fixed factor is one in which the researcher is only interested in the particular levels of the grouping variable that are being studied. These levels are not assumed to be representative of, nor generalizable to, other levels of the group. A random factor is one in which the researcher considers the various levels of the grouping variable to be a random sample from all possible levels. In this situation the results of the statistical test may be generalized to other levels of the group.

It should be noted that there is a direct relationship between the t-test for independent samples and the ANOVA when K = 2. Specifically, it can be shown mathematically that the F-statistic equals the t-statistic squared (i.e., F = t²).

Power and Effect Size

Similar to the t-test, finding statistical significance does not tell us whether the differences are important from a practical perspective. Several measures of effect size have been proposed, all of which differ in terms of how biased they are.

η² (eta-squared), or the correlation ratio, is one of the oldest measures of effect size. It represents the percentage of total variability that can be accounted for by differences in the grouping variable, or the percentage by which the error variability (i.e., the within-treatment variability) is reduced by considering group membership. It is calculated as the ratio of SS_between to SS_total. Specifically:

    η² = SS_between / SS_total

For our previous example, η² = .55, meaning 55% of the variation in ratings can be accounted for by differences in the independent variable (i.e., the groups). This effect-size measure is biased upwards, meaning it is larger than would be expected if it had been calculated from the population rather than estimated from the sample.

An alternative effect-size measure to η² is ω² (omega-squared). It also measures the percentage of total variability that can be accounted for by between-group variability, but does so by using MS values rather than SS values alone, thereby making use of sample size information. Specifically, for a fixed-effects[4] ANOVA:

    ω² = [SS_between − (K − 1) MS_within] / (SS_total + MS_within)

For our previous example, the resulting ω² is smaller than the η² obtained above. This measure of effect size has been found to be less biased than η².

[4] Note that this measure of effect size is computed slightly differently for a random-effects ANOVA model; the formula for a random-effects model is not presented here.
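The two effect-size formulas above are easy to wrap as small functions. This is a minimal sketch, assuming the SS and MS values come from an already-computed ANOVA table; the numbers passed in at the end are placeholders for illustration, not the values from the example above.

    def eta_squared(ss_between, ss_total):
        """eta^2: proportion of total variability accounted for by group membership."""
        return ss_between / ss_total

    def omega_squared(ss_between, ss_total, ms_within, k):
        """omega^2 for a fixed-effects one-way ANOVA; less biased than eta^2."""
        return (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

    # Placeholder ANOVA-table values, for illustration only:
    print(eta_squared(ss_between=40.0, ss_total=100.0))                        # 0.40
    print(omega_squared(ss_between=40.0, ss_total=100.0, ms_within=2.0, k=3))  # about 0.35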

Estimating power for ANOVA is a straightforward extension of how power was estimated for the t-test. We simply use different notation and different tables. Moreover, we assume equal sample sizes in each group, which is the optimal situation.

In an ANOVA context, φ′ is comparable to d in the independent t-test context, in that it separates the effect size from the sample size. However, we need to incorporate the fact that we are using variance estimates in the ANOVA context. Specifically:

    φ′ = √[ Σ(µ_j − µ)² / K ] / σ

where µ is the grand (population) mean and σ is the common within-group standard deviation. So, if we were to assume that the population values correspond exactly to what we obtained in our example (unlikely as this may be), then φ′ is obtained by substituting the three sample means, the grand mean, and the within-group standard deviation into this formula.

Furthermore, in an ANOVA context, φ is comparable to δ in the independent t-test context, in that it incorporates sample size to allow us to determine how large a sample we need to detect differences that are meaningful from a practical perspective. However, even though we may wind up with unequal sample sizes in our groups, we calculate power based on the assumption of equal sample sizes. Specifically:

    φ = φ′ √n, where n = the number of subjects in each group

So, if we were to assume that we expected 12 subjects in each of our groups in our example, then φ = φ′ √12.

In an ANOVA context, φ is used with tables of the non-centrality parameter for the F distribution, which governs the mean of the F distribution when the null hypothesis is false, with K − 1 and N − K df for the numerator and denominator, respectively. For our example, we will use an estimate corresponding to φ = 3.0, because our table in the book does not go any higher, and we will compare it to the non-centrality parameter with 2 df for the numerator and 30 df for the denominator (because our book does not have very fine gradations for df in the denominator). Using the table in the book, we find that β = .03 if we want to conduct our test at α = .01. Therefore, since Power = 1 − β, the power of the experiment we ran was approximately .97.
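Power can also be computed directly from the non-central F distribution rather than read from the table in the book. Here is a minimal sketch, assuming Python with SciPy, using the standard relation λ = n·K·φ′² (equivalently λ = K·φ²) for the non-centrality parameter; the φ′, n, K, and α values passed in are placeholders for illustration, not the numbers from the example above.

    from scipy import stats

    def anova_power(phi_prime, n, k, alpha=0.05):
        """Power of a one-way fixed-effects ANOVA with k equal groups of size n."""
        df1, df2 = k - 1, k * (n - 1)
        lam = n * k * phi_prime ** 2                  # non-centrality parameter
        f_crit = stats.f.ppf(1 - alpha, df1, df2)     # critical F under the null hypothesis
        # Power = P(F > critical F) when F follows the non-central F distribution.
        return 1.0 - stats.ncf.cdf(f_crit, df1, df2, lam)

    # Placeholder values for illustration:
    print(anova_power(phi_prime=0.6, n=12, k=3, alpha=0.01))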
