# 1 Basic ANOVA concepts

*Math 143: ANOVA notes*

**Analysis of Variance (ANOVA).** Recall that when we wanted to compare two population means, we used the 2-sample t procedures. Now let's expand this to compare $k \ge 3$ population means. As with the t-test, we can get a graphical sense of what is going on by looking at side-by-side boxplots. (See Example 12.3, p. 748, along with Figure 12.3, p. 749.)

## 1.1 The Setting

Generally, we are considering a quantitative response variable as it relates to one or more explanatory variables, usually categorical. Questions that fit this setting:

(i) Which academic department in the sciences gives out the lowest average grades? (Explanatory variable: department; response variable: student GPAs for individual courses.)

(ii) Which kind of promotional campaign leads to the greatest store income at Christmas time? (Explanatory variable: promotion type; response variable: daily store income.)

(iii) How do the type of career and marital status of a person relate to the total cost in annual claims he or she is likely to make on health insurance? (Explanatory variables: career and marital status; response variable: health insurance payouts.)

Each value of the explanatory variable (or value-pair, if there is more than one explanatory variable) represents a population or group. In the Physicians' Health Study of Example 3.3, p. 238, there are two factors (explanatory variables): aspirin (values: taking it or not taking it) and beta carotene (values again: taking it or not taking it), and this divides the subjects into four groups, corresponding to the four cells of Figure 3.1 (p. 239). Had the response variable for this study been quantitative (like systolic blood pressure level) rather than categorical, it would have been an appropriate scenario in which to apply (2-way) ANOVA.

## 1.2 Hypotheses of ANOVA

These are always the same.

- $H_0$: The (population) means of all groups under consideration are equal.
- $H_a$: The (population) means are not all equal.
(Note: this is different from saying they are all unequal!)

## 1.3 Basic Idea of ANOVA

"Analysis of variance" is a perfectly descriptive name for what is actually done to analyze sample data acquired to answer problems such as those described in Section 1.1. Take a look at Figures 12.2(a) and 12.2(b) (p. 746) in your text. The side-by-side boxplots in both figures reveal differences between samples taken from three populations. However, variations like those depicted in 12.2(a) are much less convincing evidence that the population means of the three populations differ than variations like those in 12.2(b). The reason is that the ratio of variation between groups to variation within groups is much smaller for 12.2(a) than it is for 12.2(b).
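This between-to-within ratio can be illustrated numerically. Below is a minimal sketch (standard-library Python only) comparing two hypothetical datasets with the same group means but different within-group spreads; the data values are made up for illustration, and the crude ratio computed here is not the F statistic defined later.

```python
from statistics import mean, pvariance

def between_within_ratio(groups):
    """Crude ratio of variation between group means to average
    variation within groups (illustration only, not the F statistic)."""
    group_means = [mean(g) for g in groups]
    between = pvariance(group_means)             # spread of the group means
    within = mean(pvariance(g) for g in groups)  # average spread inside groups
    return between / within

# Same group means (0, 1, 2) in both scenarios:
wide   = [[-2, 0, 2], [-1, 1, 3], [0, 2, 4]]             # large within-group spread
narrow = [[-0.1, 0, 0.1], [0.9, 1, 1.1], [1.9, 2, 2.1]]  # small within-group spread

print(between_within_ratio(wide))    # small ratio: like Figure 12.2(a)
print(between_within_ratio(narrow))  # large ratio: like Figure 12.2(b)
```

The group means are identical in the two scenarios, so only the within-group spread drives the difference in the ratio.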

## 1.4 Assumptions of ANOVA

Like so many of our inference procedures, ANOVA has some underlying assumptions which should be in place in order to make the results of calculations completely trustworthy. They include:

(i) Subjects are chosen via a simple random sample.
(ii) Within each group/population, the response variable is normally distributed.
(iii) While the population means may be different from one group to the next, the population standard deviation is the same for all groups.

Fortunately, ANOVA is somewhat robust (i.e., results remain fairly trustworthy despite mild violations of these assumptions). Assumptions (ii) and (iii) are close enough to being true if, after gathering SRS samples from each group, you:

(ii) look at a normal quantile plot for each group and, in each case, see that the data points fall close to a line;
(iii) compute the sample standard deviation for each group, and see that the ratio of the largest to the smallest group sample s.d. is no more than two.

# 2 One-Way ANOVA

When there is just one explanatory variable, we refer to the analysis of variance as one-way ANOVA.

## 2.1 Notation

Here is a key to symbols you may see as you read through this section.

- $k$ = the number of groups/populations/values of the explanatory variable/levels of treatment
- $n_i$ = the sample size taken from group $i$
- $x_{ij}$ = the $j$th response sampled from the $i$th group/population
- $\bar{x}_i$ = the sample mean of responses from the $i$th group $= \dfrac{1}{n_i}\sum_{j=1}^{n_i} x_{ij}$
- $s_i$ = the sample standard deviation from the $i$th group $= \sqrt{\dfrac{1}{n_i-1}\sum_{j=1}^{n_i}(x_{ij}-\bar{x}_i)^2}$
- $n$ = the (total) sample size, irrespective of groups $= \sum_{i=1}^{k} n_i$
- $\bar{x}$ = the mean of all responses, irrespective of groups $= \dfrac{1}{n}\sum_{i,j} x_{ij}$

## 2.2 Splitting the Total Variability into Parts

Viewed as one sample (rather than $k$ samples from the individual groups/populations), one might measure the total amount of variability among observations by summing the squares of the differences between each $x_{ij}$ and $\bar{x}$:

$$\mathrm{SST}\ (\text{sum of squares total}) = \sum_{i=1}^{k}\sum_{j=1}^{n_i}(x_{ij}-\bar{x})^2.$$
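As a quick check, SST is just the sum of squared deviations of the pooled data from the overall mean. A NumPy sketch on a small made-up dataset (three groups of three observations, values chosen for easy arithmetic):

```python
import numpy as np

groups = [np.array([1., 2., 3.]),   # hypothetical samples from k = 3 groups
          np.array([2., 3., 4.]),
          np.array([6., 7., 8.])]

allx = np.concatenate(groups)       # pool the groups into one sample
grand = allx.mean()                 # overall mean x-bar = 4.0

SST = ((allx - grand) ** 2).sum()   # sum of squares total
print(SST)                          # 48.0
```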

This variability has two sources:

1. **Variability between group means** (specifically, variation of the group means around the overall mean $\bar{x}$):
$$\mathrm{SSG} := \sum_{i=1}^{k} n_i(\bar{x}_i - \bar{x})^2, \quad\text{and}$$

2. **Variability within groups** (specifically, variation of observations about their group mean $\bar{x}_i$):
$$\mathrm{SSE} := \sum_{i=1}^{k}\sum_{j=1}^{n_i}(x_{ij}-\bar{x}_i)^2 = \sum_{i=1}^{k}(n_i-1)s_i^2.$$

It is the case that SST = SSG + SSE.

## 2.3 The Calculations

If the variability between groups/treatments is large relative to the variability within groups/treatments, then the data suggest that the means of the populations from which the data were drawn are significantly different. That is, in fact, how the F statistic is computed: it is a measure of the variability between treatments divided by a measure of the variability within treatments. If F is large, the variability between treatments is large relative to the variation within treatments, and we reject the null hypothesis of equal means. If F is small, the variability between treatments is small relative to the variation within treatments, and we do not reject the null hypothesis of equal means. (In this case, the sample data are consistent with the hypothesis that the population means are equal between groups.)

Computing this ratio (the F statistic) by hand is difficult and time consuming, so we will always let the computer do it for us. The computer generates what is called an ANOVA table:

| Source | SS | df | MS | F |
|---|---|---|---|---|
| Model/Group | SSG | $k-1$ | MSG = SSG/$(k-1)$ | MSG/MSE |
| Residual/Error | SSE | $n-k$ | MSE = SSE/$(n-k)$ | |
| Total | SST | $n-1$ | | |

What are these things?
- The **source** (of variability) column tells us where each piece of variation comes from.
- **SS = sum of squares** (sum of squared deviations):
  - SST measures variation of the data around the overall mean $\bar{x}$.
  - SSG measures variation of the group means around the overall mean.
  - SSE measures the variation of each observation around its group mean $\bar{x}_i$.
- **Degrees of freedom**:
  - $k-1$ for SSG, since it measures the variation of the $k$ group means about the overall mean.
  - $n-k$ for SSE, since it measures the variation of the $n$ observations about $k$ group means.
  - $n-1$ for SST, since it measures the variation of all $n$ observations about the overall mean.

- **MS = mean square = SS/df**: This is like a variance. Look at the formula we learned back in Chapter 1 for the sample standard deviation (p. 51): its numerator was a sum of squared deviations (just like our SS formulas), and it was divided by the appropriate number of degrees of freedom. It is interesting to note that another formula for MSE is
$$\mathrm{MSE} = \frac{(n_1-1)s_1^2 + (n_2-1)s_2^2 + \cdots + (n_k-1)s_k^2}{(n_1-1) + (n_2-1) + \cdots + (n_k-1)},$$
which may remind you of the pooled sample estimate of the population variance for 2-sample procedures (when we believe the two populations have the same variance). In fact, the quantity MSE is also called $s_p^2$.
- **The F statistic = MSG/MSE.** If the null hypothesis is true, the F statistic has an F distribution with $k-1$ and $n-k$ degrees of freedom in the numerator and denominator, respectively. If the alternative hypothesis is true, then F tends to be large. We reject $H_0$ in favor of $H_a$ if the F statistic is sufficiently large.

As with other hypothesis tests, we determine whether the F statistic is large by finding a corresponding P-value. For this, we use Table E. Since the alternative hypothesis is always the same (there is no 1-sided vs. 2-sided distinction), the test is single-tailed, like the chi-squared test. Nevertheless, reading the correct P-value from the table requires knowing the number of degrees of freedom associated with both the numerator (MSG) and the denominator (MSE) of the F-value. Look at Table E: across the top are the numerator df, and down the left side are the denominator df. In the body of the table are the F values, with the corresponding tail probabilities (the probability of getting an F statistic larger than that if the null hypothesis is true) listed alongside.

Example: Determine
- $P(F_{3,6} > 9.78) = 0.01$
- $P(F_{2,20} > 5)$: between 0.01 and 0.025

Example: A firm wishes to compare four programs for training workers to perform a certain manual task. Twenty new employees are randomly assigned to the training programs, with 5 in each program.
At the end of the training period, a test is conducted to see how quickly trainees can perform the task. The number of times the task is performed per minute is recorded for each trainee. [The table of recorded rates for Programs 1 through 4 did not survive transcription.] The data were entered into Stata, which produced an Analysis of Variance table (Source, SS, df, MS, F, and Prob > F for Between groups, Within groups, and Total), together with Bartlett's test for equal variances; the numerical entries were lost in transcription, and the between-groups P-value was X-ed out in the original. The X-ed out P-value is between [lower bound lost] and 0.01.
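Although we let software build the ANOVA table, the computation is mechanical. Here is a sketch in NumPy using a small made-up dataset (not the training-program data, which did not survive transcription); the P-value is estimated by simulating the $F_{k-1,\,n-k}$ distribution rather than reading Table E:

```python
import numpy as np

groups = [np.array([1., 2., 3.]),   # hypothetical data, k = 3 groups
          np.array([2., 3., 4.]),
          np.array([6., 7., 8.])]

k = len(groups)
n = sum(len(g) for g in groups)
allx = np.concatenate(groups)
grand = allx.mean()                                      # overall mean x-bar

SSG = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
SSE = sum(((g - g.mean()) ** 2).sum() for g in groups)
SST = ((allx - grand) ** 2).sum()                        # equals SSG + SSE

MSG = SSG / (k - 1)      # mean square for groups
MSE = SSE / (n - k)      # mean square error (this is also s_p^2)
F = MSG / MSE

# Estimate P(F_{k-1, n-k} > F) by simulation instead of Table E:
rng = np.random.default_rng(0)
p = (rng.f(k - 1, n - k, size=1_000_000) > F).mean()

print(SSG, SSE, SST)   # 42.0 6.0 48.0
print(F)               # 21.0
print(p)               # about 0.002: significant at the 5% level
```

Note that the decomposition SST = SSG + SSE holds exactly, which is a useful sanity check on any hand computation.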

Stata gives us a line at the bottom (the one about Bartlett's test) which reinforces our belief that the variances are the same for all groups.

Example: In an experiment to investigate the performance of six different types of spark plug intended for use on a two-stroke motorcycle, ten plugs of each brand were tested, and the number of driving miles (at a constant speed) until plug failure was recorded. A partial ANOVA table appears below; fill in the missing values. [The table's entries (df, SS, MS, and F for Brand, Error, and Total) were lost in transcription.]

Note: One more thing you will often find on an ANOVA table is $R^2$ (the coefficient of determination). It indicates the ratio of the variability between group means in the sample to the overall sample variability, meaning that it has an interpretation similar to that of $R^2$ in linear regression.

## 2.4 Multiple Comparisons

As with other tests of significance, one-way ANOVA has the following steps:

1. State the hypotheses (see Section 1.2).
2. Compute a test statistic (here it is $F_{\text{df numer., df denom.}}$), and use it to determine a probability of getting a sample as extreme or more so under the null hypothesis.
3. Apply a decision rule: at the $\alpha$ level of significance, reject $H_0$ if $P(F_{k-1,\,n-k} > F_{\text{computed}}) < \alpha$. Do not reject $H_0$ if $P > \alpha$.

If $P > \alpha$, then we have no reason to reject the null hypothesis. We state this as our conclusion along with the relevant information (F-value, numerator df, denominator df, P-value). Ideally, a person conducting the study will have some preconceived hypotheses (more specialized than the $H_0$, $H_a$ we stated for ANOVA, and ones which she held before ever collecting/looking at the data) about the group means that she wishes to investigate. When this is the case, she may go ahead and explore them (even if ANOVA did not indicate an overall difference in group means), often employing the method of contrasts.
We will not learn this method as a class, but if you wish to know more, some information is given on pp. [page numbers lost]. When we have no such preconceived leanings about the group means, it is, generally speaking, inappropriate to continue searching for evidence of a difference in means if our F-value from ANOVA was not significant. If, however, $P < \alpha$, then we know that at least two means are not equal, and the door is open to our trying to determine which ones. In this case, we follow up a significant F-statistic with pairwise comparisons of the means, to see which are significantly different from each other. This involves doing a t-test between each pair of means, using the pooled estimate $s_p$ for the (assumed) common standard deviation of all groups (see the MS bullet in Section 2.3):

$$t_{ij} = \frac{\bar{x}_i - \bar{x}_j}{s_p\sqrt{1/n_i + 1/n_j}}.$$

To determine whether this $t_{ij}$ is statistically significant, we could just go to Table D with $n-k$ degrees of freedom (the df associated with $s_p$). However, depending on the number $k$ of groups, we might be doing many comparisons, and recall that statistical significance can occur simply by chance (that, in fact, is built into the interpretation of the P-value), and it becomes more and more likely to occur as the number of tests we conduct on the same dataset grows. If we are going to conduct many tests, and want the overall probability of rejecting any of the null hypotheses (equal means between group pairs) in the process to be no more than $\alpha$, then we must adjust the significance level for each individual comparison to be much smaller than $\alpha$. There are a number of different approaches that have been proposed for choosing the individual-test

significance level so as to get an overall family significance of $\alpha$, and we need not concern ourselves with the details of such proposals. When software is available that can carry out the details and report the results to us, we are most likely agreeable to using whichever proposal(s) have been incorporated into the software. The most common method of adjusting for the fact that you are doing multiple comparisons is one developed by Tukey. Stata provides, among others, the Bonferroni approach for pairwise comparisons, which is an approach mentioned in our text, pp. [page numbers lost].

Example: Recall our data from a company looking at training programs for manual tasks carried out by its employees. The results were statistically significant, allowing us to conclude that not all of the training programs had the same mean result. As a next step, we use Bonferroni multiple comparisons, with the results as reported by Stata in a "Comparison of post-test by program (Bonferroni)" matrix of row-mean minus column-mean differences and adjusted P-values (the numerical entries were lost in transcription). Where the row labeled 2 meets the column labeled 1, we are told that the sample mean response for Program 2 was 3 lower than the mean response for Program 1 (Row Mean − Col Mean = −3), and the adjusted Bonferroni probability is [lost]. Thus, this difference is not statistically significant at the 5% level: we cannot conclude that the mean response from Program 1 is actually different from the mean response from Program 2. Which programs do have statistically significantly different (at significance level 5%) mean responses?

- Program 3 is different from Program 2, with Program 3 apparently better.
- Program 4 is different from Program 1, with Program 1 apparently better.
- Program 4 is different from Program 3, with Program 3 apparently better.

Apparently, Programs 1 and 3 are the most successful, with no statistically significant difference between them. At this stage, other factors, such as how much it will cost the company to implement the two programs, may be used to determine which program will be set in place.
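Pairwise comparisons of this kind can be sketched directly from the formula for $t_{ij}$, with the Bonferroni idea of testing each of the $m$ pairs at level $\alpha/m$. The three-group data below are made up (not the training-program data), and the two-sided P-values are estimated by simulating the $t(n-k)$ distribution rather than using Table D:

```python
import math
from itertools import combinations
import numpy as np

groups = [np.array([1., 2., 3.]),   # hypothetical data, k = 3 groups
          np.array([2., 3., 4.]),
          np.array([6., 7., 8.])]

k = len(groups)
n = sum(len(g) for g in groups)
SSE = sum(((g - g.mean()) ** 2).sum() for g in groups)
sp = math.sqrt(SSE / (n - k))            # pooled s.d. (the square root of MSE)

m = k * (k - 1) // 2                     # number of pairwise comparisons
alpha_each = 0.05 / m                    # Bonferroni: per-comparison level

rng = np.random.default_rng(0)
null_t = rng.standard_t(n - k, size=500_000)   # t(n-k) draws under H0

for i, j in combinations(range(k), 2):
    xi, xj = groups[i], groups[j]
    t = (xi.mean() - xj.mean()) / (sp * math.sqrt(1/len(xi) + 1/len(xj)))
    p = (np.abs(null_t) >= abs(t)).mean()      # simulated two-sided P-value
    print(f"groups {i+1} vs {j+1}: t = {t:6.3f}, "
          f"significant at family alpha 0.05: {p < alpha_each}")
```

With these numbers, only the comparisons involving the third group come out significant, mirroring the idea that the family-wise correction makes each individual test more conservative.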
Note: sometimes, instead of giving P-values, a software package will generate confidence intervals for the differences between means. Just remember that if the CI includes 0, there is no statistically significant difference between the means.

# 3 Two-Way ANOVA

Two-way ANOVA allows us to compare population means when the populations are classified according to two (categorical) factors.

Example: We might like to look at SAT scores of students who are male or female (first factor) and either have or have not had a preparatory course (second factor).

Example: A researcher wants to investigate the effects of the amounts of calcium and magnesium in a rat's diet on the rat's blood pressure. Diets including high, medium, and low amounts of each mineral (but otherwise identical) will be fed to the rats, and after a specified time on the diet, the blood pressure will be

measured. Notice that the design includes nine different treatments, because there are three levels to each of the two factors.

Advantages of a two-way design over studying one factor at a time:

- It usually requires a smaller total sample size, since you are studying two things at once [rat diet example, p. 800].
- It removes some of the random variability (some of the random variability is now explained by the second factor, so you can more easily find significant differences).
- We can look at interactions between factors (a significant interaction means the effect of one factor changes depending on the level of the other factor).

Examples of (potential) interaction:

- Radon exposure (high/medium/low) and smoking. High radon levels increase the rate of lung cancer somewhat. Smoking increases the risk of lung cancer. But if you are exposed to radon *and* smoke, then your lung cancer rates skyrocket. Therefore, the effect of radon on lung cancer rates is small for non-smokers but big for smokers: we can't talk about the effect of radon without talking about whether or not the person is a smoker.
- Age of person (0–10, 11–20, 21+) and effect of pesticides (low/high).
- Gender and effect of different legal drugs (at standard doses).

**The two-way ANOVA table.** Below is the outline of a two-way ANOVA table, with factors A and B having I and J levels, respectively.

| Source | df | SS | MS | F | P-value |
|---|---|---|---|---|---|
| A | $I-1$ | SSA | MSA | MSA/MSE | |
| B | $J-1$ | SSB | MSB | MSB/MSE | |
| A × B | $(I-1)(J-1)$ | SSAB | MSAB | MSAB/MSE | |
| Error | $n-IJ$ | SSE | MSE | | |
| Total | $n-1$ | SST | | | |
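For a balanced design (equal cell counts), the sums of squares in this table can be computed from the cell, row, column, and grand means. A NumPy sketch with made-up data (I = J = 2 levels, 2 replicates per cell, values chosen for easy arithmetic):

```python
import numpy as np

# data[i, j, :] holds the replicates for level i of A and level j of B
data = np.array([[[1., 3.], [3., 5.]],
                 [[5., 7.], [7., 9.]]])
I, J, r = data.shape

cell = data.mean(axis=2)      # cell means x-bar_ij
A = cell.mean(axis=1)         # factor A (row) means
B = cell.mean(axis=0)         # factor B (column) means
grand = data.mean()           # overall mean

SSA = J * r * ((A - grand) ** 2).sum()
SSB = I * r * ((B - grand) ** 2).sum()
SSAB = r * ((cell - A[:, None] - B[None, :] + grand) ** 2).sum()
SSE = ((data - cell[:, :, None]) ** 2).sum()
SST = ((data - grand) ** 2).sum()

print(SSA, SSB, SSAB, SSE, SST)   # 32.0 8.0 0.0 8.0 48.0  (SST = sum of parts)
```

Here the interaction term comes out exactly zero because the cell means were chosen to be additive in the two factors; real data would typically give a nonzero SSAB.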

The general layout of the ANOVA table should be familiar to us from the ANOVA tables we have seen for regression and one-way ANOVA. Notice that this time we are dividing the variation into four components:

1. the variation explained by factor A,
2. the variation explained by factor B,
3. the variation explained by the interaction of A and B, and
4. the variation attributed to randomness.

Since there are three different values of F, we must be doing three different hypothesis tests at once. We'll get to the hypotheses of these tests shortly.

**The two-way ANOVA model.** The model for two-way ANOVA is that each of the IJ groups has a normal distribution with potentially different means ($\mu_{ij}$) but a common standard deviation ($\sigma$). That is,

$$x_{ijk} = \underbrace{\mu_{ij}}_{\text{group mean}} + \underbrace{\epsilon_{ijk}}_{\text{residual}}, \quad\text{where } \epsilon_{ijk} \sim N(0,\sigma).$$

As usual, we will use two-way ANOVA provided it is reasonable to assume normal group distributions and the ratio of the largest group standard deviation to the smallest group standard deviation is at most 2.

**Main effects.** Example: We consider whether classifying by diagnosis (anxiety, depression, DCFS/Court referred) and prior abuse (yes/no) is related to mean BC (Being Cautious) score. [The table of cell mean BC scores for each Diagnosis × Abused/Not abused combination, and the numerical entries of the accompanying ANOVA table (rows: Diagnosis, Ever abused, D × E, Error, Total; total df = 67), were lost in transcription.]

The table has three P-values, corresponding to three tests of significance:

I. $H_0$: The mean BC score is the same for each of the three diagnoses. $H_a$: The mean BC score is not the same for all three diagnoses. The evidence here is not significant enough to reject the null hypothesis ($F = 2.33$, df = 2 and 62, $P = 0.11$).

II. $H_0$: There is no main effect due to ever being abused. $H_a$: There is a main effect due to being abused. The evidence is significant, and we conclude a main effect exists ($F = 17.2$, df = 1 and 62; P-value lost in transcription).

III. $H_0$: There is no interaction effect between diagnosis and ever being abused. $H_a$: There is an interaction effect between the two variables. The evidence here is not significant enough to reject the null hypothesis ($F = 1.73$, df = 2 and 62, $P = 0.186$).

When a main effect has been found for just one variable, with no interaction between the variables, we might combine data across diagnoses and perform one of the other tests we know that is applicable. (A two-sample t test and one-way ANOVA are both options here, since combining the information leaves us with just two groups.) But we might also perform a simpler task: draw a plot of the main effects due to abuse.
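The two-way model stated above can be read as a data-generating recipe: pick the group means $\mu_{ij}$, then add $N(0,\sigma)$ noise for each observation. A simulation sketch with made-up means, confirming that the cell averages recover the $\mu_{ij}$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([[2., 4.],    # hypothetical group means mu_ij (I = J = 2)
               [6., 8.]])
sigma = 1.0                 # common standard deviation across all groups
r = 10_000                  # replicates per cell (large, to shrink noise)

# x_ijk = mu_ij + eps_ijk, with eps_ijk ~ N(0, sigma)
x = mu[:, :, None] + rng.normal(0.0, sigma, size=(2, 2, r))

cell_means = x.mean(axis=2)
print(np.round(cell_means, 2))   # close to mu
```

With many replicates per cell, each cell average lands within a few hundredths of its $\mu_{ij}$, which is exactly the behavior the model promises.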

**Interaction effects.** Example: We consider whether mean BSI (Belonging/Social Interest) is the same after classifying people on the basis of diagnosis and whether or not they've been abused. [The table of cell mean BSI scores and the numerical entries of the accompanying ANOVA table (rows: Diagnosis, Ever abused, D × E, Error, Total; total df = 67) were lost in transcription; the D × E interaction was significant.]

Since the interaction is significant, let's look at the individual mean BSI at each level of diagnosis on an interaction plot. Knowing how many people were reflected in each category (information that is not provided here) would allow us to conduct 2-sample t tests at each level of diagnosis. Such tests reveal that there is a significant difference in mean BSI between those who have ever been abused and those not abused only for those who have a DCFS/Court Referred disorder. There is no statistically significant difference between these two groups for those with a depressive or anxiety disorder (though it's pretty close for those with an anxiety disorder).

Example: Promotional fliers. [Exercise 13.15, p. 821 in Moore/McCabe] [The tables of means, standard deviations, and counts of expected price (`expprice`) by discount level and number of promotions, and the two interaction plots of mean `expprice` against promos and against discount, were lost in transcription. The two-way ANOVA output (Sum Sq, Df, F value, and Pr(>F) for promos, discount, promos:discount, and Residuals) survives only in part: both main effects had Pr(>F) < 2e-16.]

Question: Was it worth plotting the interaction effects, or would we have learned the same things plotting only the main effects?


### HYPOTHESIS TESTING: CONFIDENCE INTERVALS, T-TESTS, ANOVAS, AND REGRESSION

HYPOTHESIS TESTING: CONFIDENCE INTERVALS, T-TESTS, ANOVAS, AND REGRESSION HOD 2990 10 November 2010 Lecture Background This is a lightning speed summary of introductory statistical methods for senior undergraduate

### Parametric and non-parametric statistical methods for the life sciences - Session I

Why nonparametric methods What test to use? Rank Tests Parametric and non-parametric statistical methods for the life sciences - Session I Liesbeth Bruckers Geert Molenberghs Interuniversity Institute

### Solutions to Homework 10 Statistics 302 Professor Larget

s to Homework 10 Statistics 302 Professor Larget Textbook Exercises 7.14 Rock-Paper-Scissors (Graded for Accurateness) In Data 6.1 on page 367 we see a table, reproduced in the table below that shows the

### Statistics courses often teach the two-sample t-test, linear regression, and analysis of variance

2 Making Connections: The Two-Sample t-test, Regression, and ANOVA In theory, there s no difference between theory and practice. In practice, there is. Yogi Berra 1 Statistics courses often teach the two-sample

### Final Exam Practice Problem Answers

Final Exam Practice Problem Answers The following data set consists of data gathered from 77 popular breakfast cereals. The variables in the data set are as follows: Brand: The brand name of the cereal

### Estimation of σ 2, the variance of ɛ

Estimation of σ 2, the variance of ɛ The variance of the errors σ 2 indicates how much observations deviate from the fitted surface. If σ 2 is small, parameters β 0, β 1,..., β k will be reliably estimated

### 13: Additional ANOVA Topics. Post hoc Comparisons

13: Additional ANOVA Topics Post hoc Comparisons ANOVA Assumptions Assessing Group Variances When Distributional Assumptions are Severely Violated Kruskal-Wallis Test Post hoc Comparisons In the prior

### Unit 31: One-Way ANOVA

Unit 31: One-Way ANOVA Summary of Video A vase filled with coins takes center stage as the video begins. Students will be taking part in an experiment organized by psychology professor John Kelly in which

### Hypothesis Testing: Two Means, Paired Data, Two Proportions

Chapter 10 Hypothesis Testing: Two Means, Paired Data, Two Proportions 10.1 Hypothesis Testing: Two Population Means and Two Population Proportions 1 10.1.1 Student Learning Objectives By the end of this

### One-Way Analysis of Variance

One-Way Analysis of Variance Note: Much of the math here is tedious but straightforward. We ll skim over it in class but you should be sure to ask questions if you don t understand it. I. Overview A. We

### Chi-square test Fisher s Exact test

Lesson 1 Chi-square test Fisher s Exact test McNemar s Test Lesson 1 Overview Lesson 11 covered two inference methods for categorical data from groups Confidence Intervals for the difference of two proportions

### ANOVA ANOVA. Two-Way ANOVA. One-Way ANOVA. When to use ANOVA ANOVA. Analysis of Variance. Chapter 16. A procedure for comparing more than two groups

ANOVA ANOVA Analysis of Variance Chapter 6 A procedure for comparing more than two groups independent variable: smoking status non-smoking one pack a day > two packs a day dependent variable: number of

### 2. Simple Linear Regression

Research methods - II 3 2. Simple Linear Regression Simple linear regression is a technique in parametric statistics that is commonly used for analyzing mean response of a variable Y which changes according

### COMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES.

277 CHAPTER VI COMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES. This chapter contains a full discussion of customer loyalty comparisons between private and public insurance companies

### Topic 8. Chi Square Tests

BE540W Chi Square Tests Page 1 of 5 Topic 8 Chi Square Tests Topics 1. Introduction to Contingency Tables. Introduction to the Contingency Table Hypothesis Test of No Association.. 3. The Chi Square Test

### Simple Regression Theory II 2010 Samuel L. Baker

SIMPLE REGRESSION THEORY II 1 Simple Regression Theory II 2010 Samuel L. Baker Assessing how good the regression equation is likely to be Assignment 1A gets into drawing inferences about how close the

### Multiple Linear Regression

Multiple Linear Regression A regression with two or more explanatory variables is called a multiple regression. Rather than modeling the mean response as a straight line, as in simple regression, it is

### Chapter 7 Notes - Inference for Single Samples. You know already for a large sample, you can invoke the CLT so:

Chapter 7 Notes - Inference for Single Samples You know already for a large sample, you can invoke the CLT so: X N(µ, ). Also for a large sample, you can replace an unknown σ by s. You know how to do a

### THE KRUSKAL WALLLIS TEST

THE KRUSKAL WALLLIS TEST TEODORA H. MEHOTCHEVA Wednesday, 23 rd April 08 THE KRUSKAL-WALLIS TEST: The non-parametric alternative to ANOVA: testing for difference between several independent groups 2 NON

### DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9

DEPARTMENT OF PSYCHOLOGY UNIVERSITY OF LANCASTER MSC IN PSYCHOLOGICAL RESEARCH METHODS ANALYSING AND INTERPRETING DATA 2 PART 1 WEEK 9 Analysis of covariance and multiple regression So far in this course,

### ABSORBENCY OF PAPER TOWELS

ABSORBENCY OF PAPER TOWELS 15. Brief Version of the Case Study 15.1 Problem Formulation 15.2 Selection of Factors 15.3 Obtaining Random Samples of Paper Towels 15.4 How will the Absorbency be measured?

### individualdifferences

1 Simple ANalysis Of Variance (ANOVA) Oftentimes we have more than two groups that we want to compare. The purpose of ANOVA is to allow us to compare group means from several independent samples. In general,

### Analysing Questionnaires using Minitab (for SPSS queries contact -) Graham.Currell@uwe.ac.uk

Analysing Questionnaires using Minitab (for SPSS queries contact -) Graham.Currell@uwe.ac.uk Structure As a starting point it is useful to consider a basic questionnaire as containing three main sections:

### BA 275 Review Problems - Week 5 (10/23/06-10/27/06) CD Lessons: 48, 49, 50, 51, 52 Textbook: pp. 380-394

BA 275 Review Problems - Week 5 (10/23/06-10/27/06) CD Lessons: 48, 49, 50, 51, 52 Textbook: pp. 380-394 1. Does vigorous exercise affect concentration? In general, the time needed for people to complete

### MULTIPLE REGRESSION WITH CATEGORICAL DATA

DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 86 MULTIPLE REGRESSION WITH CATEGORICAL DATA I. AGENDA: A. Multiple regression with categorical variables. Coding schemes. Interpreting

### Topic 9. Factorial Experiments [ST&D Chapter 15]

Topic 9. Factorial Experiments [ST&D Chapter 5] 9.. Introduction In earlier times factors were studied one at a time, with separate experiments devoted to each factor. In the factorial approach, the investigator

### Research Methods & Experimental Design

Research Methods & Experimental Design 16.422 Human Supervisory Control April 2004 Research Methods Qualitative vs. quantitative Understanding the relationship between objectives (research question) and

### Good luck! BUSINESS STATISTICS FINAL EXAM INSTRUCTIONS. Name:

Glo bal Leadership M BA BUSINESS STATISTICS FINAL EXAM Name: INSTRUCTIONS 1. Do not open this exam until instructed to do so. 2. Be sure to fill in your name before starting the exam. 3. You have two hours

### THE FIRST SET OF EXAMPLES USE SUMMARY DATA... EXAMPLE 7.2, PAGE 227 DESCRIBES A PROBLEM AND A HYPOTHESIS TEST IS PERFORMED IN EXAMPLE 7.

THERE ARE TWO WAYS TO DO HYPOTHESIS TESTING WITH STATCRUNCH: WITH SUMMARY DATA (AS IN EXAMPLE 7.17, PAGE 236, IN ROSNER); WITH THE ORIGINAL DATA (AS IN EXAMPLE 8.5, PAGE 301 IN ROSNER THAT USES DATA FROM

### Once saved, if the file was zipped you will need to unzip it. For the files that I will be posting you need to change the preferences.

1 Commands in JMP and Statcrunch Below are a set of commands in JMP and Statcrunch which facilitate a basic statistical analysis. The first part concerns commands in JMP, the second part is for analysis

### HYPOTHESIS TESTING (ONE SAMPLE) - CHAPTER 7 1. used confidence intervals to answer questions such as...

HYPOTHESIS TESTING (ONE SAMPLE) - CHAPTER 7 1 PREVIOUSLY used confidence intervals to answer questions such as... You know that 0.25% of women have red/green color blindness. You conduct a study of men

### C. The null hypothesis is not rejected when the alternative hypothesis is true. A. population parameters.

Sample Multiple Choice Questions for the material since Midterm 2. Sample questions from Midterms and 2 are also representative of questions that may appear on the final exam.. A randomly selected sample

### Post-hoc comparisons & two-way analysis of variance. Two-way ANOVA, II. Post-hoc testing for main effects. Post-hoc testing 9.

Two-way ANOVA, II Post-hoc comparisons & two-way analysis of variance 9.7 4/9/4 Post-hoc testing As before, you can perform post-hoc tests whenever there s a significant F But don t bother if it s a main

### Association Between Variables

Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi

### Outline. Topic 4 - Analysis of Variance Approach to Regression. Partitioning Sums of Squares. Total Sum of Squares. Partitioning sums of squares

Topic 4 - Analysis of Variance Approach to Regression Outline Partitioning sums of squares Degrees of freedom Expected mean squares General linear test - Fall 2013 R 2 and the coefficient of correlation

### Statistical Models in R

Statistical Models in R Some Examples Steven Buechler Department of Mathematics 276B Hurley Hall; 1-6233 Fall, 2007 Outline Statistical Models Structure of models in R Model Assessment (Part IA) Anova

### Two-sample t-tests. - Independent samples - Pooled standard devation - The equal variance assumption

Two-sample t-tests. - Independent samples - Pooled standard devation - The equal variance assumption Last time, we used the mean of one sample to test against the hypothesis that the true mean was a particular

### NCSS Statistical Software

Chapter 06 Introduction This procedure provides several reports for the comparison of two distributions, including confidence intervals for the difference in means, two-sample t-tests, the z-test, the

### 10. Analysis of Longitudinal Studies Repeat-measures analysis

Research Methods II 99 10. Analysis of Longitudinal Studies Repeat-measures analysis This chapter builds on the concepts and methods described in Chapters 7 and 8 of Mother and Child Health: Research methods.

### Elementary Statistics Sample Exam #3

Elementary Statistics Sample Exam #3 Instructions. No books or telephones. Only the supplied calculators are allowed. The exam is worth 100 points. 1. A chi square goodness of fit test is considered to

### Regression step-by-step using Microsoft Excel

Step 1: Regression step-by-step using Microsoft Excel Notes prepared by Pamela Peterson Drake, James Madison University Type the data into the spreadsheet The example used throughout this How to is a regression

### Biostatistics: DESCRIPTIVE STATISTICS: 2, VARIABILITY

Biostatistics: DESCRIPTIVE STATISTICS: 2, VARIABILITY 1. Introduction Besides arriving at an appropriate expression of an average or consensus value for observations of a population, it is important to

### SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question.

Ch. 10 Chi SquareTests and the F-Distribution 10.1 Goodness of Fit 1 Find Expected Frequencies Provide an appropriate response. 1) The frequency distribution shows the ages for a sample of 100 employees.

### research/scientific includes the following: statistical hypotheses: you have a null and alternative you accept one and reject the other

1 Hypothesis Testing Richard S. Balkin, Ph.D., LPC-S, NCC 2 Overview When we have questions about the effect of a treatment or intervention or wish to compare groups, we use hypothesis testing Parametric

### 1 Nonparametric Statistics

1 Nonparametric Statistics When finding confidence intervals or conducting tests so far, we always described the population with a model, which includes a set of parameters. Then we could make decisions

### Chapter 7: Simple linear regression Learning Objectives

Chapter 7: Simple linear regression Learning Objectives Reading: Section 7.1 of OpenIntro Statistics Video: Correlation vs. causation, YouTube (2:19) Video: Intro to Linear Regression, YouTube (5:18) -

### ECON 142 SKETCH OF SOLUTIONS FOR APPLIED EXERCISE #2

University of California, Berkeley Prof. Ken Chay Department of Economics Fall Semester, 005 ECON 14 SKETCH OF SOLUTIONS FOR APPLIED EXERCISE # Question 1: a. Below are the scatter plots of hourly wages