# UNDERSTANDING THE DEPENDENT-SAMPLES t TEST


A dependent-samples t test (a.k.a. matched-samples, paired-samples, matched-pairs, repeated-measures, within-subjects, or correlated-groups t test) assesses whether the mean difference between paired/matched observations is significantly different from zero. That is, the dependent-samples t test procedure evaluates whether there is a significant difference between the means of the two variables (test occasions or events). This design is also referred to as a correlated-groups design because the participants in the groups are not independently assigned. The participants are either the same individuals tested (assessed) on two occasions or under two conditions on one measure, or two groups of participants that are matched (paired) on one or more characteristics (e.g., IQ, age, gender) and tested on one measure.

HYPOTHESES FOR THE DEPENDENT-SAMPLES t TEST

Null Hypothesis:

H0: µ1 = µ2  -or-  H0: µ1 − µ2 = 0

where µ1 stands for the mean of the first variable/occasion/event and µ2 stands for the mean of the second variable/occasion/event. If we think of the data as the set of difference scores, the null hypothesis becomes the hypothesis that the mean of a population of difference scores (denoted µD or δ) equals 0. Because it can be shown that µD = µ1 − µ2, we can write H0: µD = µ1 − µ2 = 0 (or H0: δ = µ1 − µ2 = 0). The hypothesized population parameter defined by the null hypothesis will be δ = 0, where δ (delta) is defined as the mean of the difference scores across the two measurements.

Alternative (Non-Directional) Hypothesis:

Ha: µ1 ≠ µ2  -or-  Ha: µ1 − µ2 ≠ 0

Alternative (Directional) Hypothesis:

Ha: µ1 < µ2  -or-  Ha: µ1 > µ2  (depending on the direction)

NOTE: the subscripts (1 and 2) can be substituted with the variable/occasion/event identifiers. For example: H0: µpre = µpost and Ha: µpre ≠ µpost.

ASSUMPTIONS UNDERLYING THE DEPENDENT-SAMPLES t TEST

1. The dependent variable (difference scores) is normally distributed in the two conditions.
2. The independent variable is dichotomous and its levels (groups or occasions) are paired, or matched, in some way (e.g., pre-post, concern for pay vs. concern for security, etc.).

When there is an extreme violation of the normality assumption, or when the data are not of appropriate scaling, the Wilcoxon Matched-Pairs Signed-Ranks Test should be used instead.

DEGREES OF FREEDOM

Because we are working with difference (paired) scores, N is equal to the number of differences (i.e., the number of pairs of observations). We lose (restrict) one df to the mean and are left with N − 1 df. In other words, df = number of pairs minus the 1 restriction.
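The computations described above (difference scores, N − 1 degrees of freedom, and the t statistic testing H0: µD = 0) can be sketched in a few lines. The pre/post scores below are hypothetical values made up for illustration; the document's own examples are run in SPSS.

```python
import math

# Hypothetical pre/post scores for 8 participants (illustrative only)
pre  = [5, 7, 6, 8, 4, 6, 7, 5]
post = [6, 8, 8, 9, 5, 7, 9, 6]

diffs = [b - a for a, b in zip(pre, post)]   # difference scores
n = len(diffs)                               # N = number of pairs
df = n - 1                                   # one df lost (restricted) to the mean

mean_d = sum(diffs) / n                      # mean of the difference scores
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
se_d = sd_d / math.sqrt(n)                   # standard error of the mean difference
t_obt = mean_d / se_d                        # obtained t, tests H0: mu_D = 0

print(f"df = {df}, t = {t_obt:.2f}")
```

With these made-up scores the sketch gives df = 7 and t ≈ 7.64; only the structure of the calculation, not the numbers, carries over to the SPSS examples below.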

EFFECT SIZE STATISTICS FOR THE DEPENDENT-SAMPLES t TEST

Cohen's d (which can range in value from negative infinity to positive infinity) evaluates the degree, measured in standard deviation units, to which the mean of the difference scores deviates from zero. If the calculated d equals 0, the mean of the difference scores is equal to zero; as d deviates from 0, the effect size becomes larger. The d statistic may be computed using the following equation:

d = Mean / SD

where the Mean and the Std. Deviation (SD) are reported in the SPSS output under Paired Differences.

The d statistic can also be computed from the reported values for t (the obtained t value) and N (the number of pairs) as follows:

d = t / √N

So what does this Cohen's d mean? Statistically, it means that the difference between the two sample means is d (e.g., .52) standard deviation units (in absolute-value terms) from zero, which is the hypothesized difference between the two population means. Effect sizes provide a measure of the magnitude of the difference expressed in standard deviation units of the original measurement. It is a measure of the practical importance of a significant finding.

SAMPLE APA RESULTS

Using an alpha level of .05, a dependent-samples t test was conducted to evaluate whether students' performance using two methods of mathematics instruction differed significantly. The results indicated that the students' average performance (score out of 10) using the first method of mathematics instruction (M = 5.67, SD = 1.49) was significantly higher than their average performance using the second method (M = 4.50, SD = 1.83), t(29) = 2.83, p < .05, d = .52. The 95% confidence interval for the mean difference between the two methods of instruction was .32 to 2.02.

Note: there are several ways to interpret the results; the key is to indicate that there was a significant difference between the two methods at the .05 alpha level and to include, at a minimum, reference to the group means, the effect size, and the statistical strand.

t(29) = 2.83, p < .05, d = .52

- t: indicates that we are using a t test
- (29): indicates the degrees of freedom associated with this t test
- 2.83: indicates the obtained t statistic value (t_obt)
- p < .05: indicates the probability of obtaining the given t value by chance alone
- d = .52: indicates the effect size for the significant effect (the magnitude of the effect, measured in standard deviation units)
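The d = t / √N formula can be checked against the reported values for this example, t(29) = 2.83 with N = 30 pairs. A quick Python sketch (the original analysis was run in SPSS):

```python
import math

t_obt = 2.83      # reported obtained t value
n_pairs = 30      # df = 29, so N = 29 + 1 = 30 pairs

d = t_obt / math.sqrt(n_pairs)   # d = t / sqrt(N)
print(round(d, 2))               # matches the reported d = .52
```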

INTERPRETING THE DEPENDENT-SAMPLES t TEST

The first table (PAIRED SAMPLES STATISTICS) shows descriptive statistics that can be used to compare (describe) the alcohol and no-alcohol reaction time conditions. Note that the means for the two conditions look somewhat different. This might be due to chance (sampling fluctuation), so we will want to test this with the t test to determine whether the difference is significant.

The second table (PAIRED SAMPLES CORRELATIONS) provides the correlation between the two paired scores. In our example, the correlation between alcohol and no alcohol is r = .949, which is a very high positive (though imperfect) relationship. With a Sig. value of .000, this indicates that the relationship is significantly different from 0 (no relationship) at the .001 alpha level. Note, however, that this DOES NOT tell you whether there is a significant difference between the alcohol and no-alcohol reaction times. That is what the t in the third table tells us.

The third table (PAIRED SAMPLES TEST) shows information on the paired differences and the paired-samples t test. The Mean indicates the mean difference between the two conditions. In our example, we see 11.57, which is the mean difference between alcohol reaction time (42.07) and no-alcohol reaction time (30.50). That is, the reaction time for the alcohol condition is 11.57 (hundredths of a second) longer (slower) than the reaction time for the no-alcohol condition. The Std. Deviation is the standard deviation of the difference scores for the pairs, and the Std. Error Mean is the corresponding standard error of the mean. This table also provides the Lower and Upper values for our confidence interval. For our example, we used an alpha level of .05; therefore, our confidence interval is 95%, which results in a lower value of 10.69 and an upper value of 12.45. We also see the obtained t value (27.00 for our example) for the test statistic.
The degrees of freedom (df) for this example are 27, which is n − 1 (where n = number of pairs). For our example we had 28 pairs, and when we subtract the one restriction we get df = 27. The Sig. column provides the actual probability level for our example, which is shown to be .000 (i.e., < .001).

Note: if the paired mean difference and the obtained test statistic (t) had been negative, it simply would have meant that the second value (condition) was higher than the first value (condition).

We know that there is a statistically significant difference between the two conditions. That is, the mean difference is significantly different from zero (0). How do we know this?

METHOD ONE (most commonly used): comparing the Sig. (probability) value to the a priori alpha level. If p < α, we reject the null hypothesis of no difference. If p > α, we retain the null hypothesis of no difference. In our example, p is shown to be .000 (i.e., < .001) and α = .05; therefore, p < α, indicating that we should reject the null hypothesis of no difference and conclude that the average reaction time for the alcohol condition (M = 42.07) was significantly longer (slower) than the average reaction time for the no-alcohol condition (M = 30.50).

METHOD TWO: comparing the obtained t statistic value (t_obt = 27.00 for our example) to the t critical value (t_cv). Knowing that we are using a two-tailed (non-directional) t test, with an alpha level of .05 (α = .05) and df = 27, and looking at the Student's t distribution table, we find the critical value for this example to be 2.052. If t_obt > t_cv, we reject the null hypothesis of no difference. If t_obt < t_cv, we retain the null hypothesis of no difference. For our example, t_obt = 27.00 and t_cv = 2.052; therefore, t_obt > t_cv, so we reject the null hypothesis and conclude that there is a statistically significant difference between the two conditions. That is, the average reaction time for the alcohol condition (M = 42.07) was significantly longer (slower) than the average reaction time for the no-alcohol condition (M = 30.50).

METHOD THREE: examining the confidence interval and determining whether zero (the hypothesized mean difference) is contained within the lower and upper boundaries. If the confidence interval does not contain zero, we reject the null hypothesis of no difference. If the confidence interval does contain zero, we retain the null hypothesis of no difference. In our example, the lower boundary is 10.69 and the upper boundary is 12.45, which does not contain zero; therefore, we reject the null hypothesis of no difference. That is, the reaction time for the alcohol condition (M = 42.07) was significantly longer (slower) than the reaction time for the no-alcohol condition (M = 30.50). Note that if the upper and lower bounds of the confidence interval have the same sign (+ and +, or − and −), we know that the difference is statistically significant, because this means that the null finding of zero difference lies outside of the confidence interval.

As shown above, we reject the null hypothesis in favor of the alternative hypothesis. This indicates that the participants' reaction time for the alcohol condition (M = 42.07) was, on average, significantly higher (slower) than their reaction time for the no-alcohol condition (M = 30.50). Further, this indicates that the mean difference (11.57) between the two conditions was significantly different from zero.

CALCULATING AN EFFECT SIZE

Since we concluded that there was a significant difference, we will need to calculate an effect size. Had we not found a significant difference, typically no effect size would have to be calculated.
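The three decision rules are equivalent and can be sketched as follows, using the values reported for this example (the lower CI bound of 10.69 is back-computed from the reported mean difference and t value, since t = Mean / SE):

```python
p_value, alpha = 0.0005, 0.05        # SPSS reports Sig. = .000, i.e. p < .001
t_obt, t_cv = 27.00, 2.052           # obtained and critical t, df = 27
ci_lower, ci_upper = 10.69, 12.45    # 95% CI for the mean difference

reject_method_one = p_value < alpha                    # Sig. vs. a priori alpha
reject_method_two = abs(t_obt) > t_cv                  # obtained vs. critical t
reject_method_three = not (ci_lower <= 0 <= ci_upper)  # does the CI contain 0?

print(reject_method_one, reject_method_two, reject_method_three)
```

All three rules print True here, mirroring the conclusion above: reject the null hypothesis of no difference.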
A non-significant finding would have indicated that the participants' reaction times in the two conditions differed only due to random fluctuation or chance, i.e., that the mean difference was not significantly different from zero. To calculate the effect size for our example, we can use either of two formulas:

d = Mean / SD = 11.57 / 2.27 = 5.10

where the Mean and the Std. Deviation are reported in the SPSS output under Paired Differences.

The d statistic can also be computed from the reported values for t (the obtained t value) and N (the number of pairs) as follows:

d = t / √N = 27.00 / √28 = 5.10
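Both versions of the calculation can be verified quickly. In this Python sketch, the SD of the difference scores (2.27) is back-computed from the reported mean difference and t value rather than read from the SPSS output:

```python
import math

d_from_mean = 11.57 / 2.27        # d = Mean / SD of the difference scores
d_from_t = 27.00 / math.sqrt(28)  # d = t / sqrt(N), with N = 28 pairs

print(round(d_from_mean, 2), round(d_from_t, 2))   # both ≈ 5.10
```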

DEPENDENT-SAMPLES t TEST EXAMPLE

[Paired Samples Statistics table: Mean, N, Std. Deviation, and Std. Error Mean for the alcohol and no_alcohol conditions]

[Paired Samples Correlations table: N, Correlation, and Sig. for the pair alcohol & no_alcohol]

[Paired Samples Test table: Paired Differences for alcohol − no_alcohol (Mean, Std. Deviation, Std. Error Mean, and the 95% Confidence Interval of the Difference: Lower, Upper), plus t, df, and Sig. (2-tailed)]

Drop-Down Syntax for the Dependent-Samples t Test Example

T-TEST PAIRS = alcohol WITH no_alcohol (PAIRED)
  /CRITERIA = CI(.95)
  /MISSING = ANALYSIS.

Alternative Syntax for the Dependent-Samples t Test Example

t-test
  /pairs = alcohol no_alcohol.
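The confidence-interval columns of the Paired Samples Test table can be reconstructed from the other reported values, since t = Mean / (Std. Error Mean). A sketch, assuming the reported mean difference (11.57), obtained t (27.00), and critical t for df = 27 (2.052):

```python
mean_diff, t_obt, t_cv = 11.57, 27.00, 2.052

se = mean_diff / t_obt    # t = Mean / SE  =>  SE = Mean / t
margin = t_cv * se        # half-width of the 95% CI
lower = mean_diff - margin
upper = mean_diff + margin

print(round(lower, 2), round(upper, 2))   # → 10.69 12.45
```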

SAMPLE APA RESULTS

Using an alpha level of .05, a dependent-samples t test was conducted to evaluate whether students' reaction time differed significantly as a function of whether they had received no alcohol or 12 oz. of alcohol prior to a timed task. The results indicated that the students' average reaction time when given no alcohol (M = 30.50, SD = 7.17) was significantly lower (i.e., faster) than their average reaction time when they received 12 oz. of alcohol (M = 42.07, SD = 7.08), t(27) = 27.00, p < .05, d = 5.10. The 95% confidence interval for the mean difference between the two conditions was 10.69 to 12.45.

Note: there are several ways to interpret the results; the key is to indicate that there was a significant difference between the two conditions at the .05 alpha level and to include, at a minimum, reference to the group means, the effect size, and the statistical strand.

t(27) = 27.00, p < .05, d = 5.10

- t: indicates that we are using a t test
- (27): indicates the degrees of freedom associated with this t test
- 27.00: indicates the obtained t statistic value (t_obt)
- p < .05: indicates the probability of obtaining the given t value by chance alone
- d = 5.10: indicates the effect size for the significant effect (the magnitude of the effect, measured in standard deviation units)


### Confidence Intervals on Effect Size David C. Howell University of Vermont

Confidence Intervals on Effect Size David C. Howell University of Vermont Recent years have seen a large increase in the use of confidence intervals and effect size measures such as Cohen s d in reporting

### Introduction to Quantitative Methods

Introduction to Quantitative Methods October 15, 2009 Contents 1 Definition of Key Terms 2 2 Descriptive Statistics 3 2.1 Frequency Tables......................... 4 2.2 Measures of Central Tendencies.................

### Skewed Data and Non-parametric Methods

0 2 4 6 8 10 12 14 Skewed Data and Non-parametric Methods Comparing two groups: t-test assumes data are: 1. Normally distributed, and 2. both samples have the same SD (i.e. one sample is simply shifted

### Introduction. Statistics Toolbox

Introduction A hypothesis test is a procedure for determining if an assertion about a characteristic of a population is reasonable. For example, suppose that someone says that the average price of a gallon

### What is Effect Size? effect size in Information Technology, Learning, and Performance Journal manuscripts.

The Incorporation of Effect Size in Information Technology, Learning, and Performance Research Joe W. Kotrlik Heather A. Williams Research manuscripts published in the Information Technology, Learning,

### Hypothesis Testing. Reminder of Inferential Statistics. Hypothesis Testing: Introduction

Hypothesis Testing PSY 360 Introduction to Statistics for the Behavioral Sciences Reminder of Inferential Statistics All inferential statistics have the following in common: Use of some descriptive statistic

### 1.5 Oneway Analysis of Variance

Statistics: Rosie Cornish. 200. 1.5 Oneway Analysis of Variance 1 Introduction Oneway analysis of variance (ANOVA) is used to compare several means. This method is often used in scientific or medical experiments

### Confidence intervals

Confidence intervals Today, we re going to start talking about confidence intervals. We use confidence intervals as a tool in inferential statistics. What this means is that given some sample statistics,

### 12: Analysis of Variance. Introduction

1: Analysis of Variance Introduction EDA Hypothesis Test Introduction In Chapter 8 and again in Chapter 11 we compared means from two independent groups. In this chapter we extend the procedure to consider

### Analysis of Data. Organizing Data Files in SPSS. Descriptive Statistics

Analysis of Data Claudia J. Stanny PSY 67 Research Design Organizing Data Files in SPSS All data for one subject entered on the same line Identification data Between-subjects manipulations: variable to

### Difference of Means and ANOVA Problems

Difference of Means and Problems Dr. Tom Ilvento FREC 408 Accounting Firm Study An accounting firm specializes in auditing the financial records of large firm It is interested in evaluating its fee structure,particularly

### TABLE OF CONTENTS. About Chi Squares... 1. What is a CHI SQUARE?... 1. Chi Squares... 1. Hypothesis Testing with Chi Squares... 2

About Chi Squares TABLE OF CONTENTS About Chi Squares... 1 What is a CHI SQUARE?... 1 Chi Squares... 1 Goodness of fit test (One-way χ 2 )... 1 Test of Independence (Two-way χ 2 )... 2 Hypothesis Testing

### Impact of Enrollment Timing on Performance: The Case of Students Studying the First Course in Accounting

Journal of Accounting, Finance and Economics Vol. 5. No. 1. September 2015. Pp. 1 9 Impact of Enrollment Timing on Performance: The Case of Students Studying the First Course in Accounting JEL Code: M41

### This chapter discusses some of the basic concepts in inferential statistics.

Research Skills for Psychology Majors: Everything You Need to Know to Get Started Inferential Statistics: Basic Concepts This chapter discusses some of the basic concepts in inferential statistics. Details

### General Method: Difference of Means. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n 1, n 2 ) 1.

General Method: Difference of Means 1. Calculate x 1, x 2, SE 1, SE 2. 2. Combined SE = SE1 2 + SE2 2. ASSUMES INDEPENDENT SAMPLES. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n

### STATISTICS FOR PSYCHOLOGISTS

STATISTICS FOR PSYCHOLOGISTS SECTION: STATISTICAL METHODS CHAPTER: REPORTING STATISTICS Abstract: This chapter describes basic rules for presenting statistical results in APA style. All rules come from

### COMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES.

277 CHAPTER VI COMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES. This chapter contains a full discussion of customer loyalty comparisons between private and public insurance companies

### individualdifferences

1 Simple ANalysis Of Variance (ANOVA) Oftentimes we have more than two groups that we want to compare. The purpose of ANOVA is to allow us to compare group means from several independent samples. In general,

### Non-Inferiority Tests for One Mean

Chapter 45 Non-Inferiority ests for One Mean Introduction his module computes power and sample size for non-inferiority tests in one-sample designs in which the outcome is distributed as a normal random

### Chapter 5 Analysis of variance SPSS Analysis of variance

Chapter 5 Analysis of variance SPSS Analysis of variance Data file used: gss.sav How to get there: Analyze Compare Means One-way ANOVA To test the null hypothesis that several population means are equal,

### Opgaven Onderzoeksmethoden, Onderdeel Statistiek

Opgaven Onderzoeksmethoden, Onderdeel Statistiek 1. What is the measurement scale of the following variables? a Shoe size b Religion c Car brand d Score in a tennis game e Number of work hours per week

### Psychology 60 Fall 2013 Practice Exam Actual Exam: Next Monday. Good luck!

Psychology 60 Fall 2013 Practice Exam Actual Exam: Next Monday. Good luck! Name: 1. The basic idea behind hypothesis testing: A. is important only if you want to compare two populations. B. depends on

### Section 13, Part 1 ANOVA. Analysis Of Variance

Section 13, Part 1 ANOVA Analysis Of Variance Course Overview So far in this course we ve covered: Descriptive statistics Summary statistics Tables and Graphs Probability Probability Rules Probability

### Comparing Means in Two Populations

Comparing Means in Two Populations Overview The previous section discussed hypothesis testing when sampling from a single population (either a single mean or two means from the same population). Now we

### Permutation Tests for Comparing Two Populations

Permutation Tests for Comparing Two Populations Ferry Butar Butar, Ph.D. Jae-Wan Park Abstract Permutation tests for comparing two populations could be widely used in practice because of flexibility of

### Introduction. Hypothesis Testing. Hypothesis Testing. Significance Testing

Introduction Hypothesis Testing Mark Lunt Arthritis Research UK Centre for Ecellence in Epidemiology University of Manchester 13/10/2015 We saw last week that we can never know the population parameters