DDBA 8438: The T Test for Related Samples Video Podcast Transcript


JENNIFER ANN MORROW: Welcome to "The T Test for Related Samples." My name is Dr. Jennifer Ann Morrow. In today's demonstration, I will go over with you the definition of a T test for related samples. I will give you some of the alternative names for this statistic. I will give you a couple of sample research questions. I will talk about the advantages and disadvantages of this statistic. I will give you the formulas. I'll discuss the assumptions of this analysis. And I will show you how to calculate the effect size, and I will give you examples using both the formula and SPSS. Okay, let's get started.

Definition

JENNIFER ANN MORROW: A T test for related samples is a statistic that is used when you have only one group of participants and you want to measure them twice on the same dependent variable. Your sample will then contain two scores for each participant.

Alternative Names

JENNIFER ANN MORROW: There are many different names for a T test for related samples: dependent T test, repeated samples T test, paired samples T test, correlated samples T test, within-subjects T test, and within-groups T test. These are all the same thing.

Sample Research Questions

JENNIFER ANN MORROW: Some examples of research questions that can be addressed by using a T test for related samples are:

Question one: Does self-esteem change from the beginning to the end of treatment? My independent variable would be time; before treatment and end of treatment are the two groups, or two levels, of that independent variable, and my dependent variable would be self-esteem.

Question two: Does alcohol impact memory? My independent variable here would be alcohol, and my two levels or groups would be none and alcohol, and my dependent variable would

be memory. And, again, don't forget: the same participants are in both groups, or levels, of your independent variable.

Advantages and Disadvantages

JENNIFER ANN MORROW: There are many advantages of a related samples T test. First, you need fewer participants than you do for a T test for independent samples. You are able to study changes over time. It also reduces the impact of individual differences. And lastly, it is more powerful, or more sensitive, than a T test for independent samples.

There are also some disadvantages. The first disadvantage is called "carryover effects," which can occur when the participant's response in the second treatment is altered by the lingering aftereffects of the first treatment. For example, suppose you were looking at the impact of caffeine on memory: you first gave your participants 100 milligrams of caffeine and then tested their memory, and then gave those same participants 200 milligrams of caffeine and tested their memory again. If you don't leave enough time in between your treatments, participants could still have some remnants of caffeine in their bodies when they get that second treatment, and that is a carryover effect.

The second disadvantage of a related samples T test is called "progressive error," which can occur when the participant's performance changes consistently over time due to fatigue or practice. For example, say you're testing the impact of exercise on stress: you first have your participants run a mile and then measure their stress level, and then you have them do 30 minutes of cardio and measure their stress level again. If you don't leave enough time in between the two treatments, your participants can be very fatigued during the second treatment. And, again, this is called "progressive error."
Formulas

Basic Formula

JENNIFER ANN MORROW: The basic formula for a T test for related samples is as follows: X sub 2 minus X sub 1, where X sub 2 is the mean of the second treatment or second group and X sub 1 is the mean of the first treatment or first group. You then divide that by the standard error of the difference. Your mean of the second treatment

minus the mean of the first treatment is also known as the difference score. And your standard error of the difference is your sample variance of the difference scores divided by N, with the square root taken of that. The degrees of freedom for a T test for related samples is N minus 1. There's also another formula that you can use.

Computational Formula

JENNIFER ANN MORROW: The computational formula is as follows: First, you calculate the standard error of the difference. You take the sum of the squared difference scores and subtract from it the squared sum of the difference scores divided by N; then you divide that result by N times N minus 1 and take the square root. That becomes the denominator in your T statistic. You then take your mean difference score, which is the mean of the second group minus the mean of the first group, and divide that by your standard error of the difference. And that is your T test for related samples.

Recap

JENNIFER ANN MORROW: All right, let's recap. So far, I've gone over with you the definition of a T test for related samples. I've given you some of the alternative names for the statistic. I've given you two examples of research questions that can be addressed using this analysis. I've talked about the advantages and disadvantages of this analysis. And I've given you the formulas to calculate a T test for related samples. Now let's review the T test for related samples in more detail.

Assumptions

JENNIFER ANN MORROW: There are two basic assumptions that must be addressed. The first one is that observations within each treatment condition must be independent, so no two scores within one group or level should be related to each other. The second assumption is that the population distribution of difference scores, or D values, must be normally distributed. If you violate either of these assumptions, you should not be using a T test for related samples.
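The basic and computational formulas above can be sketched as a short function. This is a minimal illustration in Python, not part of the demonstration; the function name and structure are my own:

```python
# T test for related samples via the computational formula.
# For each participant, D = X2 - X1 (the difference score).
# Standard error of the difference:
#   sqrt( (sum(D^2) - (sum(D))^2 / N) / (N * (N - 1)) )
# t = mean(D) / standard error, with df = N - 1.
import math

def related_samples_t(x1, x2):
    """Return (t, df) for one group measured twice (x1, then x2)."""
    n = len(x1)
    d = [b - a for a, b in zip(x1, x2)]           # difference scores
    sum_d = sum(d)
    sum_d2 = sum(di * di for di in d)
    se = math.sqrt((sum_d2 - sum_d ** 2 / n) / (n * (n - 1)))
    t = (sum_d / n) / se                          # mean difference / SE
    return t, n - 1

# Data from the expressive-writing example later in the demonstration:
t, df = related_samples_t([9, 4, 5, 4, 5], [4, 1, 5, 0, 1])
print(f"t({df}) = {t:.2f}")  # t(4) = -3.72
```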

Effect Size

JENNIFER ANN MORROW: For a T test for related samples, you can report two measures of effect size: Cohen's D and the percentage of variance explained. Cohen's D is the mean of the second group minus the mean of the first group divided by the standard deviation of the difference scores. And the percentage of variance explained is equal to T squared divided by T squared plus your degrees of freedom. Now let's go over a couple of examples.

Examples

Using the Formula

JENNIFER ANN MORROW: For my first example, we'll use the formula to calculate the T test for related samples. My research question is: Does expressive writing reduce students' drinking? My null hypothesis is that the mean difference is equal to zero: expressive writing has no impact on drinking. My alternative hypothesis, or my research hypothesis, is that the mean difference doesn't equal zero: expressive writing has an impact on drinking. Now let's go through the example. I have five participants, so my degrees of freedom is N minus 1, which is 4. Here, my independent variable is time, with before and after expressive writing as the two levels, and my dependent variable is number of drinks per week. For time 1, my five participants have these scores for drinking: 9, 4, 5, 4, and 5. Then those same participants at time 2, after they have gone through the expressive writing intervention, report drinking 4 drinks, 1, 5, 0, and 1. The mean number of drinks for the first time is 5.4, and the mean number of drinks for the second time is 2.2. The sample variance of my difference scores is 3.7. I have an alpha level that I'm choosing a priori of 0.05, and I'm going to choose a two-tailed test. So I look in my T distribution table, and I find that my critical value is T equals plus or minus 2.776. That's the critical value that I must surpass in order for me to say that I have a significant result.
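The arithmetic in this example can be cross-checked with a few lines of standard-library Python. This is just a verification sketch (the variable names are mine) matching the numbers above:

```python
# Cross-checking the expressive-writing example.
from statistics import mean, variance
import math

time1 = [9, 4, 5, 4, 5]   # drinks per week before expressive writing
time2 = [4, 1, 5, 0, 1]   # drinks per week after

d = [b - a for a, b in zip(time1, time2)]   # difference scores (time 2 - time 1)
n = len(d)
df = n - 1                                  # 4
s2 = variance(d)                            # sample variance of D: 3.7

t = mean(d) / math.sqrt(s2 / n)             # -3.2 / sqrt(0.74)
r2 = t ** 2 / (t ** 2 + df)                 # percent of variance explained

print(f"t({df}) = {t:.2f}, r^2 = {r2:.2f}")  # t(4) = -3.72, r^2 = 0.78
```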
My formula for my T test for related samples is: T equals the mean of the second group minus the mean of the first group, divided by the square root of the sample variance over N. That is negative 3.2 over the square root of 0.74, which comes out to T equals negative 3.72. So how do I interpret this? It would be T, with 4 degrees of freedom, equals negative 3.72, comma, P less than 0.05, because it is significant, two-tailed. So what would that be

saying? It would be saying that the number of drinks significantly decreased from time 1, with a mean of 5.4, to time 2, with a mean of 2.2. Expressive writing had an impact. I can also calculate the percent of variance explained. Again, that's T squared divided by T squared plus my degrees of freedom, and in this case, my percent of variance explained is equal to 0.78.

Using SPSS

JENNIFER ANN MORROW: Now let's move on to an example using SPSS. First, open up your SPSS program. Now find the data set that you want to use to conduct your analysis: click on File, Open, Data, then find the data set that you want to use. Once you've found it, click on it, then click on Open, and make sure your data view window appears on your screen. For this example, I want to do a T test for related samples to test whether there's a change in self-esteem from first semester to second semester. My independent variable would be time, my two levels would be first semester and second semester, and my dependent variable would be self-esteem. I'm going to choose an alpha level of 0.05 and a two-tailed test. To calculate a T test for related samples, click on Analyze, Compare Means, Paired Samples T Test. Once you do that, your T test for related samples, or paired samples T test, dialog box will appear on your screen. Now what you have to do is click the two levels of your independent variable in the box on the left. Scroll down to find your first level, self-esteem first semester, and click on it. Once you click on it, you'll see at the bottom left, under Current Selections, it has now put the variable S esteem, which is self-esteem first semester. Go back up to the box and click on your second level, self-esteem second semester. Once you've clicked on that, you'll see it also says "Variable 2: S esteem 2," which is the variable self-esteem second semester.
Once both of the variables appear in the Current Selections box, click the right arrow, and your two variables will appear in the Paired Variables box on the right. Now click OK. As you'll see in your output window, SPSS will give you the results of your T test for related samples. Here's self-esteem first semester: your mean is 1.72, you have 149 participants, your standard deviation is 0.74, and your standard error of the mean is 0.06. For your second level, self-esteem second semester, your mean is 1.36. Your sample size is the same, 149, because the same participants are in both levels of your independent variable. Your standard deviation is 0.60, and your

standard error of the mean is 0.05. Now let's scroll down in your output box. We'll ignore the second table; it's not important. The third table holds the results of your T test for related samples. The first thing it will give you is the mean difference; here, in this case, it's 0.36. If you scroll to the right, you'll see your T value of 7.84; your degrees of freedom, N minus 1, which is 148; and your significance of 0.000. So how do we interpret this? You would have T, with 148 degrees of freedom, equals 7.84, comma, and then you could do one of two things: you could put the exact P value, P equals 0.000, or you could put P less than 0.001. Again, both are APA format; both are acceptable. And then comma, two-tailed. So what is this saying? It's saying that self-esteem significantly decreased from first semester, with a mean of 1.72 and a standard deviation of 0.74, to second semester, with a mean of 1.36 and a standard deviation of 0.60. I can also calculate the percent of variance explained. And, again, that's equal to T squared divided by T squared plus your degrees of freedom. So here, in this case, my percentage of variance explained would be 0.29.

Recap

JENNIFER ANN MORROW: Okay, let's recap. I went over with you the assumptions for a T test for related samples. I showed you how to calculate the different measures of effect size, and we did an example using the formula and using SPSS. We have now come to the end of our demonstration. Again, don't forget to practice on your own calculating a T test for related samples, both using the formula and SPSS. Thank you, and have a great day.
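For practicing outside SPSS, the same paired-samples analysis can be run with scipy, assuming it is available. The per-participant self-esteem scores from the demonstration are not listed in the transcript, so the arrays below are hypothetical stand-ins; scipy.stats.ttest_rel computes the difference as first minus second, the same direction SPSS uses in its paired-samples output:

```python
# A paired-samples (related-samples) T test outside SPSS.
# NOTE: these scores are hypothetical; the transcript does not list
# the per-participant self-esteem data used in the demonstration.
import numpy as np
from scipy import stats

sem1 = np.array([2.1, 1.8, 1.5, 2.4, 1.9, 1.6, 1.7, 2.0])  # first semester
sem2 = np.array([1.6, 1.2, 1.4, 1.9, 1.5, 1.1, 1.3, 1.6])  # second semester

# Same test as Analyze > Compare Means > Paired Samples T Test,
# computed as first minus second (so a decrease gives a positive t).
t, p = stats.ttest_rel(sem1, sem2)
df = len(sem1) - 1
r2 = t ** 2 / (t ** 2 + df)   # percent of variance explained

print(f"t({df}) = {t:.2f}, p = {p:.4f}, r^2 = {r2:.2f}")
```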