EPS 625 INTERMEDIATE STATISTICS FRIEDMAN TEST



The Friedman test is an extension of the Wilcoxon test. The Wilcoxon test can be applied to repeated-measures data when participants are assessed on two occasions or conditions, or to matched-subjects data when participants are matched in pairs. In contrast, the Friedman test allows for the analysis of repeated-measures data when participants are assessed on two or more occasions or conditions, or of matched-subjects data when participants are matched in pairs, triplets, or some greater number.

The Friedman test is applicable to repeated-measures designs and matched-subjects designs. With a repeated-measures design, each participant is a case in the SPSS data file and has scores on K variables: the scores obtained on each of the K occasions or conditions. The researcher is interested in determining whether subjects changed significantly across occasions (or conditions). With a matched-subjects design, participants are matched in sets of K, and each participant in a set is assessed once on a measure. Each set of participants is a case in the SPSS data file and has scores on K variables: the scores obtained on the measure by the participants within that set.

If the independent variable has only two occasions or conditions, no additional significance tests need to be conducted beyond the Friedman test. However, if a factor has more than two occasions or conditions and the overall test is significant, follow-up tests are usually conducted. These follow-up tests most frequently involve comparisons between pairs of group medians. For the Friedman test, we can use the Wilcoxon test to examine the unique pairs.

UNDERSTANDING THE FRIEDMAN TEST

For the Friedman test, the observations come in sets of K observations. Within a set, the observations are dependent, but between sets the observations are independent. The dependent variable must be measured on at least an ordinal scale, and the null hypothesis states that the population medians are equal for the K levels of the factor. The parametric alternative to the Friedman test is the one-way repeated-measures analysis of variance (RM-ANOVA). For either test, the analysis involves a factor with K levels, and we are interested in evaluating whether scores differ significantly across the levels of that factor. Both tests take into account the dependency among scores introduced by the repeated-measures or matched-subjects characteristics of the design.

To help understand how the Friedman test evaluates differences in medians among conditions, we will look at an example provided by Green and Salkind (2008). First, we must describe the data being analyzed. We will be using an example dealing with concerns for job pay, climate, and security to demonstrate the Friedman test (Lesson 45 from Green & Salkind). For this example, the first variable is defined as concern for job pay, rated on a scale from 1, indicating no concern, to 10, indicating ultimate concern; the second variable is defined as concern for job climate, rated on the same 1-to-10 scale; and the third variable is defined as concern for job security, also rated on the same 1-to-10 scale.
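The handout carries out the analysis in SPSS. Purely as an illustration of the same idea outside SPSS, the minimal Python sketch below runs a Friedman test with scipy; the variable names (pay, climate, security) and the ratings themselves are made-up placeholders, not the Green and Salkind (2008) data.

```python
# Minimal sketch, purely illustrative: a Friedman test outside SPSS using scipy.
# The variable names (pay, climate, security) and the 1-10 ratings below are
# made-up placeholders, not the Green & Salkind (2008) data set.
import numpy as np
from scipy.stats import friedmanchisquare

# Wide-format repeated-measures data: one array per condition,
# one position per participant (K = 3 job-concern ratings).
pay      = np.array([7, 5, 8, 6, 4, 7, 5, 6, 9, 3])
climate  = np.array([4, 3, 6, 5, 2, 4, 3, 5, 7, 1])
security = np.array([5, 4, 6, 4, 3, 5, 4, 6, 8, 2])

k = 3  # number of conditions (levels of the repeated-measures factor)
chi2, p = friedmanchisquare(pay, climate, security)
print(f"Friedman chi-square = {chi2:.3f}, df = {k - 1}, p = {p:.4f}")
```

A significant result would then be followed up with pairwise Wilcoxon tests, exactly as the handout does in SPSS below.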

ASSUMPTIONS UNDERLYING A FRIEDMAN TEST

Assumption 1: Each set of K observations must represent a random sample from a population and must be independent of every other set of K observations. If the data are from a repeated-measures design, the scores for each participant must be independent of the scores of every other participant. If the data are from a matched-subjects design, the scores from any matched set of participants must be independent of the scores of any other matched set. If the independence assumption is violated, the test is likely to yield inaccurate results. Note that the analysis does permit dependency among scores within a set.

Assumption 2: The chi-square approximation used for the Friedman test yields relatively accurate results to the extent that the sample size is large. The results should be reasonably accurate for moderate to large samples.

Assumption 3: The distribution of the difference scores between any pair of levels is continuous and symmetrical in the population. This assumption is required to avoid ties and to ensure that the test evaluates differences in medians rather than other characteristics of the distributions.
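Assumption 3 is not something the handout asks you to verify in SPSS, but if a rough, informal check is wanted, the sketch below computes the skewness of each pairwise set of difference scores (values near 0 are consistent with symmetry). The ratings are the same made-up placeholders used above, repeated so the snippet runs on its own.

```python
# Rough, informal check of Assumption 3 (symmetry of the pairwise difference
# scores): skewness near 0 is consistent with symmetry. The ratings below are
# illustrative placeholders, not the Green & Salkind (2008) data.
import numpy as np
from itertools import combinations
from scipy.stats import skew

ratings = {
    "pay":      np.array([7, 5, 8, 6, 4, 7, 5, 6, 9, 3]),
    "climate":  np.array([4, 3, 6, 5, 2, 4, 3, 5, 7, 1]),
    "security": np.array([5, 4, 6, 4, 3, 5, 4, 6, 8, 2]),
}
for (name_a, a), (name_b, b) in combinations(ratings.items(), 2):
    diff = a - b  # difference scores for this pair of levels
    print(f"{name_a} - {name_b}: skewness = {skew(diff):.2f}")
```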

EFFECT SIZE STATISTICS FOR THE FRIEDMAN TEST

SPSS computes Kendall's coefficient of concordance (Kendall's W), a strength-of-relationship index. The coefficient of concordance ranges from 0 to 1, with higher values indicating a stronger relationship.

THE RESEARCH QUESTION

The research question for this example is phrased to reflect differences in medians between conditions: Do employees' medians on concern for job pay, job climate, and job security ratings differ in the population?

CONDUCTING THE K RELATED-SAMPLES (FRIEDMAN) TEST IN SPSS

To conduct the Friedman test and follow-up tests using the Wilcoxon test in SPSS, use the following steps:

- Open the dataset in SPSS to be used for the Friedman test analysis
- Click Analyze, click (mouse over) Nonparametric Tests, and then click K Related Samples
  o You should now be in the Test for Several Related Samples dialog box
- Click the first variable (concern for job pay); then, holding down the Ctrl key, click the other two variables (concern for job climate and concern for job security), and click the arrow button to move them to the Test Variables: box
- Under Test Type, be sure that Friedman is checked (this is the default)
- Under Test Type, also check Kendall's W
- Click Statistics
- Check Descriptive
- Click Continue
- Click OK

You are now ready to analyze the output data.
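Kendall's W as reported by SPSS is related to the Friedman chi-square by the standard identity W = χ² / [N(k − 1)], where N is the number of cases and k the number of conditions. With the values reported in the output that follows (χ² = 13.960, k = 3, and N = 30, the total shown later in the Wilcoxon ranks tables), this reproduces the W of .233. A small sketch of the arithmetic:

```python
# Relationship between the Friedman chi-square and Kendall's W:
# W = chi-square / (N * (k - 1)). The chi-square of 13.960 and N = 30 are
# taken from the SPSS output reproduced in this handout.
def kendalls_w(chi_square: float, n_cases: int, k_conditions: int) -> float:
    """Kendall's coefficient of concordance (ranges from 0 to 1)."""
    return chi_square / (n_cases * (k_conditions - 1))

print(round(kendalls_w(13.960, 30, 3), 3))  # 0.233, matching the SPSS output
```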

SPSS OUTPUT

NPar Tests

Descriptive Statistics
            N     Mean   Std. Deviation   Minimum   Maximum
Pay         30    5.67   1.493            3         9
Climate     30    4.20   1.750            1         8
Security    30    4.50   1.834            1         8

Friedman Test

Ranks
            Mean Rank
Pay         2.50
Climate     1.68
Security    1.82

Test Statistics(a)
N             30
Chi-Square    13.960
df            2
Asymp. Sig.   .001
a. Friedman Test

Kendall's W Test

Ranks
            Mean Rank
Pay         2.50
Climate     1.68
Security    1.82

Test Statistics
N               30
Kendall's W(a)  .233
Chi-Square      13.960
df              2
Asymp. Sig.     .001
a. Kendall's Coefficient of Concordance

The Friedman test, which evaluated differences in medians among the three job concerns, was significant, χ²(2, N = 30) = 13.96, p < .01. Kendall's W was .23, indicating fairly strong differences among the three concerns. Next, follow-up tests need to be conducted to evaluate comparisons between pairs of medians. As indicated earlier, we will use the Wilcoxon test.

CONDUCTING PAIRWISE COMPARISONS AFTER OBTAINING A SIGNIFICANT FRIEDMAN TEST IN SPSS

To conduct the Wilcoxon tests in SPSS, use the following steps:

- Click Analyze, click (mouse over) Nonparametric Tests, and then click 2 Related Samples
  o You should now be in the Two-Related-Samples Tests dialog box
- Holding down the Ctrl key, click the pay-concern and climate-concern variables, and click the arrow button to move the pair to the Test Pair(s) List: box
- Holding down the Ctrl key, click the pay-concern and security-concern variables, and click the arrow button to move the pair to the Test Pair(s) List: box
- Holding down the Ctrl key, click the climate-concern and security-concern variables, and click the arrow button to move the pair to the Test Pair(s) List: box
- Under Test Type, be sure that Wilcoxon is checked (this is the default)
- Click Options
- Under Statistics, check Descriptive
- Click Continue
- Click OK

You are now ready to analyze the output data.

SPSS OUTPUT

NPar Tests

Descriptive Statistics
            N     Mean   Std. Deviation   Minimum   Maximum
Pay         30    5.67   1.493            3         9
Climate     30    4.20   1.750            1         8
Security    30    4.50   1.834            1         8

Wilcoxon Signed Ranks Test

Ranks
                                         N      Mean Rank   Sum of Ranks
Climate - Pay        Negative Ranks      19a    11.21       213.00
                     Positive Ranks      3b     13.33       40.00
                     Ties                8c
                     Total               30
Security - Pay       Negative Ranks      20d    13.88       277.50
                     Positive Ranks      6e     12.25       73.50
                     Ties                4f
                     Total               30
Security - Climate   Negative Ranks      10g    10.85       108.50
                     Positive Ranks      13h    12.88       167.50
                     Ties                7i
                     Total               30
a. Climate < Pay        b. Climate > Pay        c. Climate = Pay
d. Security < Pay       e. Security > Pay       f. Security = Pay
g. Security < Climate   h. Security > Climate   i. Security = Climate

Test Statistics(c)
                         Climate - Pay   Security - Pay   Security - Climate
Z                        -2.822a         -2.626a          -.923b
Asymp. Sig. (2-tailed)   .005            .009             .356
a. Based on positive ranks.
b. Based on negative ranks.
c. Wilcoxon Signed Ranks Test

The Least Significant Difference (LSD) procedure controls adequately for Type I error across pairwise comparisons when there are three levels and the overall test is significant. In our example, two of the three comparisons (Pay vs. Climate and Pay vs. Security) were significant at the .05 alpha level. If the number of levels exceeds three, the Bonferroni or Holm's sequential Bonferroni method would be required to control adequately for Type I error.
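For completeness, here is a minimal Python sketch of the same follow-up strategy outside SPSS: pairwise Wilcoxon signed-rank tests on the three condition pairs, followed by a Holm sequential-Bonferroni adjustment of the p-values (the correction the handout recommends once a factor has more than three levels; with only three levels the LSD approach above needs no adjustment). The ratings are the same made-up placeholders as in the earlier sketches, and the Holm step uses statsmodels.

```python
# Pairwise Wilcoxon signed-rank follow-ups with a Holm adjustment.
# The ratings are illustrative placeholders, not the Green & Salkind data.
import numpy as np
from itertools import combinations
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

ratings = {
    "pay":      np.array([7, 5, 8, 6, 4, 7, 5, 6, 9, 3]),
    "climate":  np.array([4, 3, 6, 5, 2, 4, 3, 5, 7, 1]),
    "security": np.array([5, 4, 6, 4, 3, 5, 4, 6, 8, 2]),
}

pairs, p_values = [], []
for (name_a, a), (name_b, b) in combinations(ratings.items(), 2):
    stat, p = wilcoxon(a, b)  # Wilcoxon signed-rank test on the paired scores
    pairs.append(f"{name_a} vs. {name_b}")
    p_values.append(p)

# Holm sequential-Bonferroni adjustment of the three pairwise p-values.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for pair, raw, adj, sig in zip(pairs, p_values, p_adj, reject):
    print(f"{pair}: raw p = {raw:.4f}, Holm-adjusted p = {adj:.4f}, significant: {sig}")
```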

APA RESULTS

Based on the results produced from the above example, the APA-style results would be:

A Friedman test was conducted to evaluate differences in medians among the job concerns for pay (Median = 5.50), climate (Median = 4.00), and security (Median = 4.00). The test was significant, χ²(2, N = 30) = 13.96, p < .01, and Kendall's coefficient of concordance of .23 indicated fairly strong differences among the three concerns. Follow-up pairwise comparisons were conducted using Wilcoxon tests, controlling for Type I error across these comparisons at the .05 level with the LSD procedure. The median concern for pay was significantly greater than the median concern for climate, p < .01, and the median concern for security, p < .01, but the median concern for climate did not differ significantly from the median concern for security, p = .356.

* The following Statistics table is needed to write the above results. It is obtained by clicking Analyze, clicking Descriptive Statistics, and then clicking Frequencies. Select the three variables and request their medians under the Statistics option.

Statistics
             Pay     Climate   Security
N  Valid     30      30        30
   Missing   0       0         0
Median       5.50    4.00      4.00

REFERENCE

Green, S. B., & Salkind, N. J. (2008). Using SPSS for Windows and Macintosh: Analyzing and understanding data (5th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.