Statistical Analysis of Independent Groups in SPSS
1 Statistical Analysis of Independent Groups in SPSS. Peter Samuels, 30th October 2015. Based on materials provided by Coventry University and Loughborough University under a National HE STEM Programme Practice Transfer Adopters grant. Overview: a lab session teaching you how to analyse differences in the means/medians of two or more independent samples of a single scale variable. This is a common student activity and is self-contained: there are only a finite number of possibilities.
2 Workshop outline. Two groups: descriptives; assumption checking (for parametric tests); independent samples t-test; Mann-Whitney U test. Several groups: descriptives; assumption checking (for parametric tests); one-way ANOVA; Kruskal-Wallis test; post hoc testing. The data analysis process for 2 independent groups: descriptive statistics, then assumption checking; if the assumptions pass, parametric testing (t-test); if they fail, nonparametric testing (Mann-Whitney U test).
3 Example 1: 2 stool designs. A research project involving two different designs of stool, tested by 40 people (20 people per stool). Each person was assigned to assess one product, providing an overall performance score. Create an error bar chart: open the file TwoStools.spv, then Graphs > Legacy Dialogs > Error Bar. Click on Define. Put PerformanceScore as the Variable and Design as the Category Axis. Click OK and go to the output window.
4 Interpretation: confidence intervals of the means of the performance scores. The means of the samples are the circles; we are 95% confident the means of the populations lie between the whiskers. As the intervals overlap we should suspect the test will come back negative (an informal indication, not failsafe!). Also observe that the intervals are roughly equal. Robustness: parameter-based statistical tests make certain assumptions in their underlying models. However, they often work well in other situations where these assumptions are violated; this is known as robustness. Robustness conditions depend upon the test being used, and there are different opinions on what they are.
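The 95% intervals that SPSS draws on the error bar chart can also be computed directly. A minimal sketch in Python using scipy, with hypothetical performance scores (not the workshop data):

```python
from scipy import stats

def mean_ci_95(sample):
    """95% confidence interval for a population mean using the
    t distribution - this is what SPSS's error bars display."""
    n = len(sample)
    m = sum(sample) / n
    se = stats.sem(sample)                      # standard error of the mean
    half_width = stats.t.ppf(0.975, df=n - 1) * se
    return m - half_width, m + half_width

# Hypothetical performance scores for one design
design1 = [62, 70, 55, 68, 73, 60, 66, 71, 58, 65]
low, high = mean_ci_95(design1)
print(f"mean = {sum(design1) / len(design1):.1f}, 95% CI = ({low:.1f}, {high:.1f})")
```

If the intervals for two groups computed this way overlap, the informal suspicion described above applies: the formal test may well come back negative.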
5 Assumption checking. Parametric tests are more sensitive than nonparametric tests but require certain assumptions to hold, so we need to check these assumptions first. This is not required for this test with equal group sizes of at least 25, due to robustness exceptions (Sawilowsky and Blair, 1992). Here our groups were equal but only of size 20, so we need to test for normality. For small sample sizes the best test is Shapiro-Wilk. Reference: Sawilowsky, S. S. and Blair, R. C. (1992) A more realistic look at the robustness and Type II error properties of the t test to departures from population normality. Psychological Bulletin, 111(2). Assumption checking in SPSS: Analyze > Descriptive Statistics > Explore. Put PerformanceScore in the Dependent List and Design in the Factor List, and select Plots. Remove Stem-and-leaf; select Histogram and Normality plots with tests.
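The Shapiro-Wilk check that SPSS's Explore procedure runs can be sketched in Python with scipy's `shapiro` function. The scores below are hypothetical stand-ins for the two designs:

```python
from scipy import stats

# Hypothetical performance scores for the two designs (n = 20 each)
design1 = [55, 61, 58, 64, 70, 66, 59, 62, 68, 57,
           63, 60, 65, 69, 56, 62, 67, 61, 58, 64]
design2 = [48, 52, 50, 55, 60, 47, 53, 58, 49, 51,
           54, 57, 46, 50, 56, 52, 59, 48, 53, 55]

for name, sample in [("Design 1", design1), ("Design 2", design2)]:
    stat, p = stats.shapiro(sample)
    # p > 0.05: no evidence against normality, so a parametric test is OK
    print(f"{name}: W = {stat:.3f}, p = {p:.3f}")
```

As in SPSS, a Sig. (p) value above 0.05 means we retain the hypothesis of normality for that group.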
6 Add a fitted normal curve to the histograms: double click on the first histogram in the output window (this opens the Chart Editor window) and select the button that adds a fitted normal curve. Close the Properties window and the Chart Editor window, then repeat with the other histogram. The Design 1 histogram appears to be approximately normally distributed. The Design 2 histogram appears to be a bit skewed to the right; however, its skewness is less than twice its standard error.
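The rule of thumb used here (skewness acceptable if its absolute value is below twice its standard error) can be checked by hand. A sketch in Python; the standard-error formula is the one SPSS uses, and the sample is an arbitrary symmetric example:

```python
import math
from scipy import stats

def skewness_check(sample):
    """Rule of thumb: skewness is acceptable when its absolute
    value is below twice its standard error."""
    n = len(sample)
    g = stats.skew(sample, bias=False)  # bias-corrected, as SPSS reports
    se = math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    return g, se, abs(g) < 2 * se

# For n = 20 (our group size) SPSS reports a standard error of 0.512
g, se, ok = skewness_check(list(range(20)))
print(f"skewness = {g:.3f}, SE = {se:.3f}, acceptable: {ok}")
```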
7 The null and alternative hypotheses. Statistical testing is about making a decision about the significance of a data feature or summary statistic. We usually assume that this was just a random event, then seek to measure how unlikely such an event was. The statement of this position is known as the null hypothesis and is written H0. In statistical testing we make a decision about whether to accept or reject the null hypothesis based on the probability (or P-) value of the test statistic. The logical opposite of the null hypothesis is known as the alternative hypothesis.
8 Standard significance levels and the null hypothesis (H0):
P-value > 0.1: not significant. Retain H0 (no evidence to reject H0). Example: Chris Froome.
P-value between 0.05 and 0.1: not significant. Retain H0 (weak evidence to reject H0). Example: Plebgate libel trial.
P-value between 0.01 and 0.05: significant at 95%. Reject H0 at 95% confidence (evidence to reject H0). Example: climate change.
P-value between 0.001 and 0.01: significant at 99%. Reject H0 at 99% confidence (strong evidence to reject H0). Example: Plebgate police trial.
P-value < 0.001: significant at 99.9%. Reject H0 at 99.9% confidence (very strong evidence to reject H0). Example: Higgs boson.
The Shapiro-Wilk test is negative for both designs as the Sig. (or probability) values are both > 0.05. Therefore we can use the appropriate parametric test (the independent samples t-test). These tests are not very sensitive with small sample sizes and over-sensitive with larger samples (e.g. > 100). For large samples the probability values should be interpreted alongside the histograms with fitted normal curves and Q-Q plots; see the normality checking sheet.
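The evidence scale above maps directly to a small lookup function. A sketch in Python (the wording follows the "informal interpretation" column):

```python
def interpret_p(p):
    """Map a p-value to the informal evidence scale used above."""
    if p > 0.1:
        return "No evidence to reject H0"
    if p > 0.05:
        return "Weak evidence to reject H0"
    if p > 0.01:
        return "Evidence to reject H0 (95% confidence)"
    if p > 0.001:
        return "Strong evidence to reject H0 (99% confidence)"
    return "Very strong evidence to reject H0 (99.9% confidence)"

print(interpret_p(0.03))
```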
9 The independent samples t-test. Applies to independent subjects, each with one scale-based data value. It tests the difference between the means of the two samples; the samples can be different sizes. Here: performance scores for Designs 1 and 2. It assumes normality. Null hypothesis (H0): the means of the performance scores for the two designs are equal. There are two variants, depending upon whether the variances of the two designs can be assumed to be equal (use Levene's test first, with H0: the variances are equal); all of this is done together in SPSS. Analyze > Compare Means > Independent Samples T Test. Add PerformanceScore as the Test Variable and Design as the Grouping Variable. Select Define Groups and add 1 for Group 1 and 2 for Group 2.
10 SPSS automatically computes Levene's test and outputs both versions of the t-test. Levene's test is not significant at 95% (so we retain its H0), so equal variances can be assumed (now look at this row). The t-test is significant at 95% (p between 0.05 and 0.01). Interpretation: there is evidence that the mean performance scores for the stool designs are different (this differs from our informal interpretation of the error bar chart). Nonparametric testing: a type of statistical inference which does not make any assumptions about the data coming from a distribution. It often applies to category-based data (nominal and ordinal) but can also apply to scale-based data if test assumptions are not met. Advantage: no need to check assumptions. Disadvantages: results are generally less sensitive (higher p-values), and it cannot handle more complex data structures (such as two-way ANOVA). The appropriate test here is the Mann-Whitney U test.
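The two-step decision SPSS performs (Levene's test, then the matching t-test row) can be sketched in Python with scipy; the scores are hypothetical:

```python
from scipy import stats

# Hypothetical scores for the two stool designs
design1 = [55, 61, 58, 64, 70, 66, 59, 62, 68, 57]
design2 = [48, 52, 50, 55, 60, 47, 53, 58, 49, 51]

# Levene's test first: H0 is that the variances are equal
lev_stat, lev_p = stats.levene(design1, design2)
equal_var = lev_p > 0.05          # retain H0 -> assume equal variances

# SPSS reports both rows; use the one Levene's test points to
t_stat, t_p = stats.ttest_ind(design1, design2, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}, t = {t_stat:.3f}, p = {t_p:.3f}")
```

With `equal_var=False`, `ttest_ind` runs Welch's variant, which is the row SPSS labels "Equal variances not assumed".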
11 Mann-Whitney U test. A nonparametric test of two independent samples of ordinal or scale-based data. It tests whether there is an increasing/decreasing relationship between the two samples. You need at least about 10 data categories for ordinal variables; otherwise use the Chi-squared test. It is the alternative to an independent samples t-test for scale-based data if the assumptions are not met (not the case here; it is just shown for illustration purposes). The samples can be different sizes. Null hypothesis: Design 2 performance scores are equally likely to be higher or lower than Design 1 performance scores. Running the Mann-Whitney U test: select Analyze > Nonparametric Tests > Independent Samples. On the Fields tab, add PerformanceScore to the Test Fields list and Design to the Groups list, then select Run.
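The same test is available in scipy as `mannwhitneyu`. A sketch with the hypothetical scores used above:

```python
from scipy import stats

design1 = [55, 61, 58, 64, 70, 66, 59, 62, 68, 57]
design2 = [48, 52, 50, 55, 60, 47, 53, 58, 49, 51]

# H0: a score from one design is equally likely to be
# higher or lower than a score from the other
u_stat, p = stats.mannwhitneyu(design1, design2, alternative="two-sided")
print(f"U = {u_stat}, p = {p:.4f}")
```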
12 The correct test has been run. The Sig. value is about the same as the value for the independent samples t-test (we expected it to be higher). SPSS helpfully states the null hypothesis decision, but unhelpfully states the default significance level used (which can be misleading). The data analysis process for several independent groups: descriptive statistics, then assumption checking; if the assumptions pass, parametric testing; if they fail, nonparametric testing; if significant differences are found, post hoc testing.
13 Example 2: 3 stool designs. A research project involving three different designs of a new product, tested by 60 people (20 people per product). Each person was assigned to assess one product, providing an overall performance score. Open the file ThreeStools.spv and create descriptive statistics and an error bar chart as before. What is one-way ANOVA? An extension of t-tests to several groups, usually with independent measures, which accounts for variation both within and between groups. Consider 95% confidence intervals for 3 groups of measurements: these confidence intervals do not overlap, but does this mean we can conclude they are not all from the same population?
14 Initial observations. There appear to be differences between the sample means, i.e. variation between groups, but there is also variation within groups. Can we conclude that there are differences between groups (i.e. that they come from populations with different means)? We need a systematic, objective approach; this is known as ANOVA. The name comes from ANalysis Of VAriance (a bit confusing, because it sounds like a variance test rather than a means test). Introduction to ANOVA: it is better than doing lots of two-sample tests; e.g. 6 groups would require 15 two-sample tests. For every test there is a 0.05 probability that we reject H0 when it should be retained (assuming H0 is true). Doing several tests increases the probability of making a wrong inference of significance (a Type I error). E.g. the probability of a Type I error with 6 groups, assuming they are all equally randomly distributed, is 1 - 0.95^15 = 0.537, i.e. more than 1 in 2.
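The familywise Type I error calculation above is simple arithmetic, sketched here in Python:

```python
# Probability of at least one false positive (Type I error) when
# running k independent tests, each at the 0.05 level, if H0 is true
def familywise_error(k_tests, alpha=0.05):
    return 1 - (1 - alpha) ** k_tests

# 6 groups need 6 * 5 / 2 = 15 pairwise comparisons
print(round(familywise_error(15), 3))  # 0.537
```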
15 The ANOVA model: y_ij = μ + m_i + e_ij, where y_ij denotes the performance score for the jth measurement of the ith design, the parameter m_i denotes how the performance score for design i differs from the overall mean μ, and e_ij denotes the error (or residual) for the jth measurement of the ith design. The ANOVA model assumes that all these errors are normally distributed with zero mean and equal variances. Test hypothesis: in our example, we need to test the hypothesis H0: m_1 = m_2 = m_3 = 0, or, more simply, that the product score population means are the same. Intuitively, this is done by looking at the difference between means relative to the difference between observations, i.e. is the mean-to-mean variation greater than what you would expect by chance?
16 Assumptions (similar to the independent samples t-test assumptions): 1. The measurements for each group are normally distributed. However, if there are many groups there is a danger of Type I errors. 2. The errors for the whole data set are normally distributed (this theoretically follows from Assumption 1, but it is worth testing separately with small samples). To calculate these errors we first need to estimate the group means. 3. The variances of each group are equal (we can still use a version of ANOVA even if this one fails). Assumption 1: check the normality of each group. There is no evidence that the individual groups are not normally distributed.
17 Assumption 2: testing the errors for normality. First create the residuals: select Analyze > General Linear Model > Univariate and add the variables as shown. Select Save and choose Unstandardised residuals (these are based on estimates of m_i). Then select Analyze > Descriptive Statistics > Explore and add the residual variable as shown, but with no factor. Select Plots, then Histogram and Normality plots with tests as before. Then add a normal curve to the histogram as before.
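The residuals SPSS saves are just each observation minus its estimated group mean. A minimal sketch in Python with hypothetical (design, score) pairs:

```python
from statistics import mean

# Hypothetical (design, score) pairs
data = [(1, 55), (1, 61), (1, 58),
        (2, 48), (2, 52), (2, 50),
        (3, 70), (3, 66), (3, 74)]

# Estimate each group mean, then subtract it from every observation
groups = {d for d, _ in data}
group_means = {d: mean(s for g, s in data if g == d) for d in groups}
residuals = [s - group_means[d] for d, s in data]
print(residuals)  # these are the "unstandardised residuals"
```

The residuals within each group sum to zero by construction; it is this pooled list that gets checked for normality in Assumption 2.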
18 There is evidence that the residuals are not normally distributed from the Shapiro-Wilk test (p < 0.05), even though the degrees of freedom have been reduced slightly. The Kolmogorov-Smirnov test is even more significant. The kurtosis (peakedness) looks a bit high. Formally we should compare the absolute value of the kurtosis with twice its standard error; this is significant as it is higher.
19 Assumption 3: equal variances. Analyze > Compare Means > One-Way ANOVA. Add PerformanceScore to the Dependent List and Design as the Factor. Select Options and Homogeneity of variance test. This carries out a Levene's test for homogeneity of variance (similar to the t-test), with null hypothesis: the variances are equal. It is significant at 95% (p-value < 0.05), so we have evidence to reject the assumption of equality of variances.
20 Robustness of ANOVA. ANOVA is quite robust to changes in skewness but not to changes in kurtosis. Thus it should not be used when kurtosis > 2 × (standard error of kurtosis) for any group or for the errors. Otherwise, provided the group sizes are equal and there are at least 20 degrees of freedom, ANOVA is quite robust to violations of its assumptions. However, the variances must still be equal. Source: Field, A. (2013) Discovering Statistics using SPSS. 4th edn. London: SAGE. Robustness calculation: the kurtosis condition is met for Designs 1, 2 and 3 but not for the errors. The group sizes are equal, and the total degrees of freedom = 60 - 1 = 59 > 20. However, standard ANOVA still cannot be used because the variances are not equal.
21 Summary of findings on the ANOVA assumptions: 1. Normality of groups: no evidence of non-normality. 2. Normality of errors: evidence of non-normality. 3. Equality of variances: evidence of non-equality. Robustness: the kurtosis of the errors is too high. One-way ANOVA: if all 3 assumptions (or the robustness exceptions to non-normality) are OK then use standard one-way ANOVA: Analyze > Compare Means > One-Way ANOVA; under Options select Descriptive. (Shown for illustration purposes.)
22 The significance level is < 0.001, so there is very strong evidence of differences in performance score between the three designs. What if these assumptions are in doubt? If the normality assumptions (or their robustness exceptions) are in doubt, use a nonparametric test: Kruskal-Wallis or median if there is no trend in the groups, or Jonckheere-Terpstra if you are looking for a trend (e.g. mean of group 1 < mean of group 2 < mean of group 3, etc.). These are available under Analyze > Nonparametric Tests > Independent Samples. If the equality of variances assumption is in doubt, use the Brown-Forsythe or Welch test: select ANOVA, click on the Options button and select the Brown-Forsythe and Welch options.
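The standard one-way ANOVA is available in scipy as `f_oneway`. A sketch with hypothetical scores for the three designs:

```python
from scipy import stats

# Hypothetical scores for the three designs
design1 = [55, 61, 58, 64, 70, 66, 59, 62, 68, 57]
design2 = [48, 52, 50, 55, 60, 47, 53, 58, 49, 51]
design3 = [70, 75, 72, 78, 74, 71, 77, 73, 76, 79]

# H0: all three population means are equal
f_stat, p = stats.f_oneway(design1, design2, design3)
print(f"F = {f_stat:.2f}, p = {p:.6f}")
```

A p-value below 0.001 would correspond to the "very strong evidence" conclusion above.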
23 Nonparametric one-way ANOVA. We should use the Kruskal-Wallis or median tests, as there is no trend to observe between these designs. The median test is cruder than Kruskal-Wallis and should only be preferred when ranges of extreme values have been summarised together, which was not the case here (see Statistica, n.d.). Select Analyze > Nonparametric Tests > Independent Samples. Add PerformanceScore as the Test Field and Design as the Groups variable on the Fields tab. On the Settings tab select Customize tests and the Kruskal-Wallis test, then select Run. This returns a significance value < 0.001 (ignore the note below the result, as before): very strong evidence that there are differences between the groups (as before).
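The Kruskal-Wallis test is scipy's `kruskal`, sketched here with the same hypothetical three-design scores:

```python
from scipy import stats

design1 = [55, 61, 58, 64, 70, 66, 59, 62, 68, 57]
design2 = [48, 52, 50, 55, 60, 47, 53, 58, 49, 51]
design3 = [70, 75, 72, 78, 74, 71, 77, 73, 76, 79]

# Rank-based test: no normality assumption is needed
h_stat, p = stats.kruskal(design1, design2, design3)
print(f"H = {h_stat:.2f}, p = {p:.6f}")
```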
24 ANOVA with unequal variances. Our data set violated the normality of errors assumption, but there were also differences in variances. The Brown-Forsythe and Welch tests should only be used with unequal variances if the data and errors are normally distributed (shown for illustration purposes here). Under Options select the Brown-Forsythe and Welch tests. Both tests are again significant at 99.9%: very strong evidence that the means are not equal. Generally the Welch test is slightly better unless there is one group with an extreme mean and a large variance, which was not the case here, so the Welch test should be preferred (Field, 2013: 443).
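scipy does not ship a Welch ANOVA, but the statistic can be computed from Welch's (1951) formula. This is a hand-rolled sketch, not SPSS's implementation; the groups are hypothetical:

```python
from scipy import stats
from statistics import mean, variance

def welch_anova(*groups):
    """Welch's one-way ANOVA - robust to unequal variances,
    but still assumes each group is normally distributed."""
    k = len(groups)
    n = [len(g) for g in groups]
    m = [mean(g) for g in groups]
    s2 = [variance(g) for g in groups]           # sample variances (ddof = 1)
    w = [ni / s2i for ni, s2i in zip(n, s2)]     # precision weights
    W = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, m)) / W
    A = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, m)) / (k - 1)
    lam = 3 * sum((1 - wi / W) ** 2 / (ni - 1)
                  for wi, ni in zip(w, n)) / (k ** 2 - 1)
    f = A / (1 + 2 * (k - 2) * lam / 3)
    df1, df2 = k - 1, 1 / lam
    return f, stats.f.sf(f, df1, df2)

f, p = welch_anova([55, 61, 58, 64], [48, 52, 50, 47], [70, 75, 72, 78])
print(f"Welch F = {f:.2f}, p = {p:.4f}")
```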
25 Multiple comparisons. What if we conclude there are differences between the groups? We don't know which pairs are different. We can do post hoc tests to compare each pair of groups; these are similar to 2-sample tests but with adjusted significance levels for the multiple testing issue. Note: you should only run post hoc tests if you obtain a positive result from the ANOVA (or equivalent) test. Which post hoc test? For equal group sizes and similar variances, use Tukey (HSD) or REGWQ, or, for guaranteed control over Type I errors (more conservative), use Bonferroni. For slightly different group sizes, use Gabriel. For very different group sizes, use Hochberg's GT2. For unequal variances, use Games-Howell (also recommended as a backup in other circumstances). Source: (Field, 2013: 459).
26 Our data set violated the normality of errors assumption, but there were also significant differences in variances. Try using the Games-Howell post hoc test (shown for illustration purposes only): run the One-Way ANOVA as before, then select Post Hoc and Games-Howell. Results: very strong evidence of differences between groups 1 and 3; evidence of differences between groups 1 and 2; weak evidence of differences between groups 2 and 3.
27 Nonparametric post hoc testing. This is the correct post hoc testing for our data set. Double click on this output box in the output window: this opens the Model Viewer window. Change the View to Pairwise Comparisons.
28 The output should then look like this. Concentrate on the Adjusted Sig. values: weak evidence of a difference between Design 1 and Design 2; very strong evidence of a difference between Design 1 and Design 3; no evidence of a difference between Design 2 and Design 3. Note: SPSS version 22 does not use the Mann-Whitney U test in its Kruskal-Wallis post hoc testing but a variant called the Dunn-Bonferroni test. The Sig. values given by the pairwise comparison in the Model Viewer are higher than those for the Mann-Whitney U test (e.g. the value for Designs 1 and 2 was found earlier to be 0.013; note that we need one more decimal place to calculate the correction). However, we can still use their relative size to decide for which pairs to run an individual post hoc test using the Mann-Whitney U test. For our dataset, we do not need to run Designs 2 and 3 because we know it will be non-significant even with a correction for this bug, but we should run Designs 1 and 3. To obtain the adjusted Sig. values, multiply the Sig. value by the number of pairs.
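The "multiply the Sig. value by the number of pairs" step above is the Bonferroni adjustment, sketched here (capping the result at 1, since an adjusted p-value cannot exceed 1):

```python
# Bonferroni adjustment for pairwise post hoc tests
def bonferroni_adjust(p, n_pairs):
    return min(1.0, p * n_pairs)

# e.g. the Designs 1 vs 2 Mann-Whitney value of 0.013 with 3 pairs
print(round(bonferroni_adjust(0.013, 3), 3))  # 0.039
```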
29 Legacy Mann-Whitney U test. The Mann-Whitney U test will not work in the new dialog with three groups, so use the legacy dialog instead: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Independent Samples. Choose groups 1 and 3; Mann-Whitney U is the default test. Double click on the Exact Sig. output to check the returned value to one more decimal place. Example 2 summary of results, comparing the three post hoc tests (Games-Howell; Kruskal-Wallis with Dunn-Bonferroni; pairwise Mann-Whitney U with Bonferroni adjustment): all three tests returned p < 0.001 for Designs 1 and 3, while Designs 2 and 3 were not tested with the pairwise Mann-Whitney U. According to the preferred (second and third) tests there is: very strong evidence of a difference between Designs 1 and 3; (weak) evidence of a difference between Designs 1 and 2; no evidence of a difference between Designs 2 and 3.
30 Recap. We have considered, for two groups: descriptives; assumption checking (for parametric tests); the independent samples t-test; the Mann-Whitney U test. For several groups: descriptives; assumption checking (for parametric tests); one-way ANOVA; the Kruskal-Wallis test; post hoc testing. statstutor resources: Normality checking (draft electronic copy provided); Normality checking solutions (draft electronic copy provided); Independent samples t-test (paper copy provided); Mann-Whitney U test (available from the statstutor website); One-way ANOVA (paper copy provided); One-way ANOVA additional material (available from the statstutor website); Kruskal-Wallis test (draft electronic copy provided).
31 References. IBM (2014) Post hoc comparisons for the Kruskal-Wallis test. IBM developerWorks (2015) Bonferroni with Mann-Whitney? Field, A. (2013) Discovering Statistics using SPSS: (And sex and drugs and rock 'n' roll). 4th edn. London: SAGE. Sawilowsky, S. S. and Blair, R. C. (1992) A more realistic look at the robustness and Type II error properties of the t test to departures from population normality. Psychological Bulletin, 111(2). Statistica (n.d.) Statistica Help: Nonparametric Statistics Notes: Kruskal-Wallis ANOVA by Ranks and Median Test.
Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,
More informationAnalysis of Variance ANOVA
Analysis of Variance ANOVA Overview We ve used the t -test to compare the means from two independent groups. Now we ve come to the final topic of the course: how to compare means from more than two populations.
More informationAdditional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm
Mgt 540 Research Methods Data Analysis 1 Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm http://web.utk.edu/~dap/random/order/start.htm
More informationChapter 23. Inferences for Regression
Chapter 23. Inferences for Regression Topics covered in this chapter: Simple Linear Regression Simple Linear Regression Example 23.1: Crying and IQ The Problem: Infants who cry easily may be more easily
More informationThe Chi-Square Test. STAT E-50 Introduction to Statistics
STAT -50 Introduction to Statistics The Chi-Square Test The Chi-square test is a nonparametric test that is used to compare experimental results with theoretical models. That is, we will be comparing observed
More informationMultivariate Analysis of Variance (MANOVA)
Chapter 415 Multivariate Analysis of Variance (MANOVA) Introduction Multivariate analysis of variance (MANOVA) is an extension of common analysis of variance (ANOVA). In ANOVA, differences among various
More informationWe are often interested in the relationship between two variables. Do people with more years of full-time education earn higher salaries?
Statistics: Correlation Richard Buxton. 2008. 1 Introduction We are often interested in the relationship between two variables. Do people with more years of full-time education earn higher salaries? Do
More informationStatCrunch and Nonparametric Statistics
StatCrunch and Nonparametric Statistics You can use StatCrunch to calculate the values of nonparametric statistics. It may not be obvious how to enter the data in StatCrunch for various data sets that
More informationSPSS TUTORIAL & EXERCISE BOOK
UNIVERSITY OF MISKOLC Faculty of Economics Institute of Business Information and Methods Department of Business Statistics and Economic Forecasting PETRA PETROVICS SPSS TUTORIAL & EXERCISE BOOK FOR BUSINESS
More informationDATA INTERPRETATION AND STATISTICS
PholC60 September 001 DATA INTERPRETATION AND STATISTICS Books A easy and systematic introductory text is Essentials of Medical Statistics by Betty Kirkwood, published by Blackwell at about 14. DESCRIPTIVE
More informationReporting Statistics in Psychology
This document contains general guidelines for the reporting of statistics in psychology research. The details of statistical reporting vary slightly among different areas of science and also among different
More informationBill Burton Albert Einstein College of Medicine william.burton@einstein.yu.edu April 28, 2014 EERS: Managing the Tension Between Rigor and Resources 1
Bill Burton Albert Einstein College of Medicine william.burton@einstein.yu.edu April 28, 2014 EERS: Managing the Tension Between Rigor and Resources 1 Calculate counts, means, and standard deviations Produce
More informationt Tests in Excel The Excel Statistical Master By Mark Harmon Copyright 2011 Mark Harmon
t-tests in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. mark@excelmasterseries.com www.excelmasterseries.com
More informationResearch Methods & Experimental Design
Research Methods & Experimental Design 16.422 Human Supervisory Control April 2004 Research Methods Qualitative vs. quantitative Understanding the relationship between objectives (research question) and
More informationDescriptive Statistics. Purpose of descriptive statistics Frequency distributions Measures of central tendency Measures of dispersion
Descriptive Statistics Purpose of descriptive statistics Frequency distributions Measures of central tendency Measures of dispersion Statistics as a Tool for LIS Research Importance of statistics in research
More informationIntroduction to Statistics with GraphPad Prism (5.01) Version 1.1
Babraham Bioinformatics Introduction to Statistics with GraphPad Prism (5.01) Version 1.1 Introduction to Statistics with GraphPad Prism 2 Licence This manual is 2010-11, Anne Segonds-Pichon. This manual
More informationTesting Group Differences using T-tests, ANOVA, and Nonparametric Measures
Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures Jamie DeCoster Department of Psychology University of Alabama 348 Gordon Palmer Hall Box 870348 Tuscaloosa, AL 35487-0348 Phone:
More informationSimple Tricks for Using SPSS for Windows
Simple Tricks for Using SPSS for Windows Chapter 14. Follow-up Tests for the Two-Way Factorial ANOVA The Interaction is Not Significant If you have performed a two-way ANOVA using the General Linear Model,
More informationOutline. Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test
The t-test Outline Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test - Dependent (related) groups t-test - Independent (unrelated) groups t-test Comparing means Correlation
More informationData Mining Techniques Chapter 5: The Lure of Statistics: Data Mining Using Familiar Tools
Data Mining Techniques Chapter 5: The Lure of Statistics: Data Mining Using Familiar Tools Occam s razor.......................................................... 2 A look at data I.........................................................
More informationMBA 611 STATISTICS AND QUANTITATIVE METHODS
MBA 611 STATISTICS AND QUANTITATIVE METHODS Part I. Review of Basic Statistics (Chapters 1-11) A. Introduction (Chapter 1) Uncertainty: Decisions are often based on incomplete information from uncertain
More informationDESCRIPTIVE STATISTICS AND EXPLORATORY DATA ANALYSIS
DESCRIPTIVE STATISTICS AND EXPLORATORY DATA ANALYSIS SEEMA JAGGI Indian Agricultural Statistics Research Institute Library Avenue, New Delhi - 110 012 seema@iasri.res.in 1. Descriptive Statistics Statistics
More informationAnalysis of Student Retention Rates and Performance Among Fall, 2013 Alternate College Option (ACO) Students
1 Analysis of Student Retention Rates and Performance Among Fall, 2013 Alternate College Option (ACO) Students A Study Commissioned by the Persistence Committee Policy Working Group prepared by Jeffrey
More information1 Nonparametric Statistics
1 Nonparametric Statistics When finding confidence intervals or conducting tests so far, we always described the population with a model, which includes a set of parameters. Then we could make decisions
More informationNon Parametric Inference
Maura Department of Economics and Finance Università Tor Vergata Outline 1 2 3 Inverse distribution function Theorem: Let U be a uniform random variable on (0, 1). Let X be a continuous random variable
More informationScatter Plots with Error Bars
Chapter 165 Scatter Plots with Error Bars Introduction The procedure extends the capability of the basic scatter plot by allowing you to plot the variability in Y and X corresponding to each point. Each
More informationStatistics for Sports Medicine
Statistics for Sports Medicine Suzanne Hecht, MD University of Minnesota (suzanne.hecht@gmail.com) Fellow s Research Conference July 2012: Philadelphia GOALS Try not to bore you to death!! Try to teach
More informationIntroduction to Analysis of Variance (ANOVA) Limitations of the t-test
Introduction to Analysis of Variance (ANOVA) The Structural Model, The Summary Table, and the One- Way ANOVA Limitations of the t-test Although the t-test is commonly used, it has limitations Can only
More informationStudy Guide for the Final Exam
Study Guide for the Final Exam When studying, remember that the computational portion of the exam will only involve new material (covered after the second midterm), that material from Exam 1 will make
More informationT test as a parametric statistic
KJA Statistical Round pissn 2005-619 eissn 2005-7563 T test as a parametric statistic Korean Journal of Anesthesiology Department of Anesthesia and Pain Medicine, Pusan National University School of Medicine,
More informationQUANTITATIVE METHODS BIOLOGY FINAL HONOUR SCHOOL NON-PARAMETRIC TESTS
QUANTITATIVE METHODS BIOLOGY FINAL HONOUR SCHOOL NON-PARAMETRIC TESTS This booklet contains lecture notes for the nonparametric work in the QM course. This booklet may be online at http://users.ox.ac.uk/~grafen/qmnotes/index.html.
More informationAn analysis method for a quantitative outcome and two categorical explanatory variables.
Chapter 11 Two-Way ANOVA An analysis method for a quantitative outcome and two categorical explanatory variables. If an experiment has a quantitative outcome and two categorical explanatory variables that
More informationDescriptive Statistics
Y520 Robert S Michael Goal: Learn to calculate indicators and construct graphs that summarize and describe a large quantity of values. Using the textbook readings and other resources listed on the web
More informationOnce saved, if the file was zipped you will need to unzip it. For the files that I will be posting you need to change the preferences.
1 Commands in JMP and Statcrunch Below are a set of commands in JMP and Statcrunch which facilitate a basic statistical analysis. The first part concerns commands in JMP, the second part is for analysis
More informationCOMPARING DATA ANALYSIS TECHNIQUES FOR EVALUATION DESIGNS WITH NON -NORMAL POFULP_TIOKS Elaine S. Jeffers, University of Maryland, Eastern Shore*
COMPARING DATA ANALYSIS TECHNIQUES FOR EVALUATION DESIGNS WITH NON -NORMAL POFULP_TIOKS Elaine S. Jeffers, University of Maryland, Eastern Shore* The data collection phases for evaluation designs may involve
More informationNAG C Library Chapter Introduction. g08 Nonparametric Statistics
g08 Nonparametric Statistics Introduction g08 NAG C Library Chapter Introduction g08 Nonparametric Statistics Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Parametric and Nonparametric
More informationPremaster Statistics Tutorial 4 Full solutions
Premaster Statistics Tutorial 4 Full solutions Regression analysis Q1 (based on Doane & Seward, 4/E, 12.7) a. Interpret the slope of the fitted regression = 125,000 + 150. b. What is the prediction for
More informationANALYSIS OF TREND CHAPTER 5
ANALYSIS OF TREND CHAPTER 5 ERSH 8310 Lecture 7 September 13, 2007 Today s Class Analysis of trends Using contrasts to do something a bit more practical. Linear trends. Quadratic trends. Trends in SPSS.
More informationFoundation of Quantitative Data Analysis
Foundation of Quantitative Data Analysis Part 1: Data manipulation and descriptive statistics with SPSS/Excel HSRS #10 - October 17, 2013 Reference : A. Aczel, Complete Business Statistics. Chapters 1
More information12: Analysis of Variance. Introduction
1: Analysis of Variance Introduction EDA Hypothesis Test Introduction In Chapter 8 and again in Chapter 11 we compared means from two independent groups. In this chapter we extend the procedure to consider
More informationLeast Squares Estimation
Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David
More informationSample Size and Power in Clinical Trials
Sample Size and Power in Clinical Trials Version 1.0 May 011 1. Power of a Test. Factors affecting Power 3. Required Sample Size RELATED ISSUES 1. Effect Size. Test Statistics 3. Variation 4. Significance
More informationOne-Way Analysis of Variance (ANOVA) Example Problem
One-Way Analysis of Variance (ANOVA) Example Problem Introduction Analysis of Variance (ANOVA) is a hypothesis-testing technique used to test the equality of two or more population (or treatment) means
More informationSPSS 3: COMPARING MEANS
SPSS 3: COMPARING MEANS UNIVERSITY OF GUELPH LUCIA COSTANZO lcostanz@uoguelph.ca REVISED SEPTEMBER 2012 CONTENTS SPSS availability... 2 Goals of the workshop... 2 Data for SPSS Sessions... 3 Statistical
More informationA ew onparametric Levene Test for Equal Variances
Psicológica (200), 3, 40-430. A ew onparametric Test for Equal Variances David W. Nordstokke * () & Bruno D. Zumbo (2) () University of Calgary, Canada (2) University of British Columbia, Canada Tests
More informationUNDERSTANDING ANALYSIS OF COVARIANCE (ANCOVA)
UNDERSTANDING ANALYSIS OF COVARIANCE () In general, research is conducted for the purpose of explaining the effects of the independent variable on the dependent variable, and the purpose of research design
More information