Analysis of Variance: General Concepts
Research Skills for Psychology Majors: Everything You Need to Know to Get Started

This chapter is designed to present the most basic ideas in analysis of variance (ANOVA) in a non-statistical manner. Its intent is to communicate the general idea of the analysis and to provide enough information to begin to read research result sections that report ANOVA analyses.

Analysis of variance is a general-purpose statistical procedure that is used to analyze a wide range of research designs and to investigate many complex problems. In this chapter we will discuss only the original, basic use of ANOVA: the analysis of experiments that include more than two groups. When ANOVA is used in this simple sense, it follows directly from a still simpler procedure, the t-test. The t-test compares two groups, either in a between-subjects design (different subjects in the groups) or a repeated-measures design (the same subjects assessed twice). ANOVA can be thought of as an extension of the t-test to situations in which there are more than two groups (a one-way design) or more than one independent variable (a factorial design). These situations are the most common in research, so ANOVA is used far more frequently than the t-test.

Variance is Analyzed

The name "analysis of variance" is more representative of what the analysis is about than "t-test," because we are in fact focusing on analyzing variances. The conceptual model for ANOVA follows the familiar pattern first introduced in the Inferential Statistics chapter: a ratio is formed between the differences among the means of the groups and the error variance. In the same way that a variance (or standard deviation) can be calculated from a set of data, a variance can be calculated from a set of means. So the differences among the means are thought of as their variance: higher variance among the means indicates that there are more differences (which is good, right?).
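The point that a variance can be calculated from a set of means, just as from raw scores, is easy to demonstrate. A minimal sketch with made-up group means (not the chapter's data):

```python
from statistics import stdev, variance

# Four made-up group means, treated as a data set in their own right.
means = [90, 96, 106, 120]

# The spread of the means is the seed of the "between-groups" idea:
# more variance among the means signals bigger group differences.
spread = variance(means)
print(spread, stdev(means))
```

Higher `spread` here corresponds to group means that sit farther apart, which is exactly what the between-groups variance captures.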
The variance among the group means is called the between-groups variance.

A Deeper Truth: Actually, the t-test is a special case of ANOVA. ANOVA is the real thing.

A Still Deeper Truth: Actually, ANOVA is a simplification of very complex correlations. Correlation is the real thing.

The ratio, then, is the between-groups variance divided by the error variance. A larger ratio indicates that the differences between the groups are greater than the error, or noise, going on inside the groups. If this ratio, the F statistic, is large enough given the size of the sample, we can reject the null hypothesis. The whole story in ANOVA is figuring out how to calculate (and understand) these two types of variance.

W. K. Gabrenya Jr. Version: 1.0
A Visual Example

Here is an example of a one-way, between-groups design that would be analyzed using ANOVA. Four groups of participants are randomly sampled from four majors on campus. We will not identify the majors for the sake of interdepartmental harmony, but the identity of Group 4 is clear. Each sample includes five students. They are each administered the Wechsler Adult Intelligence Scale (WAIS-III) to obtain a measure of IQ. IQs have a mean of 100 in the population as a whole. Our question: which major is smarter?

The following table presents the raw data (IQ scores), the mean within each group, the standard deviation within each group, and the variance. The variance is simply the SD squared, a more useful number for certain aspects of the calculations. It is normal that some of the SDs are larger than others. The gray bars below the scale represent the range of the IQs in each major, which is one indication of the within-group variability. (A wider range often produces a higher SD.) In the last column, the mean of the means (grand mean), the standard deviation of the means, and the variance of the means are presented.

Group 1:     80, 85, 90, 95, 100     Mean: 90.0   Std. Dev.: 7.9
Group 2:     90, 93, 96, 99, …       Mean: …      Std. Dev.: 4.7
Group 3:     100, 103, 106, …        Mean: …      Std. Dev.: …
Group 4:     110, 115, 120, 125, …   Mean: …      Std. Dev.: …
Grand mean:  101.0                   SD of the means: 10.7

[Figure: gray bars showing the range of IQs in each group along the IQ scale, with the group means and the grand mean marked.]

What's the null hypothesis? The null condition is that there is no difference between the population means: H0: μ1 = μ2 = μ3 = μ4, where μ is mu, the population mean. Our task is to determine whether the sample means, presented in the table above, are sufficiently different from each other, compared to the error variance within the groups, to reject the null hypothesis. Of course the sample size will also affect the outcome, because larger samples allow for better tests of the null hypothesis. In the language of ANOVA, we will look at the ratio of the between-groups variance to the within-group (error) variance.
F = Between-groups variance / Error variance within groups
In the example, we have included the individual data for Group 1 as circles inside the Group 1 gray bar. The SD of Group 1, 7.9, is computed from these 5 values. Recall that the SD is the variability of the individual data based on how distant each one is from the group mean (90.0). In other words, it is a measure of the extent to which the five students sampled for that major are not exactly of the same intelligence. The students in Group 2 are more similar to each other and produce an SD of 4.7. The overall error variance for the sample is computed by combining these four SDs (see sidebar).

Calculating the Variances

Within-Groups (Error) Variance: The overall amount of error variance is the combined variances of the four groups. Combining the variances from several groups together is called pooling, so the resulting combined variance is termed the pooled variance. Averaging the variances in this study produces a pooled error variance of 42.5.

Between-Groups Variance: Calculation of the between-groups variance is not as intuitive as the within-groups variance. Conceptually, it seems that the SD of the four group means would be a good measure. (The SD of the means is 10.7.) However, the actual between-groups SD is 24.0, so the between-groups variance is 24² = 577.

The between-groups variability is computed in the same way, but we look at how much the group means vary from the grand mean (the mean of the means). The higher this variability, the more the means differ from each other and the more the null hypothesis looks rejectable. (See sidebar.)

Finally, the ANOVA

The ANOVA focuses on the ratio of the between-groups variance to the within-groups variance. SPSS produces an ANOVA source table to report the result of the analysis. This table is called a source table because it identifies the sources of variability in the data. As explained above, there are two kinds of variability: variability between group means, and variability within groups (error variance).
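The sidebar's two calculations can be sketched directly in code. This is an illustrative reconstruction, not the chapter's actual analysis: only Group 1's scores come from the table above, and the other three groups are hypothetical stand-ins, so the resulting F differs from the chapter's. With equal group sizes, pooling reduces to averaging the group variances, and the between-groups variance is the variance of the group means scaled up by the per-group n.

```python
from statistics import mean, variance

# Group 1 matches the chapter's table; Groups 2-4 are hypothetical
# stand-ins, so these results will not reproduce the chapter's numbers.
groups = [
    [80, 85, 90, 95, 100],      # Group 1 (from the chapter)
    [90, 93, 96, 99, 102],      # hypothetical
    [100, 103, 106, 109, 112],  # hypothetical
    [110, 115, 120, 125, 130],  # hypothetical
]
n = len(groups[0])  # five students per group

# Within-groups (error) variance: with equal ns, the pooled variance
# is just the average of the four group variances.
ms_within = mean(variance(g) for g in groups)

# Between-groups variance: the variance of the group means, scaled
# up by the group size n.
group_means = [mean(g) for g in groups]
ms_between = n * variance(group_means)

f_stat = ms_between / ms_within
print(f"MS_within = {ms_within}, MS_between = {ms_between}, F = {f_stat:.2f}")
```

The scaling by n is also why the "actual" between-groups SD is larger than the raw SD of the means: in the chapter's numbers, 5 × 10.7² ≈ 572, which matches the reported between-groups variance of 577 within rounding.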
ANOVA Source Table

[Source table: columns (1) Source, (2) Sum of Squares, (3) df, (4) Mean Square, (5) F, (6) Sig; rows Between Groups, Within Groups, Total.]

Degrees of Freedom in ANOVA

All statistics, such as F, t, and chi-square, are evaluated in the context of the sample size: larger samples allow lower statistical values to reach the magic .05 level of confidence. The sample size is expressed in terms of degrees of freedom (df). Your statistics class has more to say about df. In a t-test, the df is the sample size minus 2 (N − 2). In ANOVA, we use two df values. The df-error is based on the sample size:

df_e = Σ(n_g − 1), where n_g is the size of each of the group samples

16 = (5 − 1) + (5 − 1) + (5 − 1) + (5 − 1)

ANOVA also requires a df for the number of groups:

df_bg = g − 1, where g is the number of groups

The F statistic is always presented along with these df values, e.g., F(3,16) = 13.6, p < .0001.

The source table provides information about these two sources. The column numbers have been added for our use. Column 3, reflecting the number of groups and the sample size, is discussed in the sidebar. Column 4 presents the variance associated with the mean differences (between groups) and the within-group error. These numbers are discussed in the Variances sidebar. Column 5 is the ratio of these two values, the F statistic. Column 6 presents the p-value (see the Inferential Statistics chapter) of the F statistic based on the sample size. Because our normal criterion for rejecting the null hypothesis is p < .05, this p value is very good (good = low), and
we can reject the null hypothesis. What has been rejected? By rejecting the null hypothesis, we conclude that the four means are not equal in the population; that is, all majors are not created equal. However, what it does not tell us is exactly which major is smarter than which other major. Is Group 4 smarter than Group 2, or just smarter than the hapless Group 1? Just eyeballing the means is not good enough: we need to know if the differences between particular pairs of means are really significantly different. How is this done? One way is to perform t-tests between pairs of means (there are several other ways as well).

Using SPSS to Calculate One-Way ANOVA

A one-way ANOVA is an analysis in which there is only one independent variable, as in the preceding example. This is the simplest kind of ANOVA, and SPSS dedicates a procedure purely to it. (See menu screen illustration.) The dialog window in which the details of the analysis are entered is quite simple. In the dialog illustration, the dependent variable (IQ) and the independent variable (group) have been entered. In the Options... dialog you can ask for descriptive statistics and a rather sorry-looking graph of the means.

Syntax:
ONEWAY iq BY group
  /STATISTICS DESCRIPTIVES
  /PLOT MEANS
  /MISSING ANALYSIS.

The principal output of the procedure is the source table shown above. In a paper, the appropriate way to report the results of an ANOVA is a variation of:

A one-way between-groups ANOVA revealed a significant effect of major, F(3,16) = 13.6, p < .05.

Note that the ANOVA used two types of degrees of freedom: the between-groups df and the error df.
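Those two df values can be computed directly from the design. A small sketch for this chapter's layout of four groups of five students:

```python
# Degrees of freedom for the one-way ANOVA: four groups of five students.
group_sizes = [5, 5, 5, 5]
g = len(group_sizes)

df_between = g - 1                          # df_bg = g - 1
df_error = sum(n - 1 for n in group_sizes)  # df_e = sum of (n_g - 1)

# These are the dfs reported alongside F, as in F(3,16) = 13.6.
print(f"F({df_between},{df_error})")
```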
Factorial ANOVA

If indeed the truth lies in the interactions, then we need to perform more complicated studies that include more than one IV. Factorial designs of this kind were introduced in the research designs chapter. For example, in the study presented above, we might want to know if gender is related to IQ. The obvious design would be a 4x2 between-subjects factorial: four majors crossed with gender. In the table below, the 40 students are indicated by S1...S40 in the 8 cells of the factorial design.

         Group 1              Group 2                  Group 3                  Group 4
Male     S1, S2, S3, S4, S5   S11, S12, S13, S14, S15  S21, S22, S23, S24, S25  S31, S32, S33, S34, S35
Female   S6, S7, S8, S9, S10  S16, S17, S18, S19, S20  S26, S27, S28, S29, S30  S36, S37, S38, S39, S40

The mathematics of a factorial ANOVA are more complicated than those of the one-way ANOVA, but the principles are the same. The ANOVA compares the variability due to between-groups differences to the amount of error variance in the sample. However, in this two-way factorial, we need to look at three types of between-groups variability: the variability between the majors, the variability between the genders, and the interaction-effect variability. A ratio (F statistic) of between-subjects variability to error variance is calculated for each of these three types of between-groups variability. How many null hypotheses are there?

SPSS and Factorial ANOVA

The simple one-way ANOVA procedure cannot be used. Instead, factorial ANOVAs are produced by the SPSS GLM procedure. GLM means general linear model. You will study the GLM in your second year of graduate-level statistics. GLM is a very powerful and flexible procedure that was only introduced to SPSS in the 1980s. Because it is powerful and flexible, it can be configured in many ways and has a large number of options. Univariate refers to the fact that you will be analyzing one dependent variable at a time.
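The cell layout above can be generated programmatically. A sketch that assigns subject labels S1 through S40 to the eight cells in the same order as the table (major by major, the Male cell before the Female cell):

```python
# Build the 4x2 between-subjects factorial: 40 subjects, 8 cells of 5.
# Subjects fill each major's Male cell, then its Female cell.
majors = ["Group 1", "Group 2", "Group 3", "Group 4"]
genders = ["Male", "Female"]

subject_ids = iter(range(1, 41))
design = {
    (major, gender): [f"S{next(subject_ids)}" for _ in range(5)]
    for major in majors
    for gender in genders
}

print(design[("Group 1", "Male")])
print(design[("Group 4", "Female")])
```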
The IQ-across-majors study presented previously was enhanced by adding gender as a second independent variable, to serve as an example of a factorial ANOVA. The analysis dialog box shown here has been configured to run this 4x2 ANOVA. Use the Fixed Factors box for the IVs. Ignore the boxes below that until you get to graduate school. You can specify in detail which means tables you would like to see displayed in the output by clicking on Options.

What Other Goodies are in this Menu?

Multivariate ANOVA (MANOVA) allows you to analyze several DVs simultaneously, in a single set. Repeated Measures ANOVA analyzes the repeated-measures designs introduced in the research designs chapter.
Double-clicking on the items in the left-side box moves them to the right-side Display Means for box. In this case, moving group to the Display Means box produces a means table that includes just the main effect of group. The group*gender item displays a 4x2 table of means from which you can see if there is an interaction effect.

Syntax:
UNIANOVA iq BY group gender
  /METHOD = SSTYPE(3)
  /INTERCEPT = INCLUDE
  /EMMEANS = TABLES(group*gender)
  /CRITERIA = ALPHA(.05)
  /DESIGN = group gender group*gender.

The Output

The source table in a factorial ANOVA expands on that of the one-way ANOVA. Two additional sources are reported: the second IV, and the interaction effect. (See the Tests of Between-Subjects Effects table.) The only rows of importance in this source table are those indicating the effects in the factorial model: GROUP, GENDER, and GROUP*GENDER. The F statistics in this type of source table are calculated by dividing a factor's Mean Square by the Mean Square of the Error row. (Mean Square is another way of saying variance.) Hence, F for the Group factor is MS_group/MS_error; with MS_group = 477.2, this ratio works out to the F of 33.7 reported for the Group effect. These results show that the Group and Gender main effects are significant at a very low p value. SPSS will not print all of the significant digits of a very small p value. For Group, the actual p value is vanishingly small, but no one cares because it is so far below .05. The Group X Gender interaction is not significant because the p value is so large (p = .567). In a paper, there are several forms for reporting the results of a factorial ANOVA:

A 4 (major) x 2 (gender) between-groups ANOVA revealed
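The Mean Square arithmetic described above can be sketched as follows. Only the Group Mean Square (477.2) comes from the chapter; the other values, including MS_error, are assumptions chosen for illustration, so the resulting Fs only roughly resemble the reported ones.

```python
# Each effect's F in a factorial source table is that effect's Mean
# Square divided by the error Mean Square (Mean Square = variance).
mean_squares = {
    "GROUP": 477.2,        # from the chapter's text
    "GENDER": 480.0,       # hypothetical
    "GROUP*GENDER": 8.0,   # hypothetical
}
ms_error = 14.2            # hypothetical

f_ratios = {effect: ms / ms_error for effect, ms in mean_squares.items()}
for effect, f in f_ratios.items():
    print(f"{effect}: F = {f:.1f}")
```

Note that every effect shares the same denominator, MS_error: the three F tests differ only in the numerator.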
significant main effects of major, F(3,12) = 33.7, p < .05, and gender, F(1,12) = 33.9, p < .05. The interaction effect did not approach significance, F < 1.

or, if the interaction had been stronger:

A 4 (major) X 2 (gender) between-groups ANOVA revealed significant main effects of major, F(3,12) = 33.7, p < .05, and gender, F(1,12) = 33.9, p < .05. However, these main effects must be interpreted within the significant Major X Gender interaction, F(3,12) = 8.5, p < .05.

Note that the ANOVA used two types of degrees of freedom: the between-groups df and the error df.

Digging Deeper

Overall, do major and gender help us know what students' IQs are? Said another way, do major and gender predict IQ? The Corrected Model row in the source table answers this general question: yes. The idea of a model was introduced in an early chapter. Here, the model is expressed mathematically:

IQ = ƒ(major, gender)

The Corrected Model essentially combines all the predictors of IQ (Group, Gender, and their interaction) to see if, as a whole, they predict the dependent variable. (Hint: add the df.) Of course, we usually don't care about the whole model, but rather only about its component parts, the individual IVs.

[Tests of Between-Subjects Effects table; dependent variable: IQ. Rows: Corrected Model, Intercept, GROUP, GENDER, GROUP * GENDER, Error, Total, Corrected Total. Columns: Type III Sum of Squares, df, Mean Square, F, Sig. R Squared = .929 (Adjusted R Squared = .888).]

Reprise of Still Deeper Truth

The intercept reveals a clue to the ridiculous conspiracy theory that ANOVA is just a lot of correlations. Do you remember the equation for a line from algebra? In statistics we call this a regression line, and write the equation as:

y = a + bx + e

where y is the dependent variable (IQ); x is, sort of, the independent variables (major, gender, and the interaction, all rolled into one, sort of); a is the y-intercept of the line; b is the slope of the line; and e is the error variance.

In the ANOVA table, the intercept F-test is testing whether the y-intercept (a) is different from zero. In a certain sense, the Corrected Model F-test is testing whether the slope (b) is different from zero. When the slope is different from zero, the independent variables (x) affect the dependent variable (y). In the manner of a correlation, a slope (b) near 1.0 and a low error (e) give us a correlation scattergram with a long, skinny oval (i.e., a good correlation). Error variance (e) is analogous to the fatness of the oval.

[Figure: a regression line with y-intercept (a) and slope (b) plotted on axes y (DV) and x (IV); a skinny oval with a slope near 1.0 indicates a high correlation coefficient.]

The Intercept row in the table is
not usually important. It compares the grand mean (101.0) to zero. Because 101 is so far from zero, the F is enormous. (But see the sidebar for its deeper meaning.)

What's Next?

A lot more...
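As a final aside, the sidebar's regression line y = a + bx + e can be estimated with ordinary least squares in a few lines. This is a generic sketch with made-up, perfectly linear data, not anything from the chapter's analysis; the fitted a and b are the intercept and slope that the Intercept and Corrected Model F-tests probe, and the scatter left around the line plays the role of e.

```python
from statistics import mean

# Tiny made-up data set; exactly linear, so a = 1 and b = 2 with no error.
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]

# Ordinary least squares: slope b from the cross-products, then the
# intercept a from the point of means (x_bar, y_bar).
x_bar, y_bar = mean(x), mean(y)
b = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
     / sum((xi - x_bar) ** 2 for xi in x))
a = y_bar - b * x_bar

print(f"y = {a} + {b}x")
```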
Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares
More informationResearch Methods & Experimental Design
Research Methods & Experimental Design 16.422 Human Supervisory Control April 2004 Research Methods Qualitative vs. quantitative Understanding the relationship between objectives (research question) and
More informationJanuary 26, 2009 The Faculty Center for Teaching and Learning
THE BASICS OF DATA MANAGEMENT AND ANALYSIS A USER GUIDE January 26, 2009 The Faculty Center for Teaching and Learning THE BASICS OF DATA MANAGEMENT AND ANALYSIS Table of Contents Table of Contents... i
More informationComparing Means in Two Populations
Comparing Means in Two Populations Overview The previous section discussed hypothesis testing when sampling from a single population (either a single mean or two means from the same population). Now we
More informationIntroduction. Hypothesis Testing. Hypothesis Testing. Significance Testing
Introduction Hypothesis Testing Mark Lunt Arthritis Research UK Centre for Ecellence in Epidemiology University of Manchester 13/10/2015 We saw last week that we can never know the population parameters
More information2. Simple Linear Regression
Research methods - II 3 2. Simple Linear Regression Simple linear regression is a technique in parametric statistics that is commonly used for analyzing mean response of a variable Y which changes according
More informationCOMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES.
277 CHAPTER VI COMPARISONS OF CUSTOMER LOYALTY: PUBLIC & PRIVATE INSURANCE COMPANIES. This chapter contains a full discussion of customer loyalty comparisons between private and public insurance companies
More informationRegression and Correlation
Regression and Correlation Topics Covered: Dependent and independent variables. Scatter diagram. Correlation coefficient. Linear Regression line. by Dr.I.Namestnikova 1 Introduction Regression analysis
More informationHow to Get More Value from Your Survey Data
Technical report How to Get More Value from Your Survey Data Discover four advanced analysis techniques that make survey research more effective Table of contents Introduction..............................................................2
More information10. Comparing Means Using Repeated Measures ANOVA
10. Comparing Means Using Repeated Measures ANOVA Objectives Calculate repeated measures ANOVAs Calculate effect size Conduct multiple comparisons Graphically illustrate mean differences Repeated measures
More informationAnalyzing Intervention Effects: Multilevel & Other Approaches. Simplest Intervention Design. Better Design: Have Pretest
Analyzing Intervention Effects: Multilevel & Other Approaches Joop Hox Methodology & Statistics, Utrecht Simplest Intervention Design R X Y E Random assignment Experimental + Control group Analysis: t
More informationMultivariate Analysis of Variance (MANOVA)
Multivariate Analysis of Variance (MANOVA) Aaron French, Marcelo Macedo, John Poulsen, Tyler Waterson and Angela Yu Keywords: MANCOVA, special cases, assumptions, further reading, computations Introduction
More informationRegression step-by-step using Microsoft Excel
Step 1: Regression step-by-step using Microsoft Excel Notes prepared by Pamela Peterson Drake, James Madison University Type the data into the spreadsheet The example used throughout this How to is a regression
More informationCase Study in Data Analysis Does a drug prevent cardiomegaly in heart failure?
Case Study in Data Analysis Does a drug prevent cardiomegaly in heart failure? Harvey Motulsky hmotulsky@graphpad.com This is the first case in what I expect will be a series of case studies. While I mention
More informationCHAPTER 13 SIMPLE LINEAR REGRESSION. Opening Example. Simple Regression. Linear Regression
Opening Example CHAPTER 13 SIMPLE LINEAR REGREION SIMPLE LINEAR REGREION! Simple Regression! Linear Regression Simple Regression Definition A regression model is a mathematical equation that descries the
More informationHaving a coin come up heads or tails is a variable on a nominal scale. Heads is a different category from tails.
Chi-square Goodness of Fit Test The chi-square test is designed to test differences whether one frequency is different from another frequency. The chi-square test is designed for use with data on a nominal
More informationAn introduction to IBM SPSS Statistics
An introduction to IBM SPSS Statistics Contents 1 Introduction... 1 2 Entering your data... 2 3 Preparing your data for analysis... 10 4 Exploring your data: univariate analysis... 14 5 Generating descriptive
More informationOne-Way Analysis of Variance (ANOVA) Example Problem
One-Way Analysis of Variance (ANOVA) Example Problem Introduction Analysis of Variance (ANOVA) is a hypothesis-testing technique used to test the equality of two or more population (or treatment) means
More informationBasic Concepts in Research and Data Analysis
Basic Concepts in Research and Data Analysis Introduction: A Common Language for Researchers...2 Steps to Follow When Conducting Research...3 The Research Question... 3 The Hypothesis... 4 Defining the
More informationSection 14 Simple Linear Regression: Introduction to Least Squares Regression
Slide 1 Section 14 Simple Linear Regression: Introduction to Least Squares Regression There are several different measures of statistical association used for understanding the quantitative relationship
More informationIntroduction to Quantitative Methods
Introduction to Quantitative Methods October 15, 2009 Contents 1 Definition of Key Terms 2 2 Descriptive Statistics 3 2.1 Frequency Tables......................... 4 2.2 Measures of Central Tendencies.................
More informationLAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING
LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in engineering setting.
More informationChapter 23. Inferences for Regression
Chapter 23. Inferences for Regression Topics covered in this chapter: Simple Linear Regression Simple Linear Regression Example 23.1: Crying and IQ The Problem: Infants who cry easily may be more easily
More informationTwo Related Samples t Test
Two Related Samples t Test In this example 1 students saw five pictures of attractive people and five pictures of unattractive people. For each picture, the students rated the friendliness of the person
More informationFactor Analysis. Chapter 420. Introduction
Chapter 420 Introduction (FA) is an exploratory technique applied to a set of observed variables that seeks to find underlying factors (subsets of variables) from which the observed variables were generated.
More information1/27/2013. PSY 512: Advanced Statistics for Psychological and Behavioral Research 2
PSY 512: Advanced Statistics for Psychological and Behavioral Research 2 Introduce moderated multiple regression Continuous predictor continuous predictor Continuous predictor categorical predictor Understand
More informationUsing MS Excel to Analyze Data: A Tutorial
Using MS Excel to Analyze Data: A Tutorial Various data analysis tools are available and some of them are free. Because using data to improve assessment and instruction primarily involves descriptive and
More informationIntroduction to Data Analysis in Hierarchical Linear Models
Introduction to Data Analysis in Hierarchical Linear Models April 20, 2007 Noah Shamosh & Frank Farach Social Sciences StatLab Yale University Scope & Prerequisites Strong applied emphasis Focus on HLM
More informationSIMPLE LINEAR CORRELATION. r can range from -1 to 1, and is independent of units of measurement. Correlation can be done on two dependent variables.
SIMPLE LINEAR CORRELATION Simple linear correlation is a measure of the degree to which two variables vary together, or a measure of the intensity of the association between two variables. Correlation
More informationSection Format Day Begin End Building Rm# Instructor. 001 Lecture Tue 6:45 PM 8:40 PM Silver 401 Ballerini
NEW YORK UNIVERSITY ROBERT F. WAGNER GRADUATE SCHOOL OF PUBLIC SERVICE Course Syllabus Spring 2016 Statistical Methods for Public, Nonprofit, and Health Management Section Format Day Begin End Building
More informationChapter 13. Chi-Square. Crosstabs and Nonparametric Tests. Specifically, we demonstrate procedures for running two separate
1 Chapter 13 Chi-Square This section covers the steps for running and interpreting chi-square analyses using the SPSS Crosstabs and Nonparametric Tests. Specifically, we demonstrate procedures for running
More informationSPSS Guide How-to, Tips, Tricks & Statistical Techniques
SPSS Guide How-to, Tips, Tricks & Statistical Techniques Support for the course Research Methodology for IB Also useful for your BSc or MSc thesis March 2014 Dr. Marijke Leliveld Jacob Wiebenga, MSc CONTENT
More informationModule 3: Correlation and Covariance
Using Statistical Data to Make Decisions Module 3: Correlation and Covariance Tom Ilvento Dr. Mugdim Pašiƒ University of Delaware Sarajevo Graduate School of Business O ften our interest in data analysis
More informationUNDERSTANDING ANALYSIS OF COVARIANCE (ANCOVA)
UNDERSTANDING ANALYSIS OF COVARIANCE () In general, research is conducted for the purpose of explaining the effects of the independent variable on the dependent variable, and the purpose of research design
More informationChapter 7 Section 7.1: Inference for the Mean of a Population
Chapter 7 Section 7.1: Inference for the Mean of a Population Now let s look at a similar situation Take an SRS of size n Normal Population : N(, ). Both and are unknown parameters. Unlike what we used
More information5. Correlation. Open HeightWeight.sav. Take a moment to review the data file.
5. Correlation Objectives Calculate correlations Calculate correlations for subgroups using split file Create scatterplots with lines of best fit for subgroups and multiple correlations Correlation The
More informationModule 5: Multiple Regression Analysis
Using Statistical Data Using to Make Statistical Decisions: Data Multiple to Make Regression Decisions Analysis Page 1 Module 5: Multiple Regression Analysis Tom Ilvento, University of Delaware, College
More informationFinal Exam Practice Problem Answers
Final Exam Practice Problem Answers The following data set consists of data gathered from 77 popular breakfast cereals. The variables in the data set are as follows: Brand: The brand name of the cereal
More informationData analysis process
Data analysis process Data collection and preparation Collect data Prepare codebook Set up structure of data Enter data Screen data for errors Exploration of data Descriptive Statistics Graphs Analysis
More informationRandomized Block Analysis of Variance
Chapter 565 Randomized Block Analysis of Variance Introduction This module analyzes a randomized block analysis of variance with up to two treatment factors and their interaction. It provides tables of
More informationStatistical tests for SPSS
Statistical tests for SPSS Paolo Coletti A.Y. 2010/11 Free University of Bolzano Bozen Premise This book is a very quick, rough and fast description of statistical tests and their usage. It is explicitly
More information