SPSS Guide: How-to, Tips, Tricks & Statistical Techniques

Support for the course Research Methodology for IB
Also useful for your BSc or MSc thesis

March 2014

Dr. Marijke Leliveld
Jacob Wiebenga, MSc

CONTENT

Introduction
Description of the dataset
Step 1: Preparing the dataset
  Goal: Label the variables
  Goal: Checking outliers
  Goal: Checking assumptions for regression analysis
Step 2: Descriptive statistics
  Goal: Distribution and means
Step 3: Correlations and reliability analysis
  Goal: Correlations between variables
  Goal: Test reliability
  Goal: Recoding variables
Step 4: Making a new sum variable
  Goal: Making a sum variable
Step 5: Technique choice
  Goal: Choosing the analysis technique
Step 6: The control variables
  Goal: T-test independent samples
  Goal: Linear regression
  Goal: One-way ANOVA
  Goal: One-way ANOVA (2)
Step 7: Testing the hypothesis
  Goal: Kruskal-Wallis test
  Goal: Linear regression (2)
  Goal: Paired samples t-test
  Goal: Cross-table with chi-square
  Goal: Mann-Whitney U-test
  Goal: MANOVA
  Goal: Two-way ANOVA
Appendix
  Tables for univariate and bivariate analyses

INTRODUCTION

What to do before you start...

When you start working in SPSS, it is useful to tick the box that makes sure the computer code (syntax) of the analyses you run is also saved in the output. The benefit of this might not be immediately clear to you, but it is very insightful for your tutor. You can do this by going to: Edit > Options > tab Viewer > tick the box "Display commands in the log" in the bottom left.

The techniques discussed in the chapter on control variables can also be used for testing your hypotheses, and the same goes for the analyses described under testing your hypotheses. These techniques are only grouped into those chapters because of the structure of the dataset used in this guide.

Recommended literature
- Field, A. Discovering Statistics Using SPSS. Sage. ISBN: 978-1-84787-906-6
- Huizingh, E. Applied Statistics with SPSS. Sage. ISBN: 978-1-4129-1931-9
- Keller, G. Statistics for Management and Economics (7th International Student Edition). Thomson. ISBN: 0-495-01339-0

Recommended literature for more advanced and specialized techniques:
- Janssens, W., Wijnen, K., De Pelsmacker, P., & Van Kenhove, P. (2008). Marketing Research with SPSS. Pearson Education Limited. ISBN: 978-0-273-70383-9. > e.g. logistic regression, factor analysis, SEM, cluster analysis, scaling techniques
- Tabachnick, B. G., & Fidell, L. S. (2007). Using Multivariate Statistics (5th ed.). Boston: Allyn and Bacon. ISBN-10: 0205459382; ISBN-13: 978-0205459384 > e.g. multiple regression, experimental designs (ANOVAs), and time series analyses
- Lattin, J., Carroll, J. D., & Green, P. E. (2003). Analyzing Multivariate Data. Toronto: Thomson Learning. ISBN: 0-534-34974-9 > e.g. roles of third variables in the linear model, hierarchical linear models, principal components analysis, factor analysis, latent class analysis, conjoint analysis, choice models, multidimensional scaling
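With this option ticked, every analysis you run through the menus is echoed as SPSS command syntax in the output window. As a rough illustration, requesting a frequency table from the menus shows up in the log along these lines (the variable name here anticipates the dataset described in the next section):

FREQUENCIES VARIABLES=Gender
  /ORDER=ANALYSIS.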

DESCRIPTION OF THE DATASET

The case discussed here revolves around the company Solar Energy Ltd. and a study this company did concerning the relationship between autonomy and the job satisfaction of its employees. The board of Solar Energy Ltd. expected that more autonomy leads to higher job satisfaction. They also wanted to know whether variables such as education level, gender, age and religion affect job satisfaction. The conceptual model of their study is sketched below.

[Conceptual model: Autonomy -> Job satisfaction, with Need for structure, Education level, Gender, Age and Religion as additional variables.]

In order to study this model, Solar Energy Ltd. had their employees fill out the following questionnaire:

Age: What is your age? (in years)

Gender: What is your gender? (1 = man, 2 = woman)

Religion: What is your religion? (1 = reformed, 2 = catholic, 3 = hindu, 4 = muslim, 5 = jewish, 6 = atheist, 7 = other)

Education level: What is your highest level of education? (1 = primary school, 2 = secondary school, 3 = post-secondary, 4 = Ph.D.)

Autonomy:
A1: How often is it necessary to explain yourself beforehand to a superior about the tasks that have to be performed? (1 = never, 2 = sometimes, 3 = often, 4 = very often, 5 = always)
A2: How often is it possible to use your own ideas in your tasks? (1 = never, 7 = always)
A3: How often are you dependent on tasks your colleagues are responsible for when performing your job? (1 = always, 7 = never)

Job satisfaction:
W1: How useful do you feel your job is to the company? (1 = not at all, 7 = a lot)
W2: How much fun do you have when doing your job? (1 = not at all, 7 = a lot)
W3: How satisfied are you with your job? (1 = not at all, 7 = a lot)

Need for structure:
S1: To what degree do you need a very clear description of your job? (1 = not at all, 7 = a lot)
S2: To what degree do you like working with clearly defined tasks in your job? (1 = not at all, 7 = a lot)
S3: To what degree do you experience stress when your tasks suddenly change? (1 = not at all, 7 = a lot)

In the follow-up study one year later:

Job satisfaction:
W1later: To what degree do you feel your tasks are contributing to your company? (1 = not at all, 7 = a lot)
W2later: To what degree do you experience fun when performing your tasks? (1 = not at all, 7 = a lot)
W3later: To what degree are you satisfied with your job? (1 = not at all, 7 = a lot)
T: Are you satisfied with your job? (1 = yes, 2 = no)

STEP 1: PREPARING THE DATASET

GOAL: LABEL THE VARIABLES

The goal of this section is learning how to label variables so that you can still understand what they mean later on. This is done by clearly labeling your variables and by naming the categories of each variable.

TECHNIQUE

Click the tab "Variable View" in the bottom left corner of your SPSS file and type a description of each variable in the column "Label". After this, click a cell in the column "Values", which allows you to specify each category of your variable. Example: click in the column "Values" on gender, type a 1 in the value box and "man" in the label box, then press Add. Do the same for "woman" and press Add again.

Important! When discussing your analyses and findings, never use terms like "variable A1" or "W2" (as we do in this guide), but mention their full description. Other people (clients, for example) do not know what A1 or W2 means.

GOAL: CHECKING OUTLIERS

Every statistical technique is based on several assumptions, and in order to draw the right conclusions from the results these assumptions must be met.

TECHNIQUE: OUTLIERS

When using statistics it is important to always check your data for possible outliers (extreme values on the dependent or independent variable). It is possible that these extreme values are the result of a mistake in the data input; however, these values might also be correct.

Such outliers violate the normality assumption, and therefore a nonparametric test has to be chosen in these cases. Outliers can be detected by plotting boxplots, histograms, probability plots or scatterplots.

GOAL: CHECKING ASSUMPTIONS FOR REGRESSION ANALYSIS

The assumptions covered here are relevant for regression analysis:

1. The sample should be based on independent observations.
2. There is a linear relationship between the dependent and the independent variable.
3. The residuals are normally distributed.

These assumptions also hold for the one-way ANOVA and multiple regression analysis. The assumptions regarding independent observations and normality also apply to t-tests. Nonparametric tests are not restricted by these assumptions.

TECHNIQUE: INDEPENDENT OBSERVATIONS

The respondents in the dataset must be sampled independently of each other: there cannot be any relation between the observed scores of different respondents. This also means that the expected correlation between the residuals of a regression analysis is zero (independence assumption). In the case of dependence, the estimated standard error will be smaller than the actual standard error, which leads to inefficient estimates of the regression coefficients.

TECHNIQUE: LINEAR RELATIONSHIP

There should be a linear relationship between the dependent and the independent variable: for every set of values of the independent variable, the expected mean of the residuals should equal zero. A systematic deviation from this indicates that the relationship is not linear. The problem can be detected by making a scatterplot in which the residuals (on the y-axis) are plotted against another variable (on the x-axis). If a linear relationship exists, the residuals will be spread randomly around their average (which is zero). When the scatterplot shows a non-linear pattern, another type of relationship might be applicable (e.g. a logistic relationship).

TECHNIQUE: NORMALITY ASSUMPTION

There are several methods to test for non-normality, including graphical examination and statistical tests. The easiest way is to plot a histogram of the residuals, which is particularly useful for large samples. When a more accurate analysis is required, one can use a P-P plot (normal probability plot). This is basically a scatterplot of the cumulative probabilities of a standard normal distribution against the cumulative probabilities of the observed distribution. If the normality assumption is satisfied, the dots on the P-P plot will lie on a straight line.

The normality assumption is violated if there is a systematic deviation from this line, which leads to wrong estimates of the confidence intervals and p-values. An important note: when the sample size is sufficiently large (about 200) and the number of independent variables is small (fewer than 5), the central limit theorem implies that the estimates of the confidence intervals and p-values will be accurate again, even though the normality assumption is violated.
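For reference, these checks can also be requested directly in syntax. The sketch below assumes the variable names used later in this guide (SumW2W3 as dependent variable, Age as predictor). EXAMINE produces boxplots, histograms and normality plots; the REGRESSION subcommands add a residual histogram, a normal probability plot and a plot of standardized residuals against standardized predicted values:

EXAMINE VARIABLES=SumW2W3
  /PLOT BOXPLOT HISTOGRAM NPPLOT.

REGRESSION
  /DEPENDENT SumW2W3
  /METHOD=ENTER Age
  /RESIDUALS HISTOGRAM(ZRESID) NORMPROB(ZRESID)
  /SCATTERPLOT=(*ZRESID, *ZPRED).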

STEP 2: DESCRIPTIVE STATISTICS

GOAL: DISTRIBUTION AND MEANS

Use SPSS to calculate the number of males and females who participated in the study and their average age. Furthermore, compute the distribution of education level, the medians of the ordinal variables education level and A1, and the averages of the quasi-interval variables A2, A3, W1 and W2.

TECHNIQUE

Analyze > Descriptive Statistics > Frequencies. Put the variables gender, age, education level, A1, A2, A3, W1 and W2 in the right-hand box and click Statistics. Next, tick the boxes mean, median, std. deviation, minimum and maximum and click Continue. Then click Charts, tick the box Histograms (with normal curve), and click Continue and OK.
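With the command log switched on (see the introduction), this menu action is recorded roughly as follows; the variable names are assumed to match the dataset (the education variable appears as Education in the output below):

FREQUENCIES VARIABLES=Gender Age Education A1 A2 A3 W1 W2
  /STATISTICS=MEAN MEDIAN STDDEV MINIMUM MAXIMUM
  /HISTOGRAM NORMAL
  /ORDER=ANALYSIS.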

SPSS OUTPUT

Amount of male and female participants:

Gender
             Frequency   Percent   Valid Percent   Cumulative Percent
Valid man        107       47,6        47,6               47,6
      woman      118       52,4        52,4              100,0
      Total      225      100,0       100,0

This table shows that 107 participants were male (47,6%) and 118 were female (52,4%). Furthermore, it shows that everyone answered the gender question, since there is no row called "Missing".

Descriptive statistics:

Statistics
                   Age    Education   Gender    A1      A2      A3      W1      W2
N Valid            225       225        225     225     225     225     225     225
  Missing            0         0          0       0       0       0       0       0
Mean             41,26      2,47       1,52    3,61    4,22    4,52    3,83    3,81
Median           42,00      2,00       2,00    3,00    4,00    4,00    4,00    4,00
Std. Deviation  13,203      ,987       ,501   1,475   1,627   1,593   1,839   1,651
Minimum             20         1          1       1       1       1       1       1
Maximum             65         4          2       6       7       7       7       7

HOW TO REPORT DESCRIPTIVE STATISTICS

The table above shows the required descriptive statistics. Keep in mind that when you report a mean, you should also report the standard deviation (SD).

The sample consists of 107 (47,6%) men and 118 (52,4%) women, with an average age of 41 (M = 41,26, SD = 13,20). Table 1 shows the means of the variables A2, A3, W1 and W2. Furthermore, education level was approximately normally distributed (see Figure 1), ranging from primary school to Ph.D. level. The median of education level was 2,00 (secondary school) and the median of A1 was 3,00 (often).

[Figure 1. Distribution of education level]

Table 1. Descriptive statistics for variables A2, A3, W1 and W2
                      A2     A3     W1     W2
Mean                 4,22   4,52   3,83   3,81
Standard deviation   1,63   1,59   1,84   1,65

STEP 3: CORRELATIONS AND RELIABILITY ANALYSIS

Two constructs in the model (autonomy and job satisfaction) are measured with multiple questions. Before proceeding with the analysis, it is useful to check whether the questions correlate with each other and, if they do, to make a new sum variable that captures the whole construct in one score.

GOAL: CORRELATIONS BETWEEN VARIABLES

How are the questions measuring job satisfaction related? Can they be summed up? If yes, how would you do this?

TECHNIQUE: CORRELATION

First, check whether the questions measuring job satisfaction (W1, W2 and W3) correlate with each other. Use Analyze > Correlate > Bivariate, put the three variables in the right-hand box and press OK.

SPSS OUTPUT

Correlations
                           W1        W2        W3
W1  Pearson Correlation   1,000     -,103     -,089
    Sig. (2-tailed)                  ,122      ,186
    N                       225       225       225
W2  Pearson Correlation   -,103     1,000      ,897**
    Sig. (2-tailed)        ,122                ,000
    N                       225       225       225
W3  Pearson Correlation   -,089      ,897**   1,000
    Sig. (2-tailed)        ,186      ,000
    N                       225       225       225
**. Correlation is significant at the 0.01 level (2-tailed).

NB: A correlation table actually provides the same information twice, mirrored across the diagonal, so you only need half of the table to find your answer. The table shows that W2 does not correlate significantly with W1 (p is larger than 0,05; p = 0,122). W3 also does not seem to correlate with W1 (p = 0,186). However, W3 and W2 do correlate significantly (p is smaller than 0,05, namely p = 0,000).
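For reference, the log records this analysis roughly as:

CORRELATIONS
  /VARIABLES=W1 W2 W3
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.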

HOW TO REPORT THIS?

A correlation analysis showed that W1 and W2 did not correlate significantly (r = -0,103, p = 0,122). The variables W2 and W3 did correlate significantly (r = 0,897, p < .001): when people have more fun while working, they are more satisfied with their work, and vice versa. Considering you can only sum up variables that correlate significantly, it appears only W2 and W3 can be combined into a sum variable.

GOAL: TEST RELIABILITY

The final check you have to perform before computing a sum variable is Cronbach's alpha. This value should be as high as possible (ranging from 0 to 1); a value of at least 0,6 is required.

TECHNIQUE: CRONBACH'S ALPHA

Analyze > Scale > Reliability Analysis. Move the three variables of job satisfaction to the right-hand box and press Statistics. In the top right corner, tick the boxes under "Descriptives for": Item, Scale and Scale if item deleted. Press Continue and OK.
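In syntax this is roughly as follows (the scale name in /SCALE is free to choose):

RELIABILITY
  /VARIABLES=W1 W2 W3
  /SCALE('Job satisfaction') ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE
  /SUMMARY=TOTAL.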

SPSS OUTPUT

Reliability Statistics
Cronbach's Alpha   N of Items
      ,442              3

SPSS shows a Cronbach's alpha of 0,442 when combining all three questions for job satisfaction (W1, W2 and W3). However, the minimum required for summing the variables is a Cronbach's alpha of 0,6. Examining the rest of the output, the last column, "Cronbach's Alpha if Item Deleted", shows the value of Cronbach's alpha if one of the questions is omitted. For example, if you leave out W1, Cronbach's alpha shoots up to 0,945, well above the minimum of 0,6, which means W2 and W3 can be summed into one variable. However, when you delete W2, Cronbach's alpha drops to -0,193, and the same goes for deleting W3 (Cronbach's alpha = -0,229). Both of these options only reduce the value of 0,44 found before. We can conclude that W2 and W3 can be summed, but W1 cannot (the same conclusion we reached with the correlations).

Item-Total Statistics
      Scale Mean if   Scale Variance if   Corrected Item-     Cronbach's Alpha
      Item Deleted     Item Deleted       Total Correlation   if Item Deleted
W1       7,99             10,174              -,099                 ,945
W2       8,01              5,491               ,540                -,193 (a)
W3       7,64              5,481               ,563                -,229 (a)
a. The value is negative due to a negative average covariance among items. This violates reliability model assumptions. You may want to check item codings.

Important! A negative value for Cronbach's alpha means something went wrong. One possibility is that you included two variables with mirrored answer options (e.g. 1 means "never" for one variable, while 1 means "always" for the other). In order to calculate a valid Cronbach's alpha in this case, you should recode the variables so that they all use the same scale (also see: Recoding variables).

HOW TO REPORT THIS

Reliability analysis on the three questions measuring job satisfaction showed that W2 and W3 together had α = 0,945. When W1 was also included, α was reduced to 0,44. Therefore, a sum variable was computed using only W2 and W3.

GOAL: RECODING VARIABLES

How are the questions about autonomy related? Can they be summed into one variable? Several steps are important here. The variables A2 and A3 are interval while A1 is ordinal; therefore, only A2 and A3 can be summed, since you can only sum interval variables. However, the answer categories of A2 and A3 run in opposite directions (A2 starts with "never", while A3 starts with "always"). This means one of these questions has to be recoded so that the two can be summed.

TECHNIQUE: RECODE

Recode variable A3 in such a way that 1 = never and 7 = always.

Transform > Recode into Different Variables.

Important! NEVER choose "Recode into Same Variables": if something goes wrong, you will have lost your original data!

Move the variable you want to recode to the right-hand box, fill in the new name for your variable (RecA3) under "Output Variable" and press Change. Next, press "Old and New Values".

Since the old A3 had 1 = always and 7 = never, and the new A3 should have 1 = never and 7 = always, fill in 1 at "Old Value" and 7 at "New Value" and press Add. After that, fill in 2 at "Old Value" and 6 at "New Value" and press Add. Continue until you have recoded all seven levels, then press Continue and OK. You have now recoded the variable so that the new variable has 1 = never and 7 = always.

TECHNIQUE: CRONBACH'S ALPHA

To calculate the Cronbach's alpha of A2 and A3, follow the procedure described above, but instead of W1, W2 and W3 use the variables A2 and RecA3. SPSS provides the following output:

Reliability Statistics
Cronbach's Alpha   N of Items
      ,871              2

Cronbach's alpha is α = 0,871, which is higher than the minimum of 0,6. This means the variables A2 and RecA3 can be combined into one variable. For the other variables in the dataset you can use the same procedure described here.
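In syntax, the recode and the subsequent reliability check look roughly like this:

RECODE A3 (1=7) (2=6) (3=5) (4=4) (5=3) (6=2) (7=1) INTO RecA3.
EXECUTE.

RELIABILITY
  /VARIABLES=A2 RecA3
  /SCALE('Autonomy') ALL
  /MODEL=ALPHA.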

STEP 4: MAKING A NEW SUM VARIABLE

GOAL: MAKING A SUM VARIABLE

After you have found out which variables can be summed, you can let SPSS calculate the new variable for you. First this is done for the questions about job satisfaction, and after that for the questions about autonomy.

TECHNIQUE: SUMMING THE VARIABLES

To sum W2 and W3: Transform > Compute Variable. Fill in a new name for the variable you are about to make in the field "Target Variable", for example SumW2W3 (to clearly identify it as the sum of W2 and W3, which is also nice if someone else uses your dataset in the future). Next, indicate in the right-hand field that you want to sum the two variables, W2+W3, and click OK. After this you can see your new variable in the dataset (both in the Variable View and the Data View).

TECHNIQUE: AVERAGE

Make a new variable for the combination of A2 and RecA3 by taking the average of the two variables. Instead of summing the two (as was done above), the average is taken: first sum the variables and then divide by the number of variables (2 here). An advantage of this method compared to summing is that the 7-point scale is left intact, which makes the result easier to interpret.

Transform > Compute Variable.

Fill in the new name for the variable in the field "Target Variable", for example MeanA2A3 (the name used in the output later in this guide), fill in the formula for averaging the scores, (A2+RecA3)/2, and press OK. You can see your new variable in the dataset.
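The syntax equivalents of both compute steps are roughly:

COMPUTE SumW2W3 = W2 + W3.
COMPUTE MeanA2A3 = (A2 + RecA3) / 2.
EXECUTE.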

STEP 5: TECHNIQUE CHOICE

GOAL: CHOOSING THE ANALYSIS TECHNIQUE

To answer the following questions, the same steps are taken each time:

1. Which statistical technique is necessary? (You can use the tables in the appendix for this.)
2. How do you perform the technique in SPSS, and what does the most important output look like?
3. How do you report the results?

TECHNIQUE: BIG THREE

A method for choosing the right technique is the "Big Three": three questions about the data you are using. These questions will also be discussed later on.

1. How many variables are involved in the analysis?
   One: univariate analysis (descriptive statistics)
   Two: bivariate analysis (inferential statistics)
   More than two: multivariate analysis (inferential statistics)
2. What is the data type of the involved variable(s)?
   Independent variable (X): nominal, ordinal or interval (ratio)
   Dependent variable (Y): nominal, ordinal or interval (ratio)
3. Asymmetric vs. symmetric? (only for 2 or more variables)
   Asymmetric: the variables have different data types, or you want to explain the dependent variable based on the independent variable (causal relationship)
   Symmetric: there is no need for predicting causal relationships and the variables are of the same data type

HOW TO REPORT THIS?

It is important to know that directly copying SPSS output into your report is NOT allowed! You should take the values from the SPSS output and report them in your own words using the ABCD formula:

A. What was the goal? (= WHAT)
B. How did you do this? (= HOW)
C. Was the test significant? Report this in the right way! (= RESULT)
D. What can be concluded from the results? (= CONCLUSION)

In this formula, parts A and D must be written so that anyone can understand them, whereas parts B and C are aimed at people who know about statistics. The best way of reporting the analyses is described in more detail in the results sections below. Even though the letters A through D are used in those sections, you should never include them in your research paper.

STEP 6: THE CONTROL VARIABLES

GOAL: T-TEST INDEPENDENT SAMPLES

The next step is checking the influence of the control variables included in the model (in this example: age, gender, religion and education level) on the most important dependent variable (job satisfaction).

TECHNIQUE: INDEPENDENT SAMPLES T-TEST

Needed technique: the variable gender is nominal and job satisfaction is interval. Furthermore, gender consists of two categories, male and female (k = 2). Therefore, we use an independent samples t-test to find out whether gender influences job satisfaction.

Analyze > Compare Means > Independent-Samples T Test

Make sure you add the new sum variable of job satisfaction to the box "Test Variable(s)" and gender to the box "Grouping Variable". Press "Define Groups" and fill in a 1 for Group 1 and a 2 for Group 2. Press Continue and then OK.
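In the log this appears roughly as:

T-TEST GROUPS=Gender(1 2)
  /VARIABLES=SumW2W3
  /CRITERIA=CI(.95).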

SPSS OUTPUT

Group Statistics
Gender    N     Mean     Std. Deviation   Std. Error Mean
man      107   7,8037       3,31503            ,32048
woman    118   8,1525       3,07631            ,28320

Independent Samples Test (SumW2W3)
Levene's Test for Equality of Variances: F = ,791, Sig. = ,375
                                t       df       Sig.        Mean         Std. Error   95% CI      95% CI
                                                 (2-tailed)  Difference   Difference   Lower       Upper
Equal variances assumed       -,819     223        ,414       -,34880       ,42611     -1,18852    ,49092
Equal variances not assumed   -,816   216,551      ,416       -,34880       ,42767     -1,19174    ,49413

HOW TO REPORT THIS?

Remember, you cannot use these inconvenient tables in your results section. Explain the results in words:

A. In order to analyze whether the average job satisfaction of men differs from the average job satisfaction of women, (= WHAT)
B. we performed an independent samples t-test with gender and job satisfaction. (= HOW)
C. The independent samples t-test was not significant, t(223) = -0,82, p = 0,414. (= RESULT)
D. The average job satisfaction of men (M = 7,8, SD = 3,32) does not differ from the average job satisfaction of women (M = 8,1, SD = 3,08). (= CONCLUSION)

GOAL: LINEAR REGRESSION

TECHNIQUE

The variables age and job satisfaction are both measured at the interval level. Therefore, we use regression analysis to examine the influence of age on job satisfaction.

Analyze > Regression > Linear

Put the sum variable of job satisfaction in the "Dependent" field and age in the "Independent(s)" field, then press OK.
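The logged syntax is roughly:

REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT SumW2W3
  /METHOD=ENTER Age.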

SPSS OUTPUT

Model Summary
Model    R       R Square   Adjusted R Square   Std. Error of the Estimate
1       ,022a     ,001           -,004                   3,19600
a. Predictors: (Constant), Age

ANOVA(b)
Model           Sum of Squares    df    Mean Square     F      Sig.
1  Regression         1,142        1       1,142       ,112    ,738a
   Residual        2277,818      223      10,214
   Total           2278,960      224
a. Predictors: (Constant), Age
b. Dependent Variable: SumW2W3

Coefficients(a)
Model           Unstandardized B   Std. Error   Standardized Beta      t       Sig.
1  (Constant)        8,210            ,701                           11,719    ,000
   Age               -,005            ,016           -,022            -,334    ,738
a. Dependent Variable: SumW2W3

HOW TO REPORT THIS

A. In order to analyze whether the age of employees influences their job satisfaction, (= WHAT)
B. we performed a regression analysis with job satisfaction regressed on age. (= HOW)
C. The regression analysis was not significant, R² = 0,001, F(1, 223) = 0,11, p = 0,738. (= RESULT)
D. The age of employees does not influence job satisfaction, B = -0,005, t = -0,33, p = 0,738. (= CONCLUSION)

Important! Whenever you use more independent variables (e.g. four) in a so-called multivariate linear regression, the F-test (in the ANOVA table) displays the overall significance of the influence of all these variables together on the dependent variable. If you want to know which variables have a specific effect, you can find this information in the Coefficients table. It is possible, for example, that only two of the four regressors have a significant effect while still producing an overall significant F-test. Because this case considers only one independent variable, the significance of the F-test matches the significance of our sole predictor.

GOAL: ONE-WAY ANOVA

TECHNIQUE

The variable education level is ordinal while job satisfaction is interval. Unfortunately, there is no statistical test available for this exact combination of variables. In this case it is easiest to treat the variable education level as nominal (remember, you can always move to a lower data type, but never from low to high). This gives an independent variable with more than two categories, which leads to a one-way ANOVA.

Analyze > Compare Means > One-Way ANOVA

Input the sum variable of job satisfaction in the "Dependent List" and education level in "Factor". Tick the box "Descriptive" under Options, which provides the means of all levels of the independent variable.
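In syntax, roughly:

ONEWAY SumW2W3 BY Education
  /STATISTICS DESCRIPTIVES.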

SPSS OUTPUT

ANOVA (SumW2W3)
                 Sum of Squares    df    Mean Square    F      Sig.
Between Groups        17,016        3       5,672      ,554    ,646
Within Groups       2261,944      221      10,235
Total               2278,960      224

Descriptives (SumW2W3)
                    N    Mean   Std. Deviation   Std. Error   95% CI Lower   95% CI Upper   Min   Max
Primary school     43    8,35       3,309           ,505          7,33           9,37        2     14
Secondary school   72    8,01       3,270           ,385          7,25           8,78        2     14
College            72    7,62       2,971           ,350          6,93           8,32        2     13
University         38    8,21       3,354           ,544          7,11           9,31        2     14
Total             225    7,99       3,190           ,213          7,57           8,41        2     14

HOW TO REPORT THIS?

A. In order to analyze whether the job satisfaction of employees differs per education level, (= WHAT)
B. we performed a one-way ANOVA of education level on job satisfaction. (= HOW)
C. This one-way ANOVA was not significant, F(3, 221) = 0,55, p = 0,65. (= RESULT)
D. The education level of employees does not influence their job satisfaction. (= CONCLUSION)

GOAL: ONE-WAY ANOVA (2)

TECHNIQUE

Needed technique: the variable religion is nominal (with more than two categories, so k > 2) and job satisfaction is interval. Therefore, we use a one-way ANOVA to find out whether religion influences job satisfaction.

Analyze > Compare Means > One-Way ANOVA.

Input the sum variable of job satisfaction in the "Dependent List" and religion in "Factor". Tick the box "Descriptive" under Options, which displays the group means, then press OK.

SPSS OUTPUT

Descriptives (SumW2W3)
            N    Mean   Std. Deviation   Std. Error   95% CI Lower   95% CI Upper   Min   Max
reformed    14   8,21       3,017           ,806          6,47           9,96        3     14
catholic    28   7,93       3,344           ,632          6,63           9,23        2     14
hindu       48   8,44       3,814           ,551          7,33           9,55        2     14
muslim      48   8,15       2,552           ,368          7,40           8,89        3     13
jewish      46   8,11       3,261           ,481          7,14           9,08        2     14
atheist     28   7,61       3,095           ,585          6,41           8,81        2     13
other       13   6,00       2,309           ,641          4,60           7,40        2     10
Total      225   7,99       3,190           ,213          7,57           8,41        2     14

ANOVA (SumW2W3)
                 Sum of Squares    df    Mean Square     F       Sig.
Between Groups        67,819        6      11,303      1,114     ,355
Within Groups       2211,141      218      10,143
Total               2278,960      224

HOW TO REPORT THIS

A. In order to analyze whether the job satisfaction of employees differs per religion, (= WHAT)
B. we performed a one-way ANOVA of religion on job satisfaction. (= HOW)
C. This one-way ANOVA was not significant, F(6, 218) = 1,11, p = 0,355. (= RESULT)
D. The religion of employees does not influence their job satisfaction. (= CONCLUSION)

STEP 7: TESTING THE HYPOTHESIS

After checking the control variables, it is now time to examine the effect of the independent variable on the dependent variable, using the hypothesis stated earlier. Unfortunately, due to a mistake in an earlier version of this SPSS guide involving the conceptual model and the dataset, the model itself does not offer a suitable combination for testing the influence of a nominal variable on an ordinal variable. Therefore, an example that was NOT included in the conceptual model will be used to show how this test is performed, even though there is no theoretical grounding for this research question. The variables referred to here are religion and autonomy: does religion influence the degree of autonomy people experience during their job? One of the questions for autonomy (A1) is ordinal and religion is nominal with more than two categories (k = 7). Therefore, a Kruskal-Wallis test has to be performed.

GOAL: KRUSKAL-WALLIS TEST

What is the effect of religion on the variable "How often is it necessary to explain yourself beforehand to a superior about the tasks that have to be performed?" (A1)?

TECHNIQUE

The variable religion is nominal (k > 2) and autonomy item A1 is ordinal. Therefore, the Kruskal-Wallis test is used to determine the influence of religion on autonomy.

Analyze > Nonparametric Tests > K Independent Samples

Put the autonomy variable A1 in the "Test Variable List" and religion in "Grouping Variable", then click "Define Range". Because religion has seven categories, fill in 1 at minimum and 7 at maximum. Then press Continue and OK.
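In syntax, roughly (the range 1 7 corresponds to the seven religion categories):

NPAR TESTS
  /K-W=A1 BY Religion(1 7)
  /MISSING ANALYSIS.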

SPSS OUTPUT

Test Statistics(a,b)
                A1
Chi-Square     2,520
df               6
Asymp. Sig.     ,866
a. Kruskal Wallis Test
b. Grouping Variable: religion

HOW TO REPORT THIS

A. In order to analyze whether people with different religions differ in the degree of autonomy they experience, (= WHAT)
B. we performed a Kruskal-Wallis test of religion on autonomy. (= HOW)
C. This Kruskal-Wallis test was not significant, Chi-Square(6) = 2,52, p = 0,866. (= RESULT)
D. People with different religions do not differ in experienced autonomy. (= CONCLUSION)

NB: When the test is significant, you can obtain frequency tables of the observed autonomy per religion through the Crosstabs option; you can report these to show the significant difference.

GOAL: LINEAR REGRESSION (2)

What is the effect of the combined autonomy variable on job satisfaction?

TECHNIQUE

The combined autonomy variable (MeanA2A3) and job satisfaction are both interval variables; therefore, we can use a regression analysis to examine the influence of autonomy on job satisfaction, following the same procedure as before.

Analyze > Regression > Linear

SPSS OUTPUT

Model Summary
Model    R       R Square   Adjusted R Square   Std. Error of the Estimate
1       ,919a     ,844            ,843                   1,26341
a. Predictors: (Constant), MeanA2A3

ANOVA(b)
Model           Sum of Squares    df    Mean Square       F         Sig.
1  Regression       1923,009       1     1923,009     1204,746      ,000a
   Residual          355,951     223        1,596
   Total            2278,960     224
a. Predictors: (Constant), MeanA2A3
b. Dependent Variable: SumW2W3

Coefficients(a)
Model           Unstandardized B   Std. Error   Standardized Beta      t        Sig.
1  (Constant)         ,545            ,230                            2,364     ,019
   MeanA2A3          1,934            ,056            ,919           34,709     ,000
a. Dependent Variable: SumW2W3

HOW TO REPORT THIS?

A. In order to analyze whether higher autonomy leads to higher job satisfaction, (= WHAT)
B. we performed a regression analysis of autonomy on job satisfaction. (= HOW)
C. The results of this regression, R² = 0,84, F(1, 223) = 1204,75, p < .001, reveal a significant effect. There is a positive relationship between autonomy and job satisfaction, B = 1,93, t(223) = 34,71, p < .001. (= RESULT)
D. Higher autonomy of employees does lead to significantly higher job satisfaction. (= CONCLUSION)

GOAL: PAIRED SAMPLES T-TEST

Imagine that Solar Energy Ltd. decided to raise the autonomy of its employees based on past research within the company. A year after these changes took place, job satisfaction was measured again. Test whether job satisfaction has risen over the past year. Mind you, the measurements of the second year have not yet been summed; make sure this is allowed.

TECHNIQUE

Before we can compare the original sum variable of job satisfaction with the new one, a sum variable for the second year has to be made (in the same way as the first one). Check whether W2later and W3later correlate significantly (which is in fact true, see the table below) and whether Cronbach's alpha is sufficiently high to sum the variables (which is also true; Cronbach's alpha is 0,974, see the table below). Next, sum the variables in the same way as before: W2later+W3later.

SPSS OUTPUT

Correlations
                                W2later    W3later
W2later  Pearson Correlation     1,000       ,950**
         Sig. (2-tailed)                     ,000
         N                         225        225
W3later  Pearson Correlation      ,950**    1,000
         Sig. (2-tailed)          ,000
         N                         225        225
**. Correlation is significant at the 0.01 level (2-tailed).

Reliability Statistics
Cronbach's Alpha   N of Items
      ,974              2

TECHNIQUE

We are dealing with two related samples that have to be compared, because every person answered the same questions twice. Each person's first answers are related to his or her second answers, which means the samples are related and we have to use the paired samples t-test.

Analyze > Compare Means > Paired-Samples T Test

Drag the old job satisfaction variable (SumW2W3) to Variable 1 and the new job satisfaction variable (SumW2laterW3later) to Variable 2, then press OK.

NB: You could input additional pairs of variables here if you wanted to; however, since we only have one pair, this is not relevant for this analysis.
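In syntax, the second-year sum variable and the paired test look roughly like this:

COMPUTE SumW2laterW3later = W2later + W3later.
EXECUTE.

T-TEST PAIRS=SumW2W3 WITH SumW2laterW3later (PAIRED)
  /CRITERIA=CI(.95).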

SPSS OUTPUT

Paired Samples Statistics
                            Mean     N    Std. Deviation   Std. Error Mean
Pair 1  SumW2W3             7,99    225       3,190             ,213
        SumW2laterW3later   9,90    225       3,042             ,203

Paired Samples Test (Pair 1: SumW2W3 - SumW2laterW3later)
Mean difference: -1,91111; Std. Deviation: ,43416; Std. Error Mean: ,02894;
95% CI of the Difference: -1,96815 to -1,85407; t = -66,028; df = 224; Sig. (2-tailed) = ,000

HOW TO REPORT THIS?

A. In order to analyze whether job satisfaction has risen over the last year, (= WHAT)
B. we performed a paired samples t-test on the original job satisfaction variable and the new one. (= HOW)
C. The paired samples t-test was significant, t(224) = -66,03, p < .001. (= RESULT)
D. Job satisfaction did rise significantly, from M = 7,99 to M = 9,90 a year later. (= CONCLUSION)*

*Note that this test cannot prove that the increase is due to the increase in autonomy! Another test, including both measurement moments of autonomy, would have to be performed for that.

GOAL: CROSS-TABLE WITH CHI-SQUARE

Imagine another question was posed regarding job satisfaction, namely "Are you satisfied with your job?" (1 = yes, 2 = no), and we want to find out whether gender influences the answer to this question. Perform the appropriate analysis and report the results.

TECHNIQUE

The relevant variables are both nominal; therefore, a chi-square test with a cross table has to be used.

Analyze > Descriptive Statistics > Crosstabs

Drag gender to "Row(s)" and T (the new question about job satisfaction) to "Column(s)". Press Statistics in the top right and tick the box Chi-square. Press Continue, then Cells, and tick the boxes Row and Column under the Percentages heading. Then press Continue and OK.
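The logged syntax is roughly:

CROSSTABS
  /TABLES=Gender BY T
  /STATISTICS=CHISQ
  /CELLS=COUNT ROW COLUMN.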

SPSS OUTPUT

Gender * T Crosstabulation
                             T = 1 (yes)   T = 2 (no)   Total
man     Count                    101            6         107
        % within gender         94,4%          5,6%      100,0%
        % within T              87,8%          5,5%       47,6%
woman   Count                     14          104         118
        % within gender         11,9%         88,1%      100,0%
        % within T              12,2%         94,5%       52,4%
Total   Count                    115          110         225
        % within gender         51,1%         48,9%      100,0%
        % within T             100,0%        100,0%      100,0%

Chi-Square Tests
                                  Value     df   Asymp. Sig.   Exact Sig.   Exact Sig.
                                                 (2-sided)     (2-sided)    (1-sided)
Pearson Chi-Square              152,954a     1      ,000
Continuity Correction(b)        149,669      1      ,000
Likelihood Ratio                179,621      1      ,000
Fisher's Exact Test                                                ,000         ,000
Linear-by-Linear Association    152,274      1      ,000
N of Valid Cases                    225
a. 0 cells (,0%) have an expected count less than 5. The minimum expected count is 52,31.
b. Computed only for a 2x2 table.

NB: If a value in an SPSS table is shown with an E in it (scientific notation), double-click the table in the SPSS output and widen the "Value" column a bit.

HOW TO REPORT THIS?

A. In order to analyze whether men and women differ concerning their job satisfaction, (= WHAT)
B. we performed a cross table with chi-square for gender and job satisfaction (1 = yes, 2 = no). (= HOW)
C. The chi-square test was significant, Chi-Square(1) = 152,95, p < .001. (= RESULT)
D. Men are more often satisfied with their job (94,4%) than women (11,9%). (= CONCLUSION)

GOAL: MANN-WHITNEY U-TEST

Imagine one of the researchers working on this report is bored and wants to know whether gender influences education level. Even though this relationship is not in the conceptual model and he has no theory to suspect a difference (or what that difference would look like), he still decides to check. Therefore, we have to find out whether men and women differ in their education level.

TECHNIQUE

We are dealing with a nominal independent variable (gender, k = 2) and an ordinal dependent variable (education level), which means a Mann-Whitney U-test is appropriate.

Analyze > Nonparametric Tests > 2 Independent Samples

Input education level in the "Test Variable List" box and gender in the "Grouping Variable" box. Press "Define Groups" and indicate that Group 1 equals 1 and Group 2 equals 2. Press Continue and then OK.
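In syntax, roughly:

NPAR TESTS
  /M-W=Education BY Gender(1 2)
  /MISSING ANALYSIS.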

SPSS OUTPUT

Ranks (Education level)
Gender    N    Mean Rank   Sum of Ranks
1        107     105,26      11263,00
2        118     120,02      14162,00
Total    225

Test Statistics(a) (Education level)
Mann-Whitney U            5485,000
Wilcoxon W               11263,000
Z                           -1,768
Asymp. Sig. (2-tailed)       ,077
a. Grouping Variable: gender

The test shows that the mean ranks appear to differ from each other (105 vs 120), but at α = ,05 the difference is not significant. Since a trend can be observed in this case, you could describe the effect as marginally significant. NB: p-values above ,10 cannot be described as such.

Since it is not possible to obtain the medians of both groups (the median of the men and the median of the women) in this menu, you have to determine these yourself in SPSS. To do this, first select only the male subjects in your dataset. Go to Data > Select Cases. Press the button "If condition is satisfied" under "Select" and press "If" next. Indicate that you only want to select people who scored 1 on the variable gender (gender = 1) and press Continue, then OK. NB: Always make sure you choose "Filter out unselected cases" (an option in the Output section), which makes sure the unselected cases are not deleted but simply kept out of the analyses.

You can now get the median of education level through Analyze > Descriptive Statistics > Frequencies. Since only men are selected at the moment, the median applies only to the male group; it turns out to be 2 (secondary school). To determine the median of the women, go back to Data > Select Cases > If... and indicate that you now want to select people who scored 2 (gender = 2); press Continue, then OK. If you ask SPSS for the median through Frequencies again, the median of the female group is displayed, which is 3 (college).
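The same selection can be made in syntax with a filter variable (filter_$ is simply the conventional name SPSS itself generates for this):

USE ALL.
COMPUTE filter_$ = (Gender = 1).
FILTER BY filter_$.
EXECUTE.

FREQUENCIES VARIABLES=Education
  /STATISTICS=MEDIAN.

* Repeat with (Gender = 2) for the women; afterwards switch the filter off.
FILTER OFF.
USE ALL.
EXECUTE.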

HOW TO REPORT THIS?

A. In order to analyze whether men and women differ regarding their education level, (= WHAT)
B. we performed a Mann-Whitney U-test. (= HOW)
C. This test was marginally significant, MWU = 5485, p = ,077. (= RESULT)
D. Men have a lower education level (mean rank = 105,3; median = secondary school) than women (mean rank = 120,0; median = college). (= CONCLUSION)

GOAL: MANOVA

If you want to examine multiple dependent variables at once as a researcher, you can use the Multivariate Analysis of Variance (MANOVA). This is especially useful when one or more of your variables are not interval, and it can be used to answer questions like: do job satisfaction and autonomy of employees differ per age category?

TECHNIQUE: MANOVA

We are dealing with two dependent variables and one independent variable; all variables are measured at the interval level.

Analyze > General Linear Model > Multivariate

Put the dependent variables SumW2W3 and MeanA2A3 in "Dependent Variables" and age in "Fixed Factor(s)". Press Model and tick "Full factorial" with Sum of squares "Type III", then press Continue. Press Options; in order to get extra means with the analyses, tick "Descriptive statistics", "Parameter estimates" and "Homogeneity tests". Press Continue, then OK.
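In syntax, roughly:

GLM SumW2W3 MeanA2A3 BY Age
  /METHOD=SSTYPE(3)
  /PRINT=DESCRIPTIVE PARAMETER HOMOGENEITY
  /DESIGN=Age.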

SPSS OUTPUT

Multivariate Tests(c)
Effect                              Value       F        Hypothesis df   Error df    Sig.
Intercept  Pillai's Trace            ,845    483,792a        2,000        178,000    ,000
           Wilks' Lambda             ,155    483,792a        2,000        178,000    ,000
           Hotelling's Trace        5,436    483,792a        2,000        178,000    ,000
           Roy's Largest Root       5,436    483,792a        2,000        178,000    ,000
Age        Pillai's Trace            ,318       ,752        90,000        358,000    ,948
           Wilks' Lambda             ,705       ,754a       90,000        356,000    ,946
           Hotelling's Trace         ,384       ,755        90,000        354,000    ,945
           Roy's Largest Root        ,251      1,000b       45,000        179,000    ,481
a. Exact statistic
b. The statistic is an upper bound on F that yields a lower bound on the significance level.
c. Design: Intercept + Age

Tests of Between-Subjects Effects
Source            Dependent Variable   Type III Sum of Squares    df    Mean Square      F        Sig.
Corrected Model   SumW2W3                      269,302a            45       5,984        ,533     ,993
                  MeanA2A3                      64,011b            45       1,422        ,565     ,987
Intercept         SumW2W3                    10405,476              1   10405,476     926,815     ,000
                  MeanA2A3                    2379,835              1    2379,835     945,908     ,000
Age               SumW2W3                      269,302             45       5,984        ,533     ,993
                  MeanA2A3                      64,011             45       1,422        ,565     ,987
Error             SumW2W3                     2009,658            179      11,227
                  MeanA2A3                     450,351            179       2,516
Total             SumW2W3                    16631,000            225
                  MeanA2A3                    3847,500            225
Corrected Total   SumW2W3                     2278,960            224
                  MeanA2A3                     514,362            224
a. R Squared = ,118 (Adjusted R Squared = -,104)

HOW TO REPORT THIS?

When interpreting a MANOVA, first look at the multivariate effects. These describe whether the independent variables have an effect on the set of dependent variables as a whole. Four different statistics are used to assess this (Pillai's Trace, Wilks' Lambda, Roy's Largest Root and Hotelling's Trace), each based on a slightly different calculation. Normally it is best to use Wilks' Lambda, because it shows the amount of unexplained variance (the opposite of R²). Only if a multivariate effect is found are the specific univariate effects relevant. You can examine these in the table with univariate results that automatically follows the multivariate table, looking at the relevant independent variables there.

A. In order to analyze whether the degree of job satisfaction and autonomy of employees differs per age, (= WHAT)
B. we performed a multivariate analysis of variance. (= HOW)
C. This test was not significant, F(90, 356) = 0,754 (Wilks' Lambda), p = 0,946. (= RESULT)
D. Apparently there is no difference in job satisfaction and autonomy for employees of different ages. Therefore, it is unnecessary to interpret the univariate analyses. (= CONCLUSION)

GOAL: TWO-WAY ANOVA

Besides the one-way ANOVA there is also the two-way ANOVA, which is used when you want to perform an analysis of variance with two independent variables. In this case, autonomy and need for structure were chosen as the independent variables; both are split at the median ("median split"). The dependent variable is, again, job satisfaction. The question is whether there is a difference in job satisfaction between employees with high or low autonomy and employees with a high or low need for structure.

TECHNIQUE: SPLITTING

This test is often used when you have two nominal independent variables. An example could be an experiment in which you manipulate power (high vs. low) and valence (win vs. lose) and want to find out how these factors influence negotiating behavior. In such a case it is unnecessary to split, since the variables are already nominal! Here, however, the two independent variables need to be split at their medians. The median can be found through Descriptives, and splitting goes as follows (see also the syntax sketch below):

Transform > Compute Variable

Indicate how you want to name the new variable in "Target Variable" and press "If". This allows you to specify which cases should receive a particular value. Press Continue and provide the new value for the variable. After splitting both variables into two groups, the two-way ANOVA can be performed.
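A sketch of the median split in syntax. The cut-off value 4 is a placeholder (use the actual medians you found through Descriptives), and SumS stands for a combined need-for-structure variable; how S1-S3 were combined is not shown in this guide, so that name is hypothetical. SplitA and SplitS are the names used in the output below:

* Median split for autonomy (replace 4 with the actual median of MeanA2A3).
IF (MeanA2A3 LE 4) SplitA = 1.
IF (MeanA2A3 GT 4) SplitA = 2.
* Median split for need for structure (SumS is a hypothetical combined variable; replace 4 with its median).
IF (SumS LE 4) SplitS = 1.
IF (SumS GT 4) SplitS = 2.
EXECUTE.
VALUE LABELS SplitA 1 'Low Autonomy' 2 'High Autonomy'
  /SplitS 1 'Low Need for Structure' 2 'High Need for Structure'.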


TECHNIQUE: TWO-WAY ANOVA

Analyze > General Linear Model > Univariate

Input the dependent and independent variables and press Model. Specify the model as "Full factorial" with Sum of squares "Type III", then press Continue.

Tick the box "Descriptive statistics" under Options so the marginal and cell means are displayed. Then press Contrasts and change the factors to simple contrasts; don't forget to press Change, and press Continue. There are several contrasts you can use here; more information about them can be found in advanced statistics books. Next, indicate in the Plots section that you want a plot of the main effects as well as the interaction effect. Then press Continue and OK.
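In syntax, roughly (with the simple contrasts as set in the Contrasts dialog):

GLM SumW2W3 BY SplitA SplitS
  /METHOD=SSTYPE(3)
  /CONTRAST(SplitA)=SIMPLE
  /CONTRAST(SplitS)=SIMPLE
  /PLOT=PROFILE(SplitA SplitS SplitA*SplitS)
  /PRINT=DESCRIPTIVE
  /DESIGN=SplitA SplitS SplitA*SplitS.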

SPSS OUTPUT

Descriptive Statistics (Dependent Variable: SumW2W3)
SplitA          SplitS                     Mean   Std. Deviation    N
Low Autonomy    Low Need for Structure     5,26       1,866         57
                High Need for Structure    5,33       1,748         54
                Total                      5,30       1,802        111
High Autonomy   Low Need for Structure     8,37        ,496         19
                High Need for Structure   11,05       1,525         95
                Total                     10,61       1,728        114
Total           Low Need for Structure     6,04       2,119         76
                High Need for Structure    8,98       3,191        149
                Total                      7,99       3,190        225

Tests of Between-Subjects Effects (Dependent Variable: SumW2W3)
Source            Type III Sum of Squares    df    Mean Square       F        Sig.
Corrected Model         1698,749a             3      566,250      215,682     ,000
Intercept               9081,339              1     9081,339     3459,048     ,000
SplitA                   784,849              1      784,849      298,946     ,000
SplitS                    76,463              1       76,463       29,124     ,000
SplitA * SplitS           68,869              1       68,869       26,232     ,000
Error                    580,211            221        2,625
Total                  16631,000            225
Corrected Total         2278,960            224
a. R Squared = ,745 (Adjusted R Squared = ,742)

Test Results (Dependent Variable: SumW2W3)
Source     Sum of Squares    df    Mean Square       F        Sig.
Contrast       784,849        1      784,849      298,946     ,000
Error          580,211      221        2,625

HOW TO REPORT THIS?

Before you report anything about the main effects in a two-way design, it is important to first look at the interaction effect. When the two lines in the plot cross each other, an interaction effect is usually present. Here this means the effect of autonomy on job satisfaction depends on the employees' need for structure: employees with high autonomy and a high need for structure have higher job satisfaction than employees with high autonomy and a low need for structure. Use the cell means to describe this effect!

A. In order to analyze the influence of autonomy and need for structure on the job satisfaction of employees, (= WHAT)
B. we performed a 2 (autonomy: low vs. high) x 2 (need for structure: low vs. high) ANOVA on job satisfaction. (= HOW)
C. Both main effects proved significant. High autonomy leads to higher job satisfaction (M = 10,61) than low autonomy (M = 5,30), F(1, 221) = 298,95, p < 0,001. Also, a high need for structure leads to more job satisfaction (M = 8,98) than a low need for structure (M = 6,04), F(1, 221) = 29,12, p < 0,001. The interaction effect also appeared significant, F(1, 221) = 26,23, p < 0,001. For people with low autonomy, it does not matter whether they have a high need for structure: in both cases job satisfaction is low (M low need for structure = 5,26 and M high need for structure = 5,33). For people with high autonomy, however, need for structure does make a difference regarding their job satisfaction: especially people with a high need for structure are very satisfied with their job (M = 11,05) compared to people with a low need for structure (M = 8,37). (= RESULT)

D. The most important conclusion is that need for structure does not matter for people with a low score for autonomy. For people with a high score on autonomy, however, this effect is present: especially people with a high need for structure are very satisfied with their job. (= CONCLUSION)

[Figure 1. Autonomy and need for structure on job satisfaction: line plot of mean job satisfaction (y-axis, 0-12) for low vs. high autonomy (x-axis), with separate lines for low and high need for structure.]

APPENDIX

TABLES FOR UNIVARIATE AND BIVARIATE ANALYSES

Univariate analyses
                    Nominal   Ordinal                     Interval
Central tendency    Mode      Median                      Mean
Distribution        -         Range (not very precise)    Standard deviation

Bivariate analyses: symmetric vs. asymmetric
Asymmetric when: 1) the variables have different measurement scales, OR 2) you predict the DV based on the IV.

Symmetric bivariate analyses (X x X)
            Nominal                              Ordinal                                  Interval
Nominal     Cross table with chi-square
            test of independence
Ordinal                                          Spearman rank correlation coefficient
Interval                                                                                  Pearson correlation coefficient