INTRODUCTION TO SPSS FOR WINDOWS Version 19.0
Winter 2012
Contents

Purpose of handout & Compatibility between different versions of SPSS .. 1
SPSS window & menus .. 1
Getting data into SPSS & Editing data .. 3
Reading an SPSS viewer/output (.spv) file & Editing your output .. 7
Saving data as an SPSS data (.sav) file
Saving your output (statistical results and graphs) .. 9
Exporting SPSS output .. 10
Printing your work & Exiting SPSS .. 11
Running SPSS using syntax or command language (.sps files) .. 12
Displaying variable names or variable labels .. 13

Creating and Recoding Variables
    Creating a new variable .. 14
    Recoding or combining categories of a variable .. 15
    Example: Recoding a categorical variable .. 15
    Example: Creating an indicator or dummy variable .. 17

Summarizing your data
    Frequency tables (& bar charts) for categorical variables .. 20
    Contingency tables for categorical variables .. 21
    Descriptive statistics (& histograms) for numerical variables .. 22
    Descriptive statistics (& boxplots) by groups for numerical variables .. 24
    Using the Split File option for summaries by groups .. 26
    Using the Select Cases option for summaries for a subgroup of subjects/observations .. 27

Graphing your data
    Bar chart .. 28
    Histogram & Boxplot .. 29
    Normal probability plot .. 30
    Error bar plot .. 31
    Scatter plot .. 32
    Adding a line or loess smooth to a scatter plot .. 32
    Stem-and-leaf plot .. 33

Hypothesis tests & Confidence intervals
    One-sample t test & Confidence interval for a mean .. 34
    Paired t test & Confidence interval for the difference between means .. 37
    Two-sample t test & Confidence interval for the difference between means .. 39
    Sign test and Wilcoxon signed rank test
    Mann-Whitney U test (or Wilcoxon rank sum test)
    One-way ANOVA (Analysis of variance) & Post-hoc tests
    Kruskal-Wallis test
    One-sample binomial test
    McNemar's test .. 53
    Chi-square test for contingency tables .. 55
    Fisher's exact test
    Trend test for contingency tables/ordinal variables
    Binomial, McNemar's, Chi-square and Fisher's exact tests using summary data
    Confidence interval for a proportion .. 63

Correlation & Regression
    Pearson and Spearman rank correlation coefficient
    Linear regression .. 68
    Linear regression via ANOVA commands .. 76
    Logistic regression .. 80
Purpose of handout

IBM SPSS Statistics (or SPSS) provides a powerful statistical and data management system in a graphical environment. The user interfaces make statistical analysis more accessible for casual users and more convenient for experienced users. Most tasks can be accomplished simply by pointing and clicking the mouse.

The objective of this handout is to get you oriented with SPSS for Windows. It teaches you how to enter and save data in SPSS, how to edit and transform data, how to explore your data by producing graphics and summary descriptives, and how to use pointing and clicking to run statistical procedures.

Compatibility between different versions of SPSS and PASW Statistics

SPSS data files (files ending in .sav) and syntax (command) files (files ending in .sps) are compatible between different versions of SPSS (at least, versions 11.0 or newer). However, SPSS viewer/output files (files ending in .spv) are NOT compatible between different versions. One option for avoiding compatibility problems between different versions of SPSS is to export your output using an HTML or MS Word format. The compatibility between Windows and Mac versions of SPSS is also limited.

SPSS Windows & Menus

An overview of the SPSS windows, menus, toolbars, and dialog boxes is given in the SPSS Tutorials under Help. You can also find information under Topics, Case Studies, Statistics Coach, and Command & Syntax (if you are using syntax commands).

Window Types

Data Editor. When you start an SPSS session, you usually see the Data Editor window (otherwise you will see a Viewer window). The Data Editor displays the contents of the working data file. There are two views in the Data Editor window: 1) Data View, which displays the data in a spreadsheet format with variable names listed as column headings, and 2) Variable View, which displays information about the variables in your data set.
In the Data View you can edit or enter data, and in the Variable View you can change the format of a variable, add format and variable labels, etc.

Viewer (Output). Statistical results and graphs are displayed in the Viewer window. The (output) Viewer window is divided into two panes. The right-hand pane contains all the output and the left-hand pane contains a tree structure of the results. You can use the left-hand pane for navigating through, editing, and printing your results.

Chart Editor. The Chart Editor is used to edit graphs. When you double-click on a figure or graph, it will reappear in a Chart Editor window.
Syntax Editor. The Syntax Editor is used to create SPSS command syntax for use with the SPSS production facility. Usually you will be using the point-and-click facilities of SPSS, and hence you will not need to use the Syntax Editor. More information about the Syntax Editor and SPSS syntax is given in the SPSS Help Tutorials under Working with Syntax. A few instructions to get you started are given later in this handout in the section Running SPSS using Syntax (or Command Language).

Menus

Data Editor Menu:

File. Use the File menu to create a new SPSS file, open an existing file, or read in spreadsheet or database files created by other software programs (e.g., Excel).
Edit. Use the Edit menu to modify or copy data and output files.
View. Choose which buttons are available in the window or how the window should look.
Data. Use the Data menu to make changes to SPSS data files, such as merging files, transposing variables, or creating subsets of cases for subset analysis.
Transform. Use the Transform menu to make changes to selected variables in the data file (e.g., to recode a variable) and to compute new variables based on existing variables.
Analyze. Use the Analyze menu to select the various statistical procedures you want to use, such as descriptive statistics, cross-tabulation, hypothesis testing, and regression analysis.
Graphs. Use the Graphs menu to display the data using bar charts, histograms, scatterplots, boxplots, or other graphical displays. All graphs can be customized with the Chart Editor.
Utilities. Use the Utilities menu to view variable labels for each variable.
Add-ons. Information about other SPSS software.
Window. Choose which window you want to view.
Help. Index of help topics, tutorials, SPSS home page, Statistics Coach, and version of SPSS.

Viewer Menu: The menu is similar to the Data Editor menu, but has two additional options:

Insert. Use the Insert menu to edit your output.
Format. Use the Format menu to change the format of your output.

Chart Editor Menu: Use SPSS Help to learn more about the Chart Editor.

Toolbars
Most Windows applications provide buttons arranged along the top of a window that act as shortcuts for executing various functions. In SPSS, you will find such buttons (icons) at the top of the Data Editor, Viewer, Chart Editor, and Syntax windows. The icons are usually symbolic representations of the procedure they execute when pushed; unfortunately, their meanings are not intuitively obvious until one has already used them. Hence, the best way to learn these buttons is to use them and note what happens.

The Status Bar

The Status Bar runs along the bottom of a window and alerts the user to the status of the system. Typical messages one will see are Processor is ready and Running procedure. The Status Bar will also provide up-to-date information concerning special manipulations of the data file, such as whether only certain cases are being used in an analysis or whether the data have been weighted according to the value of some variable.

File Types

Data Files. A file with an extension of .sav is assumed to be a data file in SPSS for Windows format. A file with an extension of .por is a portable SPSS data file. The contents of a data file are displayed in the Data Editor window.
Viewer (Output) Files. A file with an extension of .spv is assumed to be a Viewer file containing statistical results and graphs.
Syntax (Command) Files. A file with an extension of .sps is assumed to be a Syntax file containing SPSS syntax and commands.

Getting Data into SPSS & Editing Data

When you read or edit data in SPSS, the data will be displayed in the Data Editor window. An overview of the basic structure of an SPSS data file is given in the SPSS Help Tutorials:

1. Choose Help on the menu bar
2. Choose Tutorial
3. Choose Reading Data

Reading Data from an SPSS Data (.sav) File

To read a data file from your computer/floppy disk/flash drive that was created and saved using SPSS (the filename should end with the suffix .sav), either use the opening dialog box:

1. Choose Open an existing data source
2. Double click on the filename, or single click on the filename and choose OK

or use the File menu:

1. Choose Cancel (to close the opening dialog box)
2. Choose File on the menu bar
3. Choose Open
4. Choose Data
5. Edit the directory or disk drive to indicate where the data is located
6. Double click on the filename, or
7. Single click on the filename and choose Open

Reading Data from a Text Data File

To read a raw/text (ASCII) data file from your computer/floppy disk/flash drive, where the data for each observation are on a separate line and a space is used to separate variables on the same line (i.e., the file format is freefield; the filename should end with the suffix .dat):

1. Choose File on the menu bar
2. Choose Read Text Data
3. Choose Files of Type *.dat
4. Edit the directory or disk drive to indicate where the data is located
5. Double click on the filename, or
6. Single click on the filename and choose Open
7. Follow the Import Wizard instructions

You can also get to the Import Wizard as follows:

1. Choose File on the menu bar
2. Choose Open
3. Choose Data
4. Choose Files of Type *.dat
5. Edit the directory or disk drive to indicate where the data is located
6. Double click on the filename, or
7. Single click on the filename and choose Open
8. Follow the Import Wizard instructions

Instructions on how to read a text data file in fixed format are located in the SPSS Help Tutorials under Reading Data from a Text File.
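An SPSS data file can also be opened with syntax. A minimal sketch (the file path here is hypothetical; substitute your own):

```spss
* Open an SPSS data file; edit the path to point at your own file.
GET FILE='C:\mydata\class.sav'.
```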
Reading Data from Other Types of External Files

SPSS allows you to read a variety of other types of external files, such as Excel spreadsheet files, SAS data files, and Stata data files. To read data from other types of external files, you follow the same steps as you would for reading an SPSS save file, except that you specify the file type according to what package was used to create the file. For further instruction on how to read data from other types of external files, see the SPSS for Windows Base System User's Guide on data files or the SPSS Help Tutorials.

Entering and Editing Data Using the Data Editor

The Data Editor provides a convenient spreadsheet-like facility for entering, editing, and displaying the contents of your data file. A Data Editor window opens automatically when you start an SPSS session. Instruction on using the Data Editor to enter data is given in the SPSS Help Tutorials. Note that if you are already familiar with entering data into a different spreadsheet program (e.g., MS Excel), you might find it easier to enter your data in the program you are familiar with and then read the data into SPSS.

Entering Data. Basic data entry in the Data Editor is simple:

Step 1. Create a new (empty) Data Editor window. At the start of an SPSS session a new (empty) Data Editor window opens automatically. During an SPSS session you can create a new Data Editor window as follows:
1. Choose File
2. Choose New
3. Choose Data

Step 2. Move the cursor to the first empty column.

Step 3. Type a value into the cell. As you type, the value appears in the cell editor at the top of the Data Editor window. Each time you press the Enter key, the value is entered in the cell and you move down to the next row. By entering data in a column, you automatically create a variable, and SPSS gives it the default variable name var00001.

Step 4. Choose the first cell in the next column. You can use the mouse to click on the cell or use the arrow keys on the keyboard to move to the cell. By default, SPSS names the data in the second column var00002.

Step 5. Repeat step 4 until you have entered all the data.

If you entered an incorrect value(s) you will need to edit your data. See the following section on Editing Data.
Editing Data. With the Data Editor, you can modify a data file in many ways. For example, you can change values; cut, copy, and paste values; or add and delete cases.

To Change a Data Value:
1. Click on a data cell. The cell value is displayed in the cell editor.
2. Type the new value. It replaces the old value in the cell editor.
3. Press the Enter key. The new value appears in the data cell.

To Cut, Copy, and Paste Data Values:
1. Select (highlight) the cell value(s) you want to cut or copy.
2. Pull down the Edit box on the main menu bar.
3. Choose Cut (the selected cell values will be copied, then deleted) or choose Copy (the selected cell values will be copied, but not deleted).
4. Select the target cell(s) (where you want to put the cut or copied values).
5. Pull down the Edit box on the main menu bar.
6. Choose Paste. The cut or copied values will be "pasted" into the target cells.

To Delete a Case (i.e., a Row of Data):
1. Click on the case number on the left side of the row. The whole row will be highlighted.
2. Pull down the Edit box on the main menu bar.
3. Choose Clear.

To Add a Case (i.e., a Row of Data):
1. Select any cell in the row below where you want to insert the new case.
2. Pull down the Data box on the main menu bar.
3. Choose Insert.

Defining Variables. The default name for new variables is the prefix var and a sequential five-digit number (e.g., var00001, var00002, var00003). To change the name, format, and other attributes of a variable:
1. Double click on the variable name at the top of a column, or click on the Variable View tab at the bottom of the Data Editor window.
2. Edit the variable name under the column labeled Name. The variable name must begin with a letter; in older versions of SPSS it had to be eight characters or less, but in version 19.0 it can be up to 64 characters long.

You can also specify the number of decimal places (under Decimals), assign a descriptive name (under Label), define missing values (under Missing), define the type of variable (under Measure; e.g., scale, ordinal, nominal), and define the values for nominal variables (under Values). After the data are entered (or several times during data entry), you will want to save them as an SPSS save file. See the section on Saving Data as an SPSS Data (.sav) File.
Reading an SPSS Viewer/Output (.spv) File

Statistical results and graphs are displayed in the Viewer window. An overview of how to use the Viewer is given in the SPSS Help Tutorials under Working with Output.

If you saved the results of a Viewer window during an earlier SPSS session, you can use the following commands to display the Viewer (output) results in a current SPSS session. However, SPSS output/viewer files (files ending in .spv) are NOT always compatible between different versions. Usually SPSS output files created with an older version can be read by a newer version, but an output file created using a newer version cannot be read by an older version. One option for avoiding compatibility problems between different versions of SPSS is to export your output in HTML or MS Word format. The compatibility between Windows and Mac versions of SPSS is limited.

To read a Viewer file from your computer/floppy disk/flash drive that was created and saved using SPSS (the filename should end with the suffix .spv):
1. Choose File on the menu bar
2. Choose Open
3. Choose Output
4. Edit the directory or disk drive to indicate where the file is located
5. Double click on the filename, or
6. Single click on the filename and choose Open

Editing Your Output

Editing the statistical results and graphs in the Viewer window is beyond the scope of this handout. Instructions on how to edit your output are given in the SPSS Help Tutorials under Working with Output and Creating and Editing Charts.

You can use either the tree structure in the left-hand pane or the results displayed in the right-hand pane to select, move, or delete parts of the output. To edit a table or object (an object is a group of results) you first need to double click on the table/object so an editing box appears around it, and then select the value you want to modify. An editing box will be a ragged box outlining the table. If you only do a single click you will get a box with straight/plain lines outlining the table.

In general, to create nice-looking tables of your results it is often easier to hand-enter the values into a blank MS Word table than to edit an SPSS table/object (either in SPSS or MS Word).

To edit a chart you first need to double click on the chart so it appears in a new Chart Editor window. After you are done editing the chart, close the window and then export the chart, for example to a Windows metafile, and then insert it into an MS Word file.

By default in SPSS a P-value is displayed as .000 if the P-value is less than .001. You can report the P-value as <.001 or have SPSS display more significant digits:
1. In an SPSS (output) Viewer window, double click (with the left mouse button) on the table containing the p-value you want to display differently. An "editing box" should appear around the table.
2. Click on the p-value using the right mouse button.
3. Choose Cell Properties. (If you do not get this option, you need to double click on the table to get the ragged box.)
4. Change the number of decimals to the desired number (the default is 3).
5. Choose OK.

Or double click on the p-value with the left mouse button and SPSS will display the p-value with more significant digits. If the p-value is very small, it will be displayed in scientific notation (e.g., 1.745E-10).

Saving Data as an SPSS Data (.sav) File

To save data as a new SPSS data file onto your computer/floppy disk/flash drive:
1. Display the Data Editor window (i.e., execute the following commands while in the Data Editor window displaying the data you want to save).
2. Choose File on the menu bar.
3. Choose Save As.
4. Edit the directory or disk drive to indicate where the data should be saved. SPSS will automatically add the .sav suffix to the filename.
5. Choose Save.

To save data changes in an existing SPSS data file:
1. Display the Data Editor window (i.e., execute the following commands while in the Data Editor window displaying the data you want to save).
2. Choose File on the menu bar.
3. Choose Save.

Caution: The Save command saves the modified data by overwriting the previous version of the file.

You can save your data in other formats besides an SPSS save file (e.g., as an ASCII file, Excel file, or SAS data set). To save your data in a given format you follow the same steps as saving data in a new SPSS save file, except that you specify the Save as Type as the desired format.
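Saving can also be done with syntax. A minimal sketch (the file path is hypothetical):

```spss
* Save the working data file; edit the path as needed.
SAVE OUTFILE='C:\mydata\class.sav'.
```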
Saving Your Output (Statistical Results and Graphs)

To save the statistical results and graphs displayed in the Viewer window as a new SPSS output file:
1. Display the Viewer window (i.e., execute the following commands while in the Viewer window displaying the results you want to save).
2. Choose File on the menu bar.
3. Choose Save As.
4. Edit the directory or disk drive to indicate where the output should be saved. SPSS will automatically add the .spv suffix to the filename.
5. Choose Save.

To save Viewer changes in an existing SPSS output file:
1. Display the Viewer window (i.e., execute the following commands while in the Viewer window displaying the results you want to save).
2. Choose File on the menu bar.
3. Choose Save.

Caution: The Save command saves the modified Viewer window by overwriting the previous version of the file.

NOTE that you may not be able to open SPSS output that was created with a different version of SPSS than the one you are using. You can avoid this incompatibility problem by exporting your output in an HTML or MS Word format (see the next section).
Exporting SPSS Output

Sometimes you will want to save your SPSS output in a different file format than an SPSS output file: because you want to avoid compatibility problems between different versions of SPSS, because you want to further edit your output in a Word document, or because you want to include graphs or figures in another document file. The basic steps in exporting SPSS output to another file type are, while in an SPSS (output) Viewer window:
1. Choose File.
2. Choose Export.
3. Objects to Export: Choose what you want to export.
   All: Exports all the output and other information not shown in the output. You usually do not want to use this option.
   All visible: Exports all visible output.
   Selected: Exports only output that is selected or highlighted in the Viewer window.
4. Document Type: Choose the type of file or format you want to use to save your results. Word/RTF (*.doc) is a good option; numerical and graphical output will be saved in the same file. With the HTML option, numerical output will be saved in one file and each graph will be saved in a separate file.
5. Document File Name: Enter the file name and location.
6. Choose OK (or Paste).
Printing Your Work in SPSS

To print statistical results and graphs in the Viewer window or data in the Data Editor window:
1. Display the output or data you want to print (i.e., execute the following commands while in a Viewer/output or Data window).
2. Choose File on the menu bar.
3. Choose Print.
4. Choose All visible output or Selected output (if you have selected parts of the output).
5. Choose OK.

NOTE: there is no printing capability at the Seattle Downtown Campus Classroom Location.

Exiting SPSS

To exit SPSS:
1. Choose File on the menu bar
2. Choose Exit

If you have made changes to the data file or the output file since the last time you saved these files, before exiting SPSS you will be asked whether you want to save the contents of the Data Editor window and Viewer window. If you are unsure whether you want to save the contents of the data or output window, choose Cancel, then display the window(s); if you want to save the contents of a window, follow the instructions in this handout for saving data or output windows. SPSS will use the overwrite method when saving the contents of the window.
Running SPSS using Syntax (or Command Language)

This handout describes how to run various statistical summaries and procedures using the point-and-click menus in SPSS. However, it is possible to run SPSS commands using the SPSS syntax/command language. If you are running similar analyses repeatedly, it can be more efficient to run your analysis using SPSS syntax. How to run SPSS using the syntax/command language is beyond the scope of this handout; help can be found in the SPSS Tutorials under Working with Syntax.

To get you started using SPSS syntax, follow the point-and-click instructions for running a particular analysis, but select Paste instead of OK at the last step. A Syntax Editor window will open containing the SPSS syntax for running the analysis. To run the analysis you can choose Run on the menu bar, or you can highlight the syntax you want to run, click the right mouse button, and select Run Selection. You can add more syntax to the Syntax Editor window by using the point-and-click method and selecting Paste instead of OK at the last step; the additional syntax will be added at the bottom of the Syntax Editor window. You can also write syntax directly into the syntax file and/or use copy, paste, and editing commands to modify the syntax.

Remember to save your syntax file before exiting SPSS. The filename should end in .sps. You can open a syntax file by selecting File on the menu bar, then Open, then Syntax.

Here's an example of SPSS syntax. The first part runs a two-sample t test comparing HDL cholesterol (hdl) for subjects without and with CHD (incchd, coded 0 for no and 1 for yes). The second part creates 3 indicator variables, neversmoker, formersmoker, and currentsmoker, for smoking status (smoke). Note that a period (.) is used to denote the end of a string of syntax and Execute. is sometimes required to run the syntax. Comments can be added between the symbols /* and */ or after * to help you remember what the syntax is doing.
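The original handout showed this example syntax as a screenshot, which is not reproduced here. The following is a sketch of what it likely looked like, using the variable names given in the text:

```spss
* Two-sample t test comparing HDL cholesterol (hdl) by incident CHD (incchd).
T-TEST GROUPS=incchd(0 1)
  /VARIABLES=hdl.

/* Create three indicator (dummy) variables for smoking status. */
RECODE smoke (1=1) (ELSE=0) INTO neversmoker.
RECODE smoke (2=1) (ELSE=0) INTO formersmoker.
RECODE smoke (3=1) (ELSE=0) INTO currentsmoker.
EXECUTE.
```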
Displaying Variable Names or Variable Labels

When running SPSS via the menus, the dialog boxes will display either the variable labels or the variable names. When variable labels are displayed, the variable name is also (always) displayed in parentheses after the variable label.

To select whether variable labels or variable names are displayed:
1. Choose Edit
2. Choose Options
3. Choose General
4. Select Display labels or Display names
Creating and Recoding Variables

Creating a New Variable

To create a new variable:
1. Display the Data Editor window (i.e., execute the following commands while in the Data Editor window displaying the data file you want to use to create a new variable).
2. Choose Transform on the menu bar.
3. Choose Compute Variable.
4. Enter the new variable name in the Target Variable box.
5. Enter the definition of the new variable in the Numeric Expression box (e.g., SQRT(visan), LN(age), or MEAN(age)), or select variable(s) and combine them with the desired arithmetic operations and/or functions.
6. Choose OK.

After creating a new variable(s), you will probably want to save it by re-saving your data using the Save command under File on the menu bar (see Saving Data as an SPSS Data (.sav) File). Further instructions on creating a new variable are given in the SPSS Help Tutorials under Modifying Data Values.

Example: Creating a (New) Transformed Variable

You can use the SPSS commands for creating a new variable to create a transformed variable. Suppose you have a variable indicating triglyceride level, trig, and you want to transform this variable using the natural logarithm to make the distribution less skewed (i.e., you want to create a new variable which is the natural logarithm of triglyceride level).
1. Display the Data Editor window.
2. Choose Transform on the menu bar.
3. Choose Compute Variable.
4. Enter, say, lntrig in the Target Variable box.
5. Enter LN(trig) in the Numeric Expression box.
6. Choose OK.

Now a new variable, lntrig, which is the natural logarithm of trig, will be added to your data set. Remember to save your data set before exiting SPSS (e.g., while in the SPSS Data window, choose Save under File or click on the floppy disk icon).
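If you select Paste instead of OK at the last step, the generated syntax for this example should look roughly like this sketch:

```spss
* Create lntrig as the natural logarithm of triglyceride level.
COMPUTE lntrig = LN(trig).
EXECUTE.
```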
Recoding or Combining Categories of a Variable

To recode or combine categories of a variable:
1. Display the Data Editor window (i.e., execute the following commands while in the Data Editor window displaying the data file you want to use to recode variables).
2. Choose Transform on the menu bar.
3. Choose Recode.
4. Choose Into Same Variables... or Into Different Variables...
5. Select a variable to recode from the variable list on the left and then click on the arrow located in the middle of the window. This defines the input variable.
6. If recoding into a different variable, enter the new variable name in the box under Name:, then choose Change. This defines the output variable.
7. Choose Old and New Values.
8. Choose Value or Range under Old Value and enter the old value(s).
9. Choose New Value and enter the new value, then choose Add.
10. Repeat the process until all old values have been redefined.
11. Choose Continue.
12. Choose OK.

After creating a new variable(s), you will probably want to save it by re-saving your data using the Save command under File on the menu bar (see Saving Data as an SPSS Data (.sav) File).

Example: Recoding a Categorical Variable

You can use the commands for recoding a variable to change the coding values of a categorical variable. You may want to change a coding value for a particular category to modify which category SPSS uses as the referent category in a statistical procedure. For example, suppose you want to perform linear regression using the ANOVA (or General Linear Model) commands, and one of your independent variables is smoking status, smoke, coded 1 for never smoked, 2 for former smoker, and 3 for current smoker. By default SPSS will use current smoker as the referent category because current smoker has the largest numerical (code) value. If you want never smoked to be the referent category, you need to recode the value for never smoked to a value larger than 3.

Although you can recode smoking status into the same variable, it is better to recode the variable into a new/different variable, newsmoke, so you do not lose your original data if you make an error while recoding.
1. Display the Data Editor window.
2. Choose Transform.
3. Choose Recode.
4. Choose Into Different Variables...
5. Select the variable smoke as the Input variable.
6. Enter newsmoke as the name of the Output variable, and then choose Change.
7. Choose Old and New Values.
8. Choose Value under Old Value. (It may already be selected.)
9. Enter 1 (the code for never smoker).
10. Choose Value under New Value. (It may already be selected.)
11. Enter 4 (or any value greater than 3).
12. Choose Add.
13. Choose All Other Values under Old Value.
14. Choose Copy Old Value(s) under New Value.
15. Choose Add.
16. Choose Continue.
17. Choose OK.

Remember to save your data set before exiting SPSS.
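The pasted syntax for this recode should look roughly like this sketch:

```spss
* Recode never smoked (1) to 4 so it becomes the highest-coded
* (referent) category; all other values are copied unchanged.
RECODE smoke (1=4) (ELSE=COPY) INTO newsmoke.
EXECUTE.
```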
Example: Creating Indicator or Dummy Variables

You can use the commands for recoding a variable to create indicator or dummy variables in SPSS. Suppose you have a variable indicating smoking status, smoke, coded 1 for never smoked, 2 for former smoker, and 3 for current smoker. To create three new indicator or dummy variables for never, former, and current smoking:
1. Display the Data Editor window.
2. Choose Transform.
3. Choose Recode.
4. Choose Into Different Variables...
5. Select the variable smoke as the Input variable.
6. Enter neversmoke as the name of the Output variable, and then choose Change.
7. Choose Old and New Values.
8. Choose Value under Old Value. (It may already be selected.)
9. Enter 1 (the code value for never smoker).
10. Choose Value under New Value. (It may already be selected.)
11. Enter 1 (to indicate never smoker).
12. Choose Add.
13. Choose All Other Values under Old Value.
14. Choose Value under New Value.
15. Enter 0.
16. Choose Add.
17. Choose Continue.
18. Choose OK.

Now you have created a binary indicator variable for never smoker (coded 1 if never smoker, 0 if former or current smoker). Next, create a binary indicator variable for former smoker.
1. Display the Data Editor window.
2. Choose Transform.
3. Choose Recode.
4. Choose Into Different Variables...
5. Select the variable smoke as the Input variable.
6. Enter formersmoke as the name of the Output variable, and then choose Change. (Or change (edit) never to former, and then choose Change.)
7. Choose Old and New Values.
8. Choose 1 --> 1 under Old --> New and then choose Remove.
9. Choose Value under Old Value.
10. Enter 2 (the code value for former smoker).
11. Choose Value under New Value.
12. Enter 1 (to indicate former smoker).
13. Choose Add.
14. Choose Continue.
15. Choose OK.

Now you have created a binary indicator variable for former smoker (coded 1 if former smoker, 0 if never or current smoker). To create a binary indicator variable for current smoker you would use similar commands to those for creating the indicator variable for former smoker, except that now the value of 3 for smoke is coded as 1 and all other values are coded as 0.
Example: Creating a Categorical Variable From a Numerical Variable

You can use the commands for recoding a variable to create a categorical variable from a numerical variable (i.e., group values of the numerical variable into categories). For example, suppose you have a variable that is the number of pack years smoked, packyrs, and you want to create a categorical variable with the four categories 0, >0 to 10, >10 to 30, and >30 pack years smoked.
1. Display the Data Editor window.
2. Choose Transform.
3. Choose Recode.
4. Choose Into Different Variables...
5. Select the variable packyrs as the Input variable.
6. Enter a name for the new variable, packcat, for the Output variable, and then choose Change.
7. Choose Old and New Values.
8. Choose Value under Old Value. (It may already be selected.)
9. Enter 0.
10. Choose Value under New Value.
11. Enter 0 (to indicate 0 pack years).
12. Choose Add.
13. Choose Range under Old Value.
14. Enter 0.01 and 10 in the two blank boxes.
15. Choose Value under New Value.
16. Enter 1 (to indicate >0 to 10 pack years).
17. Choose Add.
18. Choose Range under Old Value.
19. Enter 10.01 and 30 in the two blank boxes.
20. Choose Value under New Value.
21. Enter 2 (to indicate >10 to 30 pack years).
22. Choose Add.
23. Choose Range, value through HIGHEST under Old Value.
24. Enter 30.01 in the blank box.
25. Choose Value under New Value.
26. Enter 3 (to indicate >30 pack years).
27. Choose Add.
28. Choose Continue.
29. Choose OK.

Note that you may want to use different coding values depending on which category you want to be used as the referent category in certain statistical procedures. Remember to save your data set before exiting SPSS.
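As a syntax sketch, the same grouping can be written with RECODE ranges (cutpoints just above each boundary, matching the dialog entries):

```spss
* Group pack years smoked into four categories.
RECODE packyrs (0=0) (0.01 THRU 10=1) (10.01 THRU 30=2)
  (30.01 THRU HIGHEST=3) INTO packcat.
EXECUTE.
```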
Summarizing Your Data

Frequency Tables (& Bar Charts) for Categorical Variables. To produce frequency tables and bar charts for categorical variables:

1. Choose Analyze from the menu bar
2. Choose Descriptive Statistics
3. Choose Frequencies...
4. Variable(s): To select the variables you want from the source list on the left, highlight a variable by pointing and clicking the mouse and then click on the arrow located in the middle of the window. Repeat the process until you have selected all the variables you want.
5. Choose Charts... (Skip to step 8 if you do not want bar charts.)
6. Choose Bar chart(s)
7. Choose Continue
8. Choose OK

Example: Frequency table and bar chart for the categorical variable, smoking status (smoke). Smoking status is the selected variable(s) and Bar charts under Charts has been selected.

Frequency table and bar chart of smoking status
[Table: Smoking status — Frequency, Percent, Valid Percent, and Cumulative Percent for never, former, current, and Total]
[Bar chart: Percent by Smoking status (never, former, current)]
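The equivalent syntax for the frequency table and bar chart is a one-line command (variable name from the example):

```
* Frequency table plus a bar chart for a categorical variable.
FREQUENCIES VARIABLES=smoke
  /BARCHART.
```

You can list several variables after VARIABLES= to get a table (and chart) for each.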
Contingency Tables for Categorical Variables. To produce contingency tables for categorical variables:

1. Choose Analyze from the menu bar.
2. Choose Descriptive Statistics
3. Choose Crosstabs...
4. Row(s): Select the row variable you want from the source list on the left and then click on the arrow located next to the Row(s) box. Repeat the process until you have selected all the row variables you want.
5. Column(s): Select the column variable you want from the source list on the left and then click on the arrow located next to the Column(s) box. Repeat the process until you have selected all the column variables you want.
6. Choose Cells...
7. Choose the cell values (e.g., observed counts; row, column, and margin (total) percentages). Note the option is selected when the little box is not empty.
8. Choose Continue
9. Choose OK

Example: Contingency table of smoking status by coronary heart disease (CHD). Smoking status is the row variable and CHD is the column variable. Observed counts and row percentages will be displayed.

Smoking status * Incident CHD Crosstabulation (% within Smoking status):
  never:   91.0% no,  9.0% yes, 100.0% total
  former:  87.7% no, 12.3% yes, 100.0% total
  current: 90.6% no,  9.4% yes, 100.0% total
  Total:   90.0% no, 10.0% yes, 100.0% total
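In syntax the same table is produced by CROSSTABS; a sketch, assuming the CHD variable is named chd in your data set:

```
* Contingency table with observed counts and row percentages.
CROSSTABS
  /TABLES=smoke BY chd
  /CELLS=COUNT ROW.
```

Add COLUMN or TOTAL to the /CELLS subcommand for column or margin percentages.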
Descriptive Statistics (& Histograms) for Numerical Variables. To produce descriptive statistics and histograms for numerical variables:

1. Choose Analyze on the menu bar
2. Choose Descriptive Statistics
3. Choose Frequencies...
4. Variable(s): To select the variables you want from the source list on the left, highlight a variable by pointing and clicking the mouse and then click on the arrow located in the middle of the window. Repeat the process until you have selected all the variables you want.
5. Choose Display frequency tables to turn off the option. Note that the option is turned off when the little box is empty.
6. Choose Statistics...
7. Choose summary measures (e.g., mean, median, standard deviation, minimum, maximum, skewness or kurtosis).
8. Choose Continue
9. Choose Charts... (Skip to step 12 if you do not want histograms.)
10. Choose Histogram(s)
11. Choose Continue
12. Choose OK

An alternate way to produce only the descriptive statistics is, at step 3, to choose Descriptives... instead of Frequencies..., then select the variables you want. By default SPSS computes the mean, standard deviation, minimum and maximum. Choose Options... to select other summary measures.

Example: Descriptive summaries and histogram for the numerical variable age. Age is the variable to summarize. You can select more than one variable to analyze. Remember to turn off the Display frequency tables option.
Mean, standard deviation, minimum and maximum were selected under Statistics, and histogram was selected under Charts.

Summaries for Age
[Statistics table: Age — N Valid 1000, Missing 0; Mean 72.14; Std. Deviation 5.275; Minimum 65; Maximum 90]

Histogram of Age
[Histogram: Frequency by Age; Mean = 72.14, Std. Dev. = 5.275, N = 1,000]
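The example above corresponds to the following syntax (the /FORMAT=NOTABLE subcommand suppresses the frequency table, like unchecking Display frequency tables):

```
* Descriptive statistics and a histogram, no frequency table.
FREQUENCIES VARIABLES=age
  /FORMAT=NOTABLE
  /STATISTICS=MEAN STDDEV MINIMUM MAXIMUM
  /HISTOGRAM.
```

The Descriptives dialog corresponds to the DESCRIPTIVES command, e.g. DESCRIPTIVES VARIABLES=age.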
Descriptive Statistics (& Boxplots) by Groups for Numerical Variables. To produce descriptive statistics and boxplots by groups for numerical variables:

1. Choose Analyze on the menu bar
2. Choose Descriptive Statistics
3. Choose Explore...
4. Dependent List: To select the variables you want to summarize from the source list on the left, highlight a variable by pointing and clicking the mouse and then click on the arrow located next to the Dependent List box. Repeat the process until you have selected all the variables you want.
5. Factor List: To select the variables you want to use to define the groups from the source list on the left, highlight a variable by pointing and clicking the mouse and then click on the arrow located next to the Factor List box.
6. Choose Plots... (If you do not want boxplots, choose Statistics for the Display option and skip to step 11.)
7. Choose Factor levels together from the Boxplot box.
8. Select the Stem-and-leaf option from the Descriptive box to turn off the option.
9. Choose Continue
10. Choose Both for the Display option
11. Choose OK

Example: Total cholesterol by family history of heart attack (yes or no). In this example total cholesterol is the dependent variable. You can select more than one variable. Summaries will be computed for each group defined by family history of heart attack. Both numerical summaries (statistics) and plots are selected. Under Statistics, Descriptives is usually selected by default. Under Plots select the Boxplot option and unselect Stem-and-leaf. Select Percentiles if you want the 25th and 75th percentiles reported with the median.
[Descriptives table: Total cholesterol by Family history of heart attack (no/yes) — for each group: Mean, 95% Confidence Interval for Mean (Lower Bound, Upper Bound), 5% Trimmed Mean, Median, Variance, Std. Deviation, Minimum 111, Maximum 363, Range 252, Interquartile Range 49, Skewness, Kurtosis, with Statistic and Std. Error columns]

The Explore command by default produces a lot of different summaries, so you need to select what to report. All summaries are shown for all groups; the table has been cropped in this example.

The interquartile range is reported as the difference between the 75th and 25th percentiles. Request percentiles (see prior page) to get the 25th and 75th percentiles.

Boxplot of Total Cholesterol by Family History of Heart Attack
[Boxplot: Total cholesterol by family history of heart attack (no, yes)]
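The Explore dialog corresponds to the EXAMINE command; a sketch, assuming the total cholesterol and family history variables are named totchol and famhist:

```
* Descriptive statistics and side-by-side boxplots by group.
EXAMINE VARIABLES=totchol BY famhist
  /PLOT=BOXPLOT
  /STATISTICS=DESCRIPTIVES.
```

Add /PERCENTILES to get the 25th and 75th percentiles along with the median.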
Using the Split File Option for Summaries by Groups for Categorical and Numerical Variables. The Split File option in SPSS is a convenient way to produce summaries, graphs, and run statistical procedures by groups. To activate the option:

1. Choose Data on the menu bar of the Data Editor window
2. Choose Split File...
3. Choose Compare groups or Organize output by groups. The two options display the output differently. Try each option to see which works best for your needs.
4. Choose the variable that defines the groups.
5. Choose OK

Now all the summaries, graphs, and statistical procedures you request will be done (automatically) for each group. To turn off this option:

1. Choose Data on the menu bar of the Data Editor window
2. Choose Split File...
3. Choose Analyze all cases, do not create groups
4. Choose OK

Example: Use the Split File option to run summaries by family history of heart attack (yes or no). The Compare groups option will try to display the results for each group side by side when feasible. The Organize output by groups option will display the results separately for each group, starting with the group with the lowest numerical code value.
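In syntax, Split File is the SPLIT FILE command (the file must be sorted by the grouping variable first); a sketch with the assumed variable name famhist:

```
* Sort by the grouping variable, then split.
SORT CASES BY famhist.
* LAYERED corresponds to "Compare groups";
* use SEPARATE BY for "Organize output by groups".
SPLIT FILE LAYERED BY famhist.
* ... run summaries, graphs, and tests here ...
* Turn the option off again.
SPLIT FILE OFF.
```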
Using the Select Cases Option for Summaries for a Subgroup of Subjects/Observations. The Select Cases option in SPSS is a convenient way to produce summaries and run statistical procedures for a subgroup of subjects, or to temporarily exclude subjects from the analysis. To activate this option:

1. Choose Data on the menu bar of the Data Editor window
2. Choose Select Cases...
3. Choose If condition is satisfied
4. Choose If...
5. Enter the expression that indicates the subjects/observations you want to select.
6. Choose Continue
7. Choose OK

Now all the summaries, graphs, and statistical procedures you request will be done using only the selected subjects/observations. To turn off this option:

1. Choose Data on the menu bar of the Data Editor window
2. Choose Select Cases...
3. Choose All cases
4. Choose OK

Example: Select subjects not on lipid-lowering medications (i.e., subjects with lipid = 0 indicating no medications). Select If condition is satisfied and then If...

Caution! Usually you do not want to delete observations from your dataset, so do not select the option that deletes unselected cases.

Typical expressions will involve combinations of the following symbols:

Symbol   Definition
=        equal
~=       not equal
>=       greater than or equal
<=       less than or equal
>        greater than
<        less than
&        and
|        or
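The dialog pastes syntax like the following, which filters (rather than deletes) the unselected cases; variable lipid is from the example, and filter_$ is the name SPSS generates:

```
* Select subjects not on lipid-lowering medications (lipid = 0).
USE ALL.
COMPUTE filter_$ = (lipid = 0).
FILTER BY filter_$.
EXECUTE.
* ... analyses run here use only the selected cases ...
* Turn the filter off (All cases).
FILTER OFF.
USE ALL.
EXECUTE.
```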
Graphing Your Data

You can produce very fancy figures and graphs in SPSS. Producing fancy figures and graphs is beyond the scope of this handout. Instructions on producing figures and graphs can be found in SPSS Help under Topics, Contents, Building Charts and Editing Charts, as well as in the SPSS Tutorials under Creating and Editing Charts. Note that for both the Help and the Tutorials you need to have Internet access. Also, the last time I tried, the tutorial didn't work. This handout covers the basic commands for creating simple graphs using the Legacy Dialogs under Graphs, rather than the newer methods using the Chart Builder.

Bar Charts
The easiest way to produce simple bar charts is to use the Bar Chart option with the Frequencies... command. See Frequency Tables (& Bar Charts) for Categorical Variables. You can produce only one bar chart at a time using the Bar command.

1. Choose Graphs and then Legacy Dialogs from the menu bar.
2. Choose Bar...
3. Choose Simple, Clustered, or Stacked
4. Choose what the data in the bar chart represent (e.g., summaries for groups of cases).
5. Choose Define
6. Select a variable from the variable list on the left and then click on the arrow next to the Category Axis box.
7. Choose what the bars represent (e.g., number of cases or percentage of cases)
8. Choose OK

[Bar charts: Percent by Smoking status (never, former, current) — a simple bar chart and a clustered bar chart by family history of heart attack (no, yes)]
Histograms
The easiest way to produce simple histograms is to use the Histogram option with the Frequencies... command. See Descriptive Statistics (& Histograms) for Numerical Variables. You can produce only one histogram at a time using the Histogram command.

1. Choose Graphs and then Legacy Dialogs from the menu bar
2. Choose Histogram...
3. Select a variable from the variable list on the left and then click on the arrow in the middle of the window.
4. Choose Display normal curve if you want a normal curve superimposed on the histogram.
5. Choose OK

[Histogram: Frequency by Body mass index; N = 1,000]

Boxplots
The easiest way to produce simple boxplots is to use the Boxplot option with the Explore... command. See Descriptive Statistics (& Boxplots) by Groups for Numerical Variables. You can produce only one boxplot at a time using the Boxplot command.

1. Choose Graphs and then Legacy Dialogs from the menu bar.
2. Choose Boxplot...
3. Choose Simple or Clustered
4. Choose what the data in the boxplots represent (e.g., summaries for groups of cases).
5. Choose Define
6. Select a variable from the variable list on the left and then click on the arrow next to the Variable box.
7. Select the variable from the variable list that defines the groups and then click on the arrow next to Category Axis.
8. Choose OK

[Boxplot: Serum fasting glucose by ADA diabetes status (normal, impaired fasting glucose, diabetic)]
Normal Probability Plots. To produce Normal probability plots:

1. Choose Analyze from the menu bar
2. Choose Descriptive Statistics
3. Choose Q-Q Plots... to get a plot of the quantiles (Q-Q plot) or choose P-P Plots... to get a plot of the cumulative proportions (P-P plot)
4. Select the variables from the source list on the left and then click on the arrow located in the middle of the window.
5. Choose Normal as the Test Distribution. The Normal distribution is the default Test Distribution. Other Test Distributions can be selected by clicking on the down arrow and clicking on the desired Test Distribution.
6. Choose OK

SPSS will produce both a Normal probability plot and a detrended Normal probability plot for each selected variable. Usually the Q-Q plot is the most useful for assessing if the distribution of the variable is approximately Normal.

[Normal Q-Q plots of Serum fasting glucose and of Body mass index: Expected Normal Value vs. Observed Value]
Error Bar Plot. To produce an error bar plot of the mean of a numerical variable (or the means for different groups of subjects):

1. Choose Graphs and then Legacy Dialogs from the menu bar.
2. Choose Error Bar...
3. Choose Simple or Clustered
4. Choose what the data in the error bars represent (e.g., summaries for groups of cases).
5. Choose Define
6. Select a variable from the variable list on the left and then click on the arrow next to the Variable box.
7. Select the variable from the variable list that defines the groups and then click on the arrow next to Category Axis.
8. Select what the bars represent (e.g., confidence interval, ±standard deviation, ±standard error of the mean)
9. Choose OK

[Error bar plot: Mean ± 2 SD of Serum fasting glucose by ADA diabetes status (normal, impaired fasting glucose, diabetic)]

A bar chart of the mean with error bars can be made using the commands for making a bar chart:

1. Choose Graphs and then Legacy Dialogs from the menu bar.
2. Choose Bar...
3. Choose Simple
4. Choose Summaries for groups of cases
5. Choose Define
6. Select a variable from the variable list on the left and then click on the arrow next to the Category Axis box (e.g., diabetes status)
7. Choose Other statistic (e.g., mean). By default the mean will be selected.
8. Choose the variable for which you want to display the mean (or other statistic).
9. Choose Options...
10. Select Display error bars
11. Select Standard deviation, and enter 2 for the Multiplier
12. Choose Continue
13. Choose OK

[Bar chart: Mean Serum fasting glucose by ADA diabetes status; error bars: ±2 SD]
Scatter Plot. To produce a scatter plot between two numerical variables:

1. Choose Graphs and then Legacy Dialogs on the menu bar.
2. Choose Scatter/Dot...
3. Choose Simple
4. Choose Define
5. Y Axis: Select the y variable you want from the source list on the left and then click on the arrow next to the Y Axis box.
6. X Axis: Select the x variable you want from the source list on the left and then click on the arrow next to the X Axis box.
7. Choose Titles...
8. Enter a title for the plot (e.g., y vs. x).
9. Choose Continue
10. Choose OK

[Scatter plot: HDL cholesterol vs. Body mass index]

Adding a linear regression line to a scatter plot. To add a linear regression (least-squares) line to a scatter plot of two numerical variables:

1. While in the Viewer window double click on the scatter plot. The scatter plot should now be displayed in a window titled Chart Editor.
2. Choose Elements.
3. Choose Fit Line at Total. (A line should be added to the plot, because the next two steps are the default options.)
4. Choose Linear (in the Properties window)
5. Choose Apply
6. Choose Close

Additional options:
o Choose Mean under Confidence Intervals (in the Properties window) to add a confidence interval for the linear regression line to the scatter plot, or
o Choose Individual under Confidence Intervals to add a prediction interval for individual observations to the scatter plot.

7. Click on the "X" in the upper right hand corner of the Chart Editor window, or choose File and then Close, to return to the Viewer window.

[Scatter plot: HDL cholesterol vs. Body mass index with fitted regression line and R Sq Linear annotation]
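The scatter plot itself can be produced with the GRAPH command; a sketch, assuming the variables are named bmi and hdl:

```
* Simple scatter plot with a title.
GRAPH
  /SCATTERPLOT(BIVAR)=bmi WITH hdl
  /TITLE='HDL cholesterol vs BMI'.
```

The fit line and confidence bands are added afterwards in the Chart Editor, as described above, rather than in the GRAPH command.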
Adding a Loess (scatter plot) smooth to a scatter plot. To add a Loess smooth to a scatter plot of two numerical variables:

1. While in the Viewer window double click on the scatter plot. The scatter plot should now be displayed in a window titled Chart Editor.
2. Choose Elements.
3. Choose Fit Line at Total. (The next two steps may already be selected.)
4. Choose Loess (in the Properties window). The default options for % of points to fit (50%) and kernel (Epanechnikov) are usually appropriate.
5. Choose Apply (in the Properties window).
6. Choose Close
7. Click on the "X" in the upper right hand corner of the Chart Editor window, or choose File and then Close, to return to the Viewer.

[Scatter plot: HDL cholesterol vs. Body mass index with Loess smooth]

Stem-and-Leaf Plot. To produce a stem-and-leaf plot:

1. Choose Analyze on the menu bar
2. Choose Descriptive Statistics
3. Choose Explore...
4. Dependent List: To select the variables you want from the source list on the left, highlight a variable by pointing and clicking the mouse and then click on the arrow located next to the Dependent List box. Repeat the process until you have selected all the variables you want.
5. Choose Plots...
6. Choose Stem-and-leaf from the Descriptive box. Note the option may already be selected if the little box is not empty.
7. Choose None from the Boxplot box
8. Choose Continue
9. Choose Plots for the Display option
10. Choose OK

[Severity of Illness Index Stem-and-Leaf Plot: Frequency, Stem & Leaf columns, with Extremes (>=62); stem width and cases per leaf noted below the plot]
Hypothesis Tests & Confidence Intervals

One-Sample t Test
1. Choose Analyze from the menu bar.
2. Choose Compare Means
3. Choose One-Sample T Test...
4. Test Variable(s): Select the variable you want from the source list on the left, highlight variables by pointing and clicking the mouse and then click on the arrow located in the middle of the window.
5. Edit the Test Value. The Test Value is the value of the mean under the null hypothesis. The default value is zero.
6. Choose OK

Confidence Interval for a Mean (from one sample of data)
1. Choose Analyze from the menu bar.
2. Choose Compare Means
3. Choose One-Sample T Test...
4. Test Variable(s): Select the variable you want from the source list on the left, highlight variables by pointing and clicking the mouse and then click on the arrow located in the middle of the window.
5. The Test Value should be 0, which is the default value.
6. By default a 95% confidence interval will be computed. Choose Options... to change the confidence level.
7. Choose OK

SIDS Example. There were 48 SIDS cases in King County, Washington, during the years 1974 and 1975. The birth weights (in grams) of these 48 cases were:

[List of the 48 birth weights]

The mean (and standard deviation) of these measurements is 2891 (623) grams. We want to know if the mean birth weight in the population of SIDS infants is different from that of normal children, 3300 grams. We could construct a 95% confidence interval, to see if the interval contains the value of 3300 grams, or we could perform a one-sample t test to test if the mean in the SIDS population is equal to 3300 (versus not equal to 3300).
38 35 To construct a 95% confidence interval When computing the interval for a mean make sure the Test Value is 0. One-Sample Statistics N Mean Std. Deviation Std. Error Mean birth weight Number of subjects, mean, standard deviation, and standard error of the mean. One-Sample Test Test Value = 0 95% Confidence Interval of the Difference t df Sig. (2-tailed) Mean Difference Lower Upper birth weight Ignore the t test results (t, df, sig.) because these results are for testing if the mean birth weight is equal to 0 (versus not equal to zero). 95% confidence interval for the mean birth weight is 2710 to 3072 grams
To perform a one-sample t test to test if the mean in the SIDS population is equal to 3300 versus not equal to 3300:

To run the one-sample t test of whether the mean birth weight is equal to 3300, you need to change the Test Value from the default value of 0 to 3300.

[One-Sample Statistics table: N, Mean, Std. Deviation, and Std. Error Mean for birth weight]

One-Sample Test (Test Value = 3300):
Sig. (2-tailed) = two-tailed p-value = <.001
t = test statistic value
df = degrees of freedom = 47

Ignore the results for the 95% confidence interval of the difference, because it is the confidence interval for the mean minus 3300.
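Both runs of the dialog correspond to the T-TEST command; a sketch, assuming the birth weight variable is named bw:

```
* 95% CI for the mean: leave TESTVAL at 0 (ignore the test itself).
T-TEST /TESTVAL=0 /VARIABLES=bw /CRITERIA=CI(.95).
* Test H0: mean birth weight = 3300 grams.
T-TEST /TESTVAL=3300 /VARIABLES=bw /CRITERIA=CI(.95).
```

Change CI(.95) to another level (e.g., CI(.90)) to match the Options... setting in the dialog.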
Paired t Test
1. Choose Analyze from the menu bar.
2. Choose Compare Means
3. Choose Paired-Samples T Test...
4. Paired Variable(s): Select the two paired variables you want from the source list on the left, and then click on the arrow in the middle of the window. The order in which you select the two variables will determine how the difference is computed. Repeat the process until you have selected all the paired variables you want to test.
5. Choose OK

Confidence Interval for the Difference Between Means from Paired Samples
By default a 95% confidence interval for the difference between the means of the paired samples will be computed when performing a paired t test. Choose Options... to change the confidence level.

Prozac Example. To compare the effect of Prozac on anxiety, 10 subjects are given one week of treatment with Prozac and one week of treatment with a placebo. The order of the treatments was randomized for each subject. An anxiety questionnaire was used to measure a subject's anxiety on a scale of 0 to 30. Higher scores indicate more anxiety.

[Table of placebo and Prozac scores and their differences for the 10 subjects]
Mean difference, d-bar = 1.3
Standard deviation of the differences, s_d = 4.5
Paired t test and confidence interval for the difference between paired means.

The order of the variables in calculating the difference is determined by the order in which you selected the variables. The difference will be computed as Variable 1 − Variable 2.

[Paired Samples Statistics table: Mean, N, Std. Deviation, and Std. Error Mean for placebo and prozac — summaries for each sample of data (or variable)]

[Paired Samples Correlations table: correlation between the paired values — usually not useful]

Paired Samples Test (Pair 1: placebo − prozac):
difference = placebo − prozac
mean difference = 1.3
standard deviation of the differences = 4.5
standard error of the differences
95% confidence interval for the mean difference is −1.9 to 4.6

Paired t test:
Sig. (2-tailed) = two-sided p-value = 0.39
t = test statistic value = 0.904
df = degrees of freedom
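The syntax equivalent, using the variable names from the output (the order of the variables sets the direction of the difference, here placebo − prozac):

```
* Paired t test and 95% CI for the mean difference.
T-TEST PAIRS=placebo WITH prozac (PAIRED)
  /CRITERIA=CI(.95).
```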
Two-Sample t Test
1. Choose Analyze on the menu bar.
2. Choose Compare Means
3. Choose Independent-Samples T Test...
4. Test Variable(s): Select the test variable you want from the source list on the left and then click on the arrow located next to the Test Variable(s) box. Repeat the process until you have selected all the variables you want.
5. Grouping Variable: Select the variable which defines the groups and then click on the arrow located next to the Grouping Variable box.
6. Choose Define Groups...
7. Click on the blank box next to Group 1, then enter the code value (numeric or character/string) for group 1.
8. Click on the blank box next to Group 2, then enter the code value (numeric or character/string) for group 2.
9. Choose Continue
10. Choose OK

Confidence Interval for the Difference Between Means from Independent Samples
By default a 95% confidence interval for the difference between means from two independent samples will be computed when performing a two-sample t test. Choose Options... to change the confidence level.

Model Cities Example. Two groups of people were studied: those who had been randomly allocated to a Fee-For-Service medical insurance group and those who had been randomly allocated to a Prepaid insurance group. We would like to compare the two groups on the quality of health care they received in each group, but first we would like to know how comparable the groups are on other characteristics that might affect medical outcome. For example, we would like to know if the mean age in the two groups is similar. Hopefully, the process of random allocation minimizes this possibility, but there is always a chance that it didn't.

[Table: n, mean, and standard deviation of age for the Prepaid (GHC) and Fee-for-service (KCM) groups]

We could compare the average age between the two groups using a two-sample t test or a confidence interval for the difference between the average ages of the two groups.
43 40 Two sample t test and 95% confidence interval for the difference between means (from independent samples). After you select the Grouping Variable, SPSS will put in question marks to prompt you to define the code values for the two groups. Select Define Groups to enter the code values. In this example the group codes are numeric, 0 (for GHC) and 1 (for KCM) T-Test Group Statistics prov N Mean Std. Deviation Std. Error Mean age GHC KCM Summaries for each sample/group. Independent Samples Test age Levene's Test for Equality of Variances F Sig. Equal variances assumed Equal variances not assumed SPSS by default tests if the variances are equal using Levene s test. A small p-value (sig.) indicates the variances may be different. sig. = p-value = <.001 F = test statistic value = 47.0
[Independent Samples Test table, t-test for Equality of Means: Mean Difference, Std. Error Difference, t, df, and Sig. (2-tailed), with rows for equal variances assumed and not assumed]

Two-sample t test. SPSS by default always performs both versions of the two-sample t test, assuming equal variances and assuming unequal variances.

Sig. (2-tailed) = two-sided p-value = <.001 (equal var.), <.001 (unequal var.)
t = test statistic value = -4.2 (equal var.), -4.4 (unequal var.)
df = degrees of freedom = 4372 (equal var.), 2294 (unequal var.)
mean difference = difference between means = -2.4 (equal and unequal var.)
std. error difference = standard error of the difference between means = .6 (equal var.), .5 (unequal var.)

95% confidence interval for the difference between means is
-3.4 to -1.3 (assuming equal variances)
-3.4 to -1.3 (assuming unequal variances)
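The syntax equivalent, using the group codes from the example (prov: 0 = GHC, 1 = KCM):

```
* Two-sample t test (both equal- and unequal-variance versions)
* with a 95% CI for the difference between means.
T-TEST GROUPS=prov(0 1)
  /VARIABLES=age
  /CRITERIA=CI(.95).
```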
Sign Test and Wilcoxon Signed-Rank Test
1. Choose Analyze from the menu bar.
2. Choose Nonparametric Tests
3. Choose Legacy Dialogs
4. Choose 2 Related Samples...
5. Test Pair(s) List: Select the two paired variables you want from the source list on the left, and then click on the arrow in the middle of the window. The order in which you select the two variables will determine how the difference is computed. Repeat the process until you have selected all the paired variables you want to test.
6. Choose Sign and/or Wilcoxon as the Test Type.
7. Choose OK

Aspirin Example. To compare two types of aspirin, A and B, 1-hour urine samples were collected from 10 people after each had taken either A or B. A week later the same routine was followed after giving the other type to the same 10 people.

[Table of Type A and Type B measurements and their differences for the 10 people, with the mean (d-bar) and standard deviation (s_d) of the differences]

A Sign test or Wilcoxon signed-rank test could be used to compare the two types of aspirin.
The order of the variables in calculating the difference is determined by the order in which you selected the variables. The difference will be computed as Variable 2 − Variable 1 (which is the opposite of the paired t test).

Select Wilcoxon or Sign (or both). Under Options you can select the summaries Descriptive (n, mean, etc.) and Quartiles (median, 25th and 75th percentiles).

[Descriptive Statistics table: N, Mean, Std. Deviation, Minimum, Maximum, and the 25th, 50th (Median), and 75th percentiles for AspirinA and AspirinB]

Sign Test
Frequencies (AspirinB − AspirinA): Negative Differences (AspirinB < AspirinA) 8, Positive Differences (AspirinB > AspirinA) 1, Ties (AspirinB = AspirinA) 1, Total 10

Test Statistics: Exact Sig. (2-tailed) = .039 (Binomial distribution used)

Sign Test: Exact Sig. (2-tailed) = exact, two-sided p-value = .039. The p-value is exact because it is computed using the Binomial distribution instead of using an approximation to the Normal distribution. (Note that the exact p-value is reported only for small sample sizes.)
47 44 Wilcoxon Signed Ranks Test Ranks N Mean Rank Sum of Ranks aspirinb - aspirina Negative Ranks 8(a) Positive Ranks 1(b) Ties 1(c) Total 10 a aspirinb < aspirina b aspirinb > aspirina c aspirinb = aspirina Information used in the test statistic not usually reported; use the previous descriptives. Test Statistics(b) aspirinb - aspirina Z (a) Asymp. Sig. (2-tailed).015 a Based on positive ranks. b Wilcoxon Signed Ranks Test Wilcoxon Signed Rank Test Asymp. Sig. (2-tailed) = two sided p-value = Asymp. is an abbreviation for asymptotic, which means the p-value is computed using a large sample approximation based on the Normal distribution.
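Both paired nonparametric tests (with the optional summaries) can be requested in one NPAR TESTS command, using the variable names from the output:

```
* Sign test and Wilcoxon signed-rank test for paired data,
* with descriptive statistics and quartiles.
NPAR TESTS
  /SIGN=aspirina WITH aspirinb (PAIRED)
  /WILCOXON=aspirina WITH aspirinb (PAIRED)
  /STATISTICS=DESCRIPTIVES QUARTILES.
```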
48 45 Mann-Whitney U Test (or Wilcoxon Rank Sum Test) 1. Choose Analyze on the menu bar. 2. Choose Nonparametric Tests 3. Choose Legacy Dialogs 4. Choose 2 Independent Samples Test Variable(s): Select the test variable you want from the source list on the left and then click on the arrow located next to the test variable box. Repeat the process until you have selected all the variables you want. 6. Grouping Variable: Select the variable which defines the grouping and then click on the arrow located next to the grouping variable box. The grouping variable must be numeric for the variable to appear on the left hand side. 7. Choose Define Groups Click on the blank box next to group 1, then enter the code value (it must be numeric) for group Click on the blank box next to group 2, then enter the code value (it must be numeric) for group Choose Continue to return to Two Independent Samples dialog box. 11. Choose Mann-Whitney U as the Test Type. Note that the option may already be selected if the little box is not empty. 12. Choose OK Legionnaires Example. During July and August, 1976, a large number of Legionnaires attending a convention died of mysterious and unknown cause. Chen et al. (1977) examined the hypothesis of nickel contamination as a toxin. They examined the nickel levels in the lungs of nine cases and nine controls. There was no attempt to match cases and controls. The data are as follows (μg/100g dry weight): Legionnaire cases Controls The Mann Whitney U test could be used to compare the two groups. After you select the Grouping Variable, SPSS will put in question marks to prompt you to define the code values for the two groups. Select Define Groups to enter the code values. Note: The codes must be numeric, otherwise the grouping variable will not appear on the left hand side.
In this example the group codes are 1 for Legionnaires and 2 for controls.

Mann-Whitney Test
[Ranks table: N, Mean Rank, and Sum of Ranks for nickel in each group (Total 18) — information used in the test statistic, not usually reported. The descriptives under Options are not useful; you can produce relevant descriptives (e.g., median and interquartile range for each group) using the Explore command.]

Test Statistics: Mann-Whitney U, Wilcoxon W, Z; Asymp. Sig. (2-tailed) = .001; Exact Sig. [2*(1-tailed Sig.)] = .000 (not corrected for ties); Grouping Variable: group

Mann-Whitney test:
Asymp. Sig. (2-tailed) = two-sided p-value = .001. This p-value is computed based on a large-sample approximation to the Normal distribution, and it corrects for ties in the data, if present.
Exact Sig. [2*(1-tailed Sig.)] = two-sided p-value = <.001. This p-value is an exact p-value, but it does not correct for ties in the data, if present. In this example, given the small sample sizes and few ties in the data, the exact p-value would be appropriate to report.
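The syntax equivalent, using the variable and group codes from the example (group: 1 = Legionnaires, 2 = controls):

```
* Mann-Whitney U test (Wilcoxon rank sum test) for two
* independent samples.
NPAR TESTS
  /M-W=nickel BY group(1 2).
```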
One-Way ANOVA (Analysis of Variance) (e.g., to compare two or more means from two or more independent samples)
1. Choose Analyze on the menu bar
2. Choose Compare Means
3. Choose One-Way ANOVA...
4. Dependent List: Select the variable from the source list on the left that you want to use to compare the groups, and then click on the arrow next to the Dependent List box. You can run multiple one-way ANOVAs by selecting more than one dependent variable.
5. Factor: Select the variable from the source list on the left which defines the groups.
6. Choose OK

To perform pairwise comparisons to determine which groups are different while controlling for multiple testing, use the Post Hoc... option. There are many methods to choose from (e.g., Bonferroni and R-E-G-W-Q). Other useful options can be found under Options... For example, choose Descriptive to get descriptive statistics for each group (e.g., mean, standard deviation, minimum value, and maximum value). Choose Homogeneity-of-variance to perform the Levene test of whether the group variances are all equal versus not all equal. A small p-value for the Levene test may indicate that the variances are not all equal.

CHD Example. We can use one-way ANOVA to compare HDL levels between subjects with different hypertensive status (0 = normotensive, 1 = borderline, 2 = definite).

[Table: n, mean, and standard deviation of HDL for the normotensive, borderline, and definite hypertensive groups]

You can select one or more variables to compare between groups. The variable selected as the Factor defines the groups. The variable can be numeric or character/string.
51 48 Oneway ANOVA HDL cholesterol Sum of Squares df Mean Square F Sig. Between Groups Within Groups Total One-way analysis of variance Sig. = p-value = <.001 F = test statistic = 9.0; df = degrees of freedom Sometimes the test statistic and degrees of freedom of the test statistics are reported along with the p-value; in this example, F=9.0 with degrees of freedom 2 and Sum of squares and mean square are used to compute the test statistic; they are usually not reported. Descriptives Under Options you can request Descriptives for each group to be computed. This information can be used to describe the differences between the groups. HDL cholesterol N Mean Std. Deviation Std. Error 95% Confidence Interval for Mean Minimum Maximum Lower Bound Upper Bound normotensive borderline definite Total
52 49 Post Hoc Tests Under Post Hoc you can request further comparisons be done between each of the possible pair of groups to determine which groups are different from each other. These are multiple comparison procedures, which control for the number of tests/comparison being performed. There are many methods to choose from; below is an example of the Bonferroni method and Ryan-Einot-Gabriel-Welsch method. Multiple Comparisons Dependent Variable: HDL cholesterol (I) (J) Hypertension Hypertension status status Mean Difference (I-J) Std. Error Sig. 95% Confidence Interval Lower Bound Upper Bound Bonferroni normotensive borderline definite 2.356(*) borderline normotensive definite 2.198(*) definite normotensive (*) borderline (*) * The mean difference is significant at the.05 level. The Bonferroni method is a method that shows all pairwise comparisons/differences along with a p-value (sig.) adjusted for the number of comparisons. In this example, subjects with normal blood pressure and borderline hypertension have similar HDL cholesterol levels, but subjects with definite hypertension have different HDL cholesterol levels than both subjects with normal blood pressure and borderline hypertension. Homogeneous Subsets HDL cholesterol Subset for alpha =.05 Hypertension status N 1 2 Ryan-Einot-Gabriel- definite Welsch Range borderline normotensive Sig Means for groups in homogeneous subsets are displayed. The Ryan-Einot-Gabriel-Welsch (R-E-G-W-Q) method is a method that groups together groups that are similar in the same subset and groups that are different are in different subsets. In this example, subjects with normal blood pressure and borderline hypertension are in one subset and subjects with definite hypertension are in a different subset. 
Hence, subjects with definite hypertension have different HDL cholesterol levels than subjects with normal blood pressure and borderline hypertension, but subjects with normal blood pressure and borderline hypertension have similar HDL cholesterol levels.
Kruskal-Wallis Test

1. Choose Analyze on the menu bar.
2. Choose Nonparametric Tests.
3. Choose Legacy Dialogs.
4. Choose K Independent Samples...
5. Test Variable(s): Select the test variable you want from the source list on the left and then click on the arrow located next to the test variable box. Repeat the process until you have selected all the variables you want to test.
6. Grouping Variable: Select the variable which defines the grouping and then click on the arrow located next to the grouping variable box.
7. Choose Define Range...
8. Click on the blank box next to Minimum, then enter the smallest numeric code value for the groups.
9. Click on the blank box next to Maximum, then enter the largest numeric code value for the groups.
10. Choose Continue.
11. Choose Kruskal-Wallis H as the Test Type. Note that the option may already be selected if the little box is not empty.
12. Choose OK.

CAUTION: The grouping variable must be numeric, and you must correctly enter the smallest and largest numeric code values. SPSS will not allow you to select a character/string variable as the grouping variable, but it will allow you to incorrectly enter the numeric code values. The results displayed for the Kruskal-Wallis test in that case will be incorrect, but no error or warning message will be displayed.

CHD Example. We can use the Kruskal-Wallis test to compare serum insulin levels between subjects with different hypertensive status (0=normotensive, 1=borderline, 2=definite).

Hypertensive Group     n     Median     IQR*
Normotensive                            , 15
Borderline                              , 17
Definite                                , 20
*IQR, interquartile range = 25th percentile, 75th percentile
54 51 Kruskal Wallis test You can select 1 or more variables to compare between groups. The variable selected as the Grouping Variable defines the groups. THE VARIABLE SHOULD BE NUMERIC. In this example the smallest numeric code is 0 (for normal) and the largest numeric code is 2 (for definite). Kruskal-Wallis Test Ranks Hypertension status N Mean Rank Serum insulin normotensive borderline definite Total 3425 Test Statistics(a,b) Information used in the test statistic not usually reported. The descriptives under Options are not useful; you can produce relevant descriptives (e.g. median and interquartile range for each group) using the Explore command. Serum insulin Chi-Square df 2 Asymp. Sig..000 a Kruskal Wallis Test b Grouping Variable: Hypertension status Kruskal Wallis test Asymp. Sig. = p-value = <.001 Asymp. is an abbreviation for asymptotic, which means the p-value is computed using a large sample approximation based on the Normal distribution. Chi-Square = test statistic value = Df = degrees of freedom = 2
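As a syntax sketch (assuming the variables are named insulin and htnstatus, coded 0 to 2 as in the example):

```spss
* Kruskal-Wallis test of serum insulin across the three hypertensive
* status groups; the range 0 2 matches the smallest and largest codes.
NPAR TESTS
  /K-W= insulin BY htnstatus(0 2).
```

As with the dialogs, the group range must be entered correctly or the results will be wrong without warning.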
One-Sample Binomial Test

1. Choose Analyze from the menu bar.
2. Choose Nonparametric Tests.
3. Choose Legacy Dialogs.
4. Choose Binomial...
5. Test Variable List: Select the test variable you want from the source list on the left and then click on the arrow located next to the test variable box. Repeat the process until you have selected all the variables you want.
6. Test Proportion: Click on the box next to Test Proportion and enter/edit the proportion value specified by your null hypothesis.
7. Choose OK.

Example. In the TRAP study, 125 of the 527 patients who were negative for lymphocytotoxic antibodies at baseline became antibody positive. The expected rate for being antibody positive is 30%. We could use the one-sample binomial test to test if the rate is different in the TRAP study population. Outcome is a variable coded 1 if positive and 0 if negative.

Make sure to edit the test proportion value; in this case it is .30 or 30%. The default is .50.

NPar Tests
Binomial Test
                     Category     N     Observed Prop.     Test Prop.     Exact Sig. (1-tailed)
Outcome  Group 1     No
         Group 2     Yes
         Total

One-sample binomial test: the two-sided p-value is given by 2 x .001 = .002. (Note: SPSS reports the one-sided p-value.)
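The equivalent syntax sketch, assuming the variable is named outcome as in the example:

```spss
* One-sample binomial test that the proportion in the first category
* of outcome equals .30 (the null hypothesis value).
NPAR TESTS
  /BINOMIAL (.30)= outcome.
```

Remember that SPSS reports a one-sided p-value here; double it for a two-sided test.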
McNemar's Test

1. Choose Analyze from the menu bar.
2. Choose Descriptive Statistics.
3. Choose Crosstabs...
4. Row(s): Select the row variable you want from the source list on the left and then click on the arrow located next to the Row(s) box. Repeat the process until you have selected all the row variables you want.
5. Column(s): Select the column variable you want from the source list on the left and then click on the arrow located next to the Column(s) box. Repeat the process until you have selected all the column variables you want.
6. Choose Cells...
7. For cell values choose Total under Percentages.
8. Choose Continue.
9. Choose Statistics...
10. Choose McNemar.
11. Choose Continue.
12. Choose OK.

There is also another way to run McNemar's test (but the paired test variables must be numeric):

1. Choose Analyze from the menu bar.
2. Choose Nonparametric Tests.
3. Choose Legacy Dialogs.
4. Choose 2 Related Samples...
5. Test Pair(s) List: Select the two paired variables you want from the source list on the left, highlight both variables by pointing and clicking the mouse, and then click on the arrow located in the middle of the window. Repeat the process until you have selected all the paired variables you want.
6. Choose McNemar as the Test Type.
7. Unselect Wilcoxon to turn off the option. Note that the option is turned off when the little box is empty.
8. Choose OK.

Example. Suppose we want to compare two different treatments for a rare form of cancer. Since relatively few cases of this disease are seen, we want the two treatment groups to be as comparable as possible. To accomplish this goal, we set up a matched study such that a random member of each matched pair gets treatment A (chemotherapy), whereas the other member gets treatment B (surgery). The patients are assigned to pairs (621 pairs) matched on age (within 5 years), sex, and clinical condition. The patients are followed for 5 years, with survival as the outcome variable.
The 5-year survival rate for treatment A is 17.1% (106/621) and for treatment B is 15.3% (95/621). We could use McNemar s test to compare the survival rate of the two treatments.
McNemar's test. It doesn't matter for McNemar's test which variable is selected for the Row(s) or Column(s). You can run more than one test at a time. Under Statistics select McNemar. Under Cells, in this example, select Total under Percentages.

Crosstabs
TreatmentA * TreatmentB Crosstabulation
                                        TreatmentB
                                   died       survived     Total
TreatmentA  died      Count
                      % of Total   82.1%      .8%          82.9%
            survived  Count
                      % of Total   2.6%       14.5%        17.1%
Total                 Count
                      % of Total   84.7%      15.3%        100.0%

Survival rate for Treatment A is 17.1%; survival rate for Treatment B is 15.3%.

Chi-Square Tests
                     Value     Exact Sig. (2-sided)
McNemar Test                   .027(a)
N of Valid Cases     621
a Binomial distribution used.

McNemar's test: Exact Sig. (2-sided) = exact two-sided p-value = .027. The p-value is exact because it is computed using the Binomial distribution instead of an approximation to the Normal distribution.
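The Crosstabs route above can be sketched in syntax as follows (variable names TreatmentA and TreatmentB are taken from the example output):

```spss
* McNemar's test for the paired 5-year survival outcomes,
* with counts and total percentages in each cell.
CROSSTABS
  /TABLES= TreatmentA BY TreatmentB
  /CELLS= COUNT TOTAL
  /STATISTICS= MCNEMAR.
```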
Chi-square Test, Fisher's Exact Test and Trend Test for Contingency Tables

If the Chi-square test is requested for a 2 x 2 table, SPSS will also compute Fisher's Exact test. If the Chi-square test is requested for a table larger than 2 x 2, SPSS will also compute the Mantel-Haenszel test for linear (linear-by-linear) association between the row and column variables.

1. Choose Analyze from the menu bar.
2. Choose Descriptive Statistics.
3. Choose Crosstabs...
4. Row(s): Select the row variable you want from the source list on the left and then click on the arrow located next to the Row(s) box. Repeat the process until you have selected all the row variables you want.
5. Column(s): Select the column variable you want from the source list on the left and then click on the arrow located next to the Column(s) box. Repeat the process until you have selected all the column variables you want.
6. Choose Cells...
7. Choose the cell values (e.g., observed and expected counts; row, column, and margin (total) percentages). Note the option is selected when the little box is not empty.
8. Choose Continue.
9. Choose Statistics...
10. Choose Chi-square.
11. Choose Continue.
12. Choose OK.

Asthma Example. An investigator studied the relationship of parental smoking habits and the presence of asthma in the oldest child. Type A families are defined as those in which both parents smoke and Type B families are those in which neither parent smokes. Of 100 Type A families, 15 eldest children have asthma, and of 200 Type B families, 6 children have asthma. We could use a chi-square test or Fisher's exact test to test whether the proportion of first-born children with asthma is different in these two types of families.

It doesn't matter for the chi-square, Fisher's Exact or trend test which variable is selected for the Row(s) or Column(s). You can run more than one test at a time.
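The steps above can be sketched in syntax (variable names familytype and asthma are taken from the example output):

```spss
* Chi-square test for family type by asthma status; because this is
* a 2 x 2 table, SPSS also reports Fisher's exact test.
CROSSTABS
  /TABLES= familytype BY asthma
  /CELLS= COUNT ROW
  /STATISTICS= CHISQ.
```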
Under Statistics select Chi-square. Under Cells, in this example, select Row percentages.

Crosstabs
familytype * asthma Crosstabulation
                                            Asthma
                                        No        Yes       Total
FamilyType  A     Count                 85        15        100
                  % within familytype   85.0%     15.0%     100.0%
            B     Count                 194       6         200
                  % within familytype   97.0%     3.0%      100.0%
Total             Count                 279       21        300
                  % within familytype   93.0%     7.0%      100.0%

Chi-Square Tests
                            Value    df    Asymp. Sig. (2-sided)    Exact Sig. (2-sided)    Exact Sig. (1-sided)
Pearson Chi-Square          (b)
Continuity Correction(a)
Likelihood Ratio
Fisher's Exact Test
N of Valid Cases            300
a Computed only for a 2x2 table
b 0 cells (.0%) have expected count less than 5.

15% of first-born children in family type A have asthma; 3% of first-born children in family type B have asthma.

Fisher's Exact test: Exact Sig. (2-sided) = exact two-sided p-value = <.001.

Chi-square test: Pearson Chi-square (without continuity correction), p-value = <.001; Pearson Chi-square with continuity correction, p-value = <.001. Asymp. Sig. (2-sided) = two-sided p-value. Asymp. is an abbreviation for asymptotic, which means the p-value is computed using a large-sample approximation based on the Normal distribution. Check that all cells have expected cell counts of 5 or greater. Value = test statistic value; df = degrees of freedom.
Trend Test Example. A clinical trial of a drug therapy to control pain was performed. The investigators wanted to investigate whether adverse responses to the drug increased with larger drug doses. Subjects received either a placebo or one of four drug doses. In this example dose is an ordinal variable, and it is reasonable to expect that as the dose increases the rate of adverse events will also increase.

Dose        n      Adverse event % (n)
Placebo     32     18.8% (6)
500 mg      32     21.9% (7)
1000 mg     32     28.1% (9)
2000 mg     32     31.3% (10)
4000 mg     32     50.0% (16)

There are several different methods for performing a trend test with ordinal variables. One test which is available in SPSS is the Mantel-Haenszel chi-square, also called the Mantel-Haenszel test for linear association or the linear-by-linear association chi-square test.

                               Adverse events
                               No        Yes       Total
dose   0      Count            26        6         32
              % within dose    81.3%     18.8%     100.0%
       500    Count            25        7         32
              % within dose    78.1%     21.9%     100.0%
       1000   Count            23        9         32
              % within dose    71.9%     28.1%     100.0%
       2000   Count            22        10        32
              % within dose    68.8%     31.3%     100.0%
       4000   Count            16        16        32
              % within dose    50.0%     50.0%     100.0%
Total         Count            112       48        160
              % within dose    70.0%     30.0%     100.0%

Chi-Square Tests
                               Value       df    Asymp. Sig. (2-sided)
Pearson Chi-Square             9.107(a)
Likelihood Ratio
Linear-by-Linear Association                     .003
N of Valid Cases               160
a 0 cells (.0%) have expected count less than 5. The minimum expected count is 9.60.

In this example, there is a significant trend (p-value = 0.003, chi-square trend test), and we would conclude that the rate of adverse responses increases with drug dose.
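The linear-by-linear association statistic is part of the regular CHISQ output, so the trend test needs no extra subcommand; the dose variable must be coded as an ordered numeric variable. A syntax sketch, with variable names dose and adverse assumed for illustration:

```spss
* Chi-square output for an ordinal-by-binary table; the
* Linear-by-Linear Association row is the Mantel-Haenszel trend test.
CROSSTABS
  /TABLES= dose BY adverse
  /CELLS= COUNT ROW
  /STATISTICS= CHISQ.
```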
Using Standardized Residuals in R x C Tables

When a contingency table has more than 2 rows and 2 columns it can be hard to determine the association or the largest differences. Standardized residuals are often helpful in describing the association, if the chi-square test indicates there is a statistically significant association. The (adjusted) standardized residual re-expresses the difference between the observed cell count and the expected cell count in terms of standard deviation units below or above the value 0 (the expected difference if there is no association), and the distribution of the standardized residuals has a standard Normal distribution. Hence, values less than -2 or greater than 2 indicate large differences, and values less than -3 or greater than 3 indicate very large differences.

Under Cells, select Adjusted standardized for Residuals.

Education vs Stage of Disease at Diagnosis Example. The chi-square test indicated a significant association between education level and stage of disease at diagnosis (chi-square test, p-value = 0.016).

                                              Stage of Disease
Education                               I         II        III or IV
12 years           Count
                   % within education   25.3%     30.4%     44.3%
                   Adjusted Residual
College            Count
                   % within education   40.2%     34.8%     25.0%
                   Adjusted Residual
College graduate   Count
                   % within education   44.4%     32.2%     23.3%
                   Adjusted Residual

The adjusted standardized residuals indicate that the biggest differences between the observed and expected cell counts (i.e., the most unusual differences under the assumption of no association between education and stage of disease) are for subjects with 12 years of education, where there are fewer subjects with Stage I and more subjects with Stage III or IV than expected if there were no association between education and stage of disease. Also, to a lesser extent, among the subjects with a college graduate degree there are more subjects with Stage I and fewer subjects with Stage III or IV than expected if there were no association between education and stage of disease.
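In syntax, the adjusted standardized residuals are requested with the ASRESID keyword on the CELLS subcommand (variable names education and stage are assumed for illustration):

```spss
* Chi-square test with adjusted standardized residuals printed
* in each cell of the education by stage table.
CROSSTABS
  /TABLES= education BY stage
  /CELLS= COUNT ROW ASRESID
  /STATISTICS= CHISQ.
```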
One sample binomial test, McNemar's test, Fisher's Exact test and Chi-square test for 2 x 2 and R x C Contingency Tables Using Summary Data

There is an easy way in SPSS to perform a one sample binomial test, a McNemar's test, a Fisher's Exact test or a Chi-square test for a 2 x 2 or R x C table when you only have summary data (i.e., the number of observations in each cell).

One sample binomial test. Suppose you observe 15 cases of myocardial infarction (MI) in 5000 men over a 1-year period and you want to test if the rate of MI is equal to a previously reported incidence rate of 5 per 1000 (or 0.005).

1. In a new (empty) SPSS Data Editor window enter the following 2 rows of data:

   MI     Observed
   1      15
   0      4985

The values of 0 and 1 used to indicate MI (no/yes) are arbitrary. The variable names are also arbitrary (e.g., you can leave them as var0001 and var0002).

2. Next, you want to weight cases by Observed:
   Choose Data
   Choose Weight Cases...
   Choose Weight cases by
   Choose Observed and then the arrow button so the variable appears in the Frequency Variable box.
   Choose OK

3. Now, run the one sample binomial test:
   Choose Analyze
   Choose Nonparametric Tests
   Choose Binomial...
   Choose MI so that it appears in the Test Variable List
   Change (edit) Test Proportion to .005.
   Choose OK
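The whole summary-data procedure can also be written as a short syntax sketch, using the counts from the example (15 MI cases, 4985 without):

```spss
* Enter the summary counts, weight cases by the count variable,
* then run the one-sample binomial test against .005.
DATA LIST LIST / mi observed.
BEGIN DATA
1 15
0 4985
END DATA.
WEIGHT BY observed.
NPAR TESTS
  /BINOMIAL (.005)= mi.
```

WEIGHT BY stays in effect for subsequent procedures, so turn it off with WEIGHT OFF. when you are done.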
63 60 McNemar's test. Suppose you have the following summary table of presence and absence of DKA before and after therapy for paired data, Before therapy After therapy No DKA DKA No DKA DKA In a new (empty) SPSS Data Editor window enter the following 4 rows of data: Before After Observed The values of 0 and 1 used to indicate DKA and no DKA are arbitrary. The variable names are also arbitrary (e.g., you can leave them as var0001, var0002, and var0003). 2. Next, you want to weight cases by Observed: Choose Data Choose Weight Cases... Choose Weight cases by Choose Observed and then the arrow button so the variable appears in the Frequency variable box. Choose OK 3. Now, run McNemar's test: Choose Analyze Choose Nonparametric Tests Choose 2 Related Samples... Choose Before and After so that they appear in the Test Pair(s) List. Choose McNemar as the Test Type Choose Wilcoxon to turn off the option Choose OK
Chi-square test and Fisher's Exact test for a 2 x 2 table. Suppose you have the following summary table for oral contraceptive (OC) use by presence or absence of cancer (case or control):

                        OC Use
                   No        Yes
Cases (cancer)
Controls

1. In a new (empty) SPSS Data Editor window enter the following 4 rows of data (one row per cell of the table, with Observed equal to the cell count):

   Case     OCuse     Observed

The values of 0 and 1 used to indicate case/control and OC use (no/yes) are arbitrary. The variable names are also arbitrary (e.g., you can leave them as var0001, var0002, and var0003).

2. Next, you want to weight cases by Observed:
   Choose Data
   Choose Weight Cases...
   Choose Weight cases by
   Choose Observed and then the arrow button so the variable appears in the Frequency Variable box.
   Choose OK

3. Now, run the Chi-square (& Fisher's Exact) test:
   Choose Analyze
   Choose Descriptive Statistics
   Choose Crosstabs...
   Choose Case and OCuse as the row and column variables
   Choose Statistics...
   Choose Chi-square
   Choose Continue
   Choose OK
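As a syntax sketch of the same idea; note the four cell counts (10, 20, 30, 40) below are hypothetical placeholders, not the values from the OC example:

```spss
* Summary-data chi-square / Fisher's exact test for a 2 x 2 table.
* The counts 10, 20, 30, 40 are hypothetical placeholders.
DATA LIST LIST / case ocuse observed.
BEGIN DATA
1 1 10
1 0 20
0 1 30
0 0 40
END DATA.
WEIGHT BY observed.
CROSSTABS
  /TABLES= case BY ocuse
  /STATISTICS= CHISQ.
```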
65 62 The commands are similar for running the Chi-square test for tables larger than 2x 2. Suppose you have the following summary table for education level by stage of disease at diagnosis Stage of Disease Education level I II III or IV High school or less College College graduate In a new (empty) SPSS Data Editor window enter the following 9 rows of data: Educ Stage Observed The values used to indicate education level and stage are arbitrary, and the variable names are also arbitrary. Follow steps 2. and 3. on the previous page (except use variables Educ and Stage, instead of Case and OCuse).
Confidence Interval for a Proportion

Constructing a confidence interval for a proportion or rate is rather awkward in SPSS, but you can do it with the raw data or with summary data (as long as the sample size is large enough to use the Normal approximation methods for binomial data). To construct a confidence interval using the raw data you need 1) a binary indicator variable equal to 1 if the characteristic is present for a subject and equal to 0 if it is absent, and 2) a variable that is equal to 1 for all subjects.

For example, suppose you want to construct a confidence interval for the proportion of males in your data set. First you need a binary indicator variable for males; e.g., you could have a variable named Gender which is equal to 1 if the subject is a male and equal to 0 if the subject is a female. Second you need to create a variable that is equal to 1 for all subjects (e.g., use the Compute statement and create a variable Allones = 1). Now,

1. Choose Analyze on the menu bar.
2. Choose Descriptive Statistics.
3. Choose Ratio...
4. Numerator: Select the binary indicator variable from the source list on the left and then click on the arrow located in the middle of the window (e.g., select Gender).
5. Denominator: Select the variable equal to 1 for all subjects from the source list on the left and then click on the arrow located in the middle of the window (e.g., select Allones).
6. Choose Statistics...
7. Choose Mean under Central Tendency.
8. Choose Confidence intervals (the default is a 95% confidence interval).
9. Choose Continue.
10. Choose OK.

To illustrate how you would construct a confidence interval with summary data, suppose in a data set of 3425 subjects, 1341 are males and 2084 are females:

1. In a new (empty) SPSS Data Editor window enter the following 2 rows of data:

   Gender     Observed     Allones
   1          1341         1
   0          2084         1

2. Next, you want to weight cases by Observed:
   Choose Data
   Choose Weight Cases...
Choose Weight cases by Choose Observed and then the arrow button so the variable appears in the Frequency variable box. Choose OK
67 64 3. Now, Choose Analyze on the menu bar Choose Descriptive Statistics Choose Ratio... Numerator: Select Gender Denominator: Select Allones Choose Statistics... Choose both Mean and Confidence intervals under Central Tendency Choose Continue Choose OK Example of the SPSS output using the previous summary data. Ratio Statistics Ratio Statistics for Gender / Allones 95% Confidence Interval for Mean Coefficient of Variation Price Related Coefficient of Median Mean Lower Bound Upper Bound Differential Dispersion Centered % The confidence intervals are constructed by assuming a Normal distribution for the ratios. The observed proportion was.392 or 39.2%. A 95% confidence interval is 37.5% to 40.8%.
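A syntax sketch of the summary-data version; the RATIO STATISTICS PRINT keywords (MEAN, CIN with the confidence level) are my best reading of the command and may vary slightly by SPSS version:

```spss
* 95% CI for the proportion of males (1341 of 3425) via Ratio statistics.
DATA LIST LIST / gender observed allones.
BEGIN DATA
1 1341 1
0 2084 1
END DATA.
WEIGHT BY observed.
RATIO STATISTICS gender WITH allones
  /PRINT = MEAN CIN(95).
```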
Correlation & Regression

Pearson and Spearman Rank Correlation Coefficient

1. Choose Analyze on the menu bar.
2. Choose Correlate.
3. Choose Bivariate...
4. Variable(s): Select the variables from the source list on the left and then click on the arrow located in the middle of the window.
5. Choose Pearson and/or Spearman as the Correlation Coefficients. Note that the option is selected if the box has a check mark in it.
6. Choose Two-tailed as the Test of Significance. SPSS will test whether the correlation is equal to zero versus not equal to zero.
7. Choose OK.

Note that you can use the Crosstabs command to calculate confidence intervals for the correlation.

Example. Pain-related beliefs, catastrophizing, and coping have been shown to be associated with measures of physical and psychosocial functioning among patients with chronic musculoskeletal and rheumatologic pain. However, little is known about the relative importance of these process variables in the functioning of patients with temporomandibular disorders (TMD). Correlation coefficients could be calculated to examine the association between catastrophizing, depression (Beck Depression Inventory), pain-related activity interference, and jaw opening (maximum assisted opening). (Reference: JA Turner, SF Dworkin, L Mancl, KH Huggins, EL Truelove. The roles of beliefs, catastrophizing, and coping in the functioning of patients with temporomandibular disorders. Pain, 92, 41-51.)

Typically, you would only report either the Pearson or Spearman (rank) correlation coefficients, but you might calculate both to see if you get different results or conclusions. The correlations are shown on the next page. Note that SPSS will display the correlation between variable 1 and variable 2 and between variable 2 and variable 1, which are equivalent, and similarly the correlations between all possible pairs of variables. So, all results displayed below the diagonal of the matrix of results are redundant.
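The steps above can be sketched in syntax; the variable names below are assumptions standing in for the four TMD measures:

```spss
* Pearson correlations with two-tailed p-values.
CORRELATIONS
  /VARIABLES= catastrophizing beckscore interference maxopening
  /PRINT= TWOTAIL.
* Spearman rank correlations for the same variables.
NONPAR CORR
  /VARIABLES= catastrophizing beckscore interference maxopening
  /PRINT= SPEARMAN TWOTAIL.
```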
69 66 Correlations 1 st entry = Pearson correlation coefficient 2 nd entry = Sig. (2-tailed) = p-value 3 rd entry = N = the number observations or subjects with non-missing data for both variables Correlations Beck inventory score Interference Maximum assisted opening Catastroph izing Catastroph Pearson Correlation 1.602(**).451(**) izing Sig. (2-tailed) N Beck inventory Pearson Correlation.602(**) 1.445(**) score Sig. (2-tailed) N Interference Pearson Correlation.451(**).445(**) Sig. (2-tailed) N Maximum Pearson Correlation assisted Sig. (2-tailed) opening N ** Correlation is significant at the 0.01 level (2-tailed). Nonparametric Correlations Correlations Correlation between Catastrophizing and Interference =.45 P-value = <.001 N = 118 subjects 1 st entry = Spearman rank correlation coefficient 2 nd entry = Sig. (2-tailed) = p-value 3 rd entry = N = the number observations or subjects with non-missing data for both variables Spearman's rho Catastrophizing Catastrophizing Beck inventory score Beck inventory score Interference Maximum assisted opening Correlation Coefficient (**).451(**) Sig. (2-tailed) N Correlation (**) (**) Coefficient Sig. (2-tailed) Rank correlation between Catastrophiz -ing and Interference =.45 Interference N Correlation Coefficient.451(**).455(**) Sig. (2-tailed) P-value = <.001 Maximum assisted opening N Correlation Coefficient Sig. (2-tailed) N = 118 subjects N ** Correlation is significant at the 0.01 level (2-tailed).
Confidence Interval for a Correlation Coefficient

Typically the Crosstabs command is used to produce contingency tables for categorical variables. One of the options under Statistics computes the correlation coefficient, which you might want to calculate for ordinal variables; however, you can also use this option for quantitative variables. The Crosstabs command is found by selecting Analyze and then Descriptive Statistics. In this example the correlation between the quantitative variables catastrophizing and interference will be calculated. Select Statistics and then select Correlations. SPSS will produce a contingency table of the crosstabulation of the two variables, which you can ignore. SPSS will display the correlation coefficient and a standard error estimate for the correlation coefficient, which can be used to calculate confidence intervals.

Symmetric Measures
                                              Value      Asymp. Std. Error(a)     Approx. T(b)     Approx. Sig.
Interval by Interval    Pearson's R           .451(c)    .068
Ordinal by Ordinal      Spearman Correlation  .451(c)    .076
N of Valid Cases        118
a Not assuming the null hypothesis.
b Using the asymptotic standard error assuming the null hypothesis.
c Based on normal approximation.

An approximate 95% confidence interval for the correlation coefficient is given by

Correlation coefficient ± 1.96 x Asymp. Std. Error

In this example, a 95% confidence interval for the Pearson correlation coefficient is given by .451 ± 1.96 x .068, or .31 to .58; a 95% confidence interval for the Spearman rank correlation coefficient is given by .451 ± 1.96 x .076, or .30 to .60.
Linear Regression

1. Choose Analyze on the menu bar.
2. Choose Regression.
3. Choose Linear...
4. Dependent: Select the dependent variable from the source list on the left and then click on the arrow next to the dependent variable box.
5. Independent(s): Select the independent variable and then click on the arrow next to the independent variable(s) box. Repeat the process until you have selected all the independent variables you want.
6. Choose Statistics...
7. Choose Estimates. SPSS will print the regression coefficient estimate, standard error, t statistic, and p-value for each independent variable (as well as the intercept/constant). By default the option should be selected (i.e., the box has a check mark in it).
8. Choose Model fit. SPSS will print the multiple R, R squared, adjusted R squared, standard error of the regression line, and the ANOVA table. By default the option should be selected.
9. Choose Continue.
10. Choose Enter as the Method. Enter is the default method for independent variable entry. Other methods of variable entry can be selected by clicking on the down arrow and clicking on the desired method of entry.
11. Choose OK.

Additional options are available under Statistics..., Plots..., Save..., Method, and Options... For example:

Statistics...
Estimates. Default option, which prints the usual linear regression results.
Model fit. Default option, which prints the usual linear regression results.
Confidence intervals (for the regression coefficient estimates).
Covariance matrix (and correlation matrix for the regression coefficient estimates).
R squared change. If independent variables are entered in blocks (using the Block option; see below), this option computes the change in the R squared between models with different blocks of independent variables.
It is also useful for computing a partial F test for a categorical variable with more than two categories, by entering the indicator variables for the categorical variable in the second block (Block 2 of 2) and all other independent variables in the first block (Block 1 of 2), and using the R squared change option.

Part and Partial Correlations. This option computes the Pearson correlation coefficient between the dependent variable and each independent variable (zero-order correlation), and the correlation coefficient between the dependent variable and an independent variable after controlling for all the other independent variables in the regression model (partial correlation). Squaring the partial correlation gives you the partial R-squared for an independent variable. This option also computes a part correlation, which is the correlation between the dependent variable and an independent variable after (only) the independent variable has been adjusted for all the other independent variables in the regression model. The square of the part correlation is equal to the change in the R-squared when an independent variable is added to the regression model containing all the other independent variables.
(Multi-)Collinearity diagnostics. This option computes various statistics for detecting collinearity between the independent variables. For example, Tolerance is the proportion of a variable's variance not accounted for by the other independent variables in the equation. A variable with a very low tolerance contributes little information to a model and can cause computational problems. Another statistic is the VIF (variance inflation factor); large values are an indicator of multicollinearity between independent variables.

Plots... which are useful for doing regression diagnostics:
Histogram or Normal Probability Plot (P-P plot) of the standardized residuals.
Produce all partial (residual) plots.
Other scatter plots.

Save... which produces variables that are useful for doing regression diagnostics:
Predicted Values (unstandardized, standardized, adjusted).
Residuals (unstandardized, standardized, studentized, deleted).
Distances (Mahalanobis, Cook's, Leverage).
Influence Statistics (dfbeta, dffit).
Note that SPSS creates a new variable for each selected Save... option and adds the new variables to the data file. The variable names are defined in the Variable View of the Data Editor. Once you are done using these variables you may want to delete them from the data file or save them (by re-saving the data file).

Method. Click on the down arrow to the right of Method to display the methods available for independent variable entry (enter, stepwise, remove, backward, forward). Enter is the default option. With the other options you enter independent variables into the model using various stepwise methods.

Options... You can modify the entry and removal criteria used by the stepwise, remove, backward, and forward independent variable entry methods. You can also define how observations with missing data are handled.

Previous, Block # of #, Next. You can use these options to enter independent variables in blocks into the regression model.
You can select different methods of variable entry for each block. This option is also useful for computing partial F tests with the R squared change option.
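As a sketch of the block approach in syntax, each /METHOD subcommand acts as one block, and the CHA keyword on /STATISTICS requests the R squared change test. Here diab1 and diab2 are hypothetical indicator variables for a 3-category variable:

```
* Sketch: partial F test via blocks and the R squared change (CHA) statistic.
* diab1 and diab2 are hypothetical indicator variables.
REGRESSION
  /STATISTICS COEFF R ANOVA CHA
  /DEPENDENT fev1
  /METHOD=ENTER height age
  /METHOD=ENTER diab1 diab2.
```

The F test for the R squared change between the two blocks is the partial F test for the categorical variable.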
Example. Simple linear regression of forced expiratory volume in 1 second on height (cm). The dependent variable in this example is forced expiratory volume (fev1). There is only 1 independent variable in this example, height. Additional options can be found under Statistics, Plots, Save, & Options.

Here are the Statistics options. Usually you want the default options Estimates and Model fit selected. In this example, the 95% confidence interval for the regression coefficients is also selected.

Here are the Plots options. By default no options are selected. In this example, the normal probability plot of the residuals is requested.
Regression

Variables Entered/Removed(b)
Model  Variables Entered  Variables Removed  Method
1      height(a)          .                  Enter
a All requested variables entered.  b Dependent Variable: fev1

Information on the independent variables and dependent variable in the regression model, and the method of entering the independent variables into the regression model.

R Square = proportion of the total variation in the dependent variable explained by the independent variable(s) = .315 or 31.5%

Model Summary(b)
Model  R        R Square  Adjusted R Square  Std. Error of the Estimate
1      .562(a)
a Predictors: (Constant), height  b Dependent Variable: fev1

R is the square root of R Square. Adjusted R Square adjusts the R Square for the number of variables in the model. Std. error of the estimate = standard deviation of the errors or residuals. Not usually reported, but used in estimating the standard errors of the regression coefficients.

ANOVA(b)
Model         Sum of Squares  df  Mean Square  F  Sig.
1  Regression (a)
   Residual
   Total
a Predictors: (Constant), height  b Dependent Variable: fev1

ANOVA = analysis of variance table. Not needed when there is only 1 independent variable in the model. The F test is equivalent to the t test for testing if the slope is equal to zero in the output that follows. (F = t²)
Coefficients(a)
Model  Unstandardized Coefficients (B, Std. Error)  Standardized Coefficients (Beta)  t  Sig.  95% Confidence Interval for B (Lower Bound, Upper Bound)
1  (Constant)
   height
a Dependent Variable: fev1

Unstandardized coefficients: B = regression coefficient. In this example, B for height is the slope and B for the (Constant) is the intercept. Std. Error = standard error of the regression coefficient. Standardized coefficients: Beta = standardized regression coefficient. t = t statistic for testing if the regression coefficient is equal to zero (versus not equal to zero). Sig. = p value for testing if the regression coefficient is equal to zero (versus not equal to zero). 95% confidence interval for B = 95% confidence interval for the regression coefficient.

In this example, you would report the slope (.039), standard error of the slope (.002) and the p-value (<.001), or the slope (.039) and 95% confidence interval (.035 to .043).

Charts

Normal P-P Plot of Regression Standardized Residual (Dependent Variable: fev1; Expected Cum Prob vs. Observed Cum Prob). The points fall along a straight line, indicating the residuals have, at least approximately, a Normal distribution.
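This example can also be run from a syntax (.sps) file; a sketch of the equivalent commands (roughly what the Paste button would produce for the options selected above):

```
* Simple linear regression of fev1 on height, with 95% CIs for the
* coefficients and a normal P-P plot of the standardized residuals.
REGRESSION
  /STATISTICS COEFF OUTS CI(95) R ANOVA
  /DEPENDENT fev1
  /METHOD=ENTER height
  /RESIDUALS NORMPROB(ZRESID).
```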
Linear Regression Example with three independent variables

The dependent variable is forced expiratory volume (fev1). The independent variables are height, age and gender. The Enter method means all 3 independent variables will be included in the regression model.

Statistics options: By default, Estimates and Model fit are selected. In this example, part and partial correlations and collinearity diagnostics are also selected.

Plots options: Normal probability plot (of the standardized residuals) and partial (residual) plots are selected.
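In syntax form, a sketch of this three-variable model with the options just described (ZPP requests the zero-order, part and partial correlations; TOL the tolerance and VIF):

```
* Regression of fev1 on height, age and gender with part/partial
* correlations, collinearity statistics, a normal P-P plot of the
* residuals, and all partial regression plots.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA ZPP TOL
  /DEPENDENT fev1
  /METHOD=ENTER height age gender
  /RESIDUALS NORMPROB(ZRESID)
  /PARTIALPLOT ALL.
```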
Regression

Variables Entered/Removed(b)
Model  Variables Entered        Variables Removed  Method
1      gender, age, height(a)   .                  Enter
a All requested variables entered.  b Dependent Variable: fev1

Model Summary(b)
Model  R        R Square  Adjusted R Square  Std. Error of the Estimate
1      .601(a)
a Predictors: (Constant), gender, age, height  b Dependent Variable: fev1

Information on the independent variables, method of variable entry, and dependent variable. R-square is .361 or 36.1% (adjusted R-square is 35.8%). About 36% of the variation in the dependent variable can be explained by the 3 independent variables.

ANOVA(b)
Model         Sum of Squares  df  Mean Square  F  Sig.
1  Regression (a)
   Residual
   Total
a Predictors: (Constant), gender, age, height  b Dependent Variable: fev1

The overall F test indicates 1 or more of the independent variables are significant (P <.001). Degrees of freedom of the F test are 3 and 795.

Coefficients(a)
(Columns: Unstandardized Coefficients B and Std. Error; Standardized Coefficients Beta; t; Sig.; Correlations Zero-order, Partial, Part; Collinearity Statistics Tolerance, VIF)
(Constant), height, age, gender
a Dependent Variable: fev1

Height, age, and gender are all statistically significant (P <.001), i.e., the regression coefficients are different from zero. The partial correlations (and partial R-squares of .095, .056, and .026) indicate the correlation with the dependent variable adjusted for the other variables in the regression model. A low tolerance value (say, <.20) or a high variance inflation factor (VIF) (say, > 5 or 10) may indicate a multicollinearity problem.
Normal P-P Plot of Regression Standardized Residual (Dependent Variable: fev1; Expected Cum Prob vs. Observed Cum Prob). The points fall approximately along a straight line, indicating the residuals have (approximately) a Normal distribution.

Partial Regression Plot (Dependent Variable: fev1; fev1 vs. height). Partial regression plots for height and age with lowess smooths. The plot for height is assessing the relationship between height and fev1 after adjusting for age and gender (e.g., is the relationship linear).

Partial Regression Plot (Dependent Variable: fev1; fev1 vs. age). Similarly, the plot for age is assessing the relationship between age and fev1 adjusting for height and gender.

Note that SPSS will also produce a partial residual plot for gender. In general, partial residual plots for categorical/nominal variables are not very useful. Boxplots of the residuals for each category of a categorical/nominal variable are useful for regression diagnostics. To produce the boxplots you could use the Save options to save the residuals from a regression and then the Boxplot commands to plot the residuals.
Linear Regression via ANOVA Commands

It is possible to use the analysis of variance commands of SPSS to perform a linear regression analysis, because the methods are mathematically equivalent. Performing a linear regression analysis via analysis of variance in SPSS is more complicated than using the linear regression commands. However, the advantage of using the analysis of variance commands is that you do not have to create indicator variables for categorical variables or create interaction terms.

To perform a linear regression via analysis of variance commands:
1. Choose Analyze on the menu bar
2. Choose General Linear Model
3. Choose Univariate
4. Dependent: Select the dependent variable from the source list on the left and then click on the arrow next to the Dependent Variable box.
5. Fixed Factor(s): Select the independent variables that are categorical/qualitative and then click on the arrow next to the Fixed Factor(s) box. Repeat the process until you have selected all the categorical variables you want.
6. Covariate(s): Select the independent variables that are continuous/quantitative and then click on the arrow next to the Covariate(s) box. Repeat the process until you have selected all the continuous variables you want.
7. Choose Model
8. Choose Custom
9. Factors & Covariates: Select/highlight all the variables, then under Build Terms select Main Effects. You may need to click on the down arrow to display the Main Effects option. After you have selected Main Effects, select the arrow under Build Terms. All the variables should now appear in the Model box on the right hand side.
10. Choose Continue
11. Choose Options
12. Choose Parameter Estimates under Display
13. Choose Continue
14. Choose OK

For categorical variables the last category (i.e., the category with the largest numeric coding value) will be the referent group/category.
SPSS will compute the F test for each continuous independent variable and for each categorical independent variable. By selecting to have the parameter estimates displayed, SPSS will also compute the regression coefficient estimates, standard errors, t (statistic) values, p-values, and 95% confidence intervals that you get from the linear regression commands. To include interaction terms in the regression model, in Step 9 highlight the two variables for which you want to create a (two-way) interaction term. Under Build Terms select Interaction, and then select the arrow under Build Terms. A two-way interaction between the two variables (variable 1 * variable 2) should now appear in the Model box on the right hand side.
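Steps 1 through 14 above correspond to the UNIANOVA command. A sketch in syntax, using hypothetical variable names (y = dependent variable; a and b = categorical factors; x = continuous covariate):

```
* Sketch: linear regression via the GLM (UNIANOVA) command.
* y, a, b and x are hypothetical variable names.
UNIANOVA y BY a b WITH x
  /METHOD=SSTYPE(3)
  /PRINT=PARAMETER
  /DESIGN=a b x.
```

Factors listed after BY are treated as categorical (SPSS builds the indicator variables), covariates after WITH as continuous; /PRINT=PARAMETER requests the usual regression coefficient table.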
Example. Linear regression of forced expiratory volume on height (continuous variable) and diabetes status (categorical variable with 3 categories: normal, impaired fasting glucose, diabetic). Forced expiratory volume (fev1) is the dependent variable. Diabetes is a categorical variable with the 3 categories. Height is a continuous variable.

Under Model, select Custom, then select each of the variables separately until they all appear under Model:, or select Main Effects under Build Term(s), select all Factors & Covariates, and then select the arrow under Build Term(s). Under Options, select Parameter estimates to have the usual linear regression results displayed in the output.
Univariate Analysis of Variance

Between-Subjects Factors

Tests of Between-Subjects Effects
Dependent Variable: fev1
Source: Corrected Model(a), Intercept, diabetes, height, Error, Total, Corrected Total (with Type III Sum of Squares, df, Mean Square, F, Sig.)
a R Squared = .322 (Adjusted R Squared = .319)

The overall test for the significance of diabetes is displayed (p-value = 0.026).

Parameter Estimates
Dependent Variable: fev1
Parameter  B  Std. Error  t  Sig.  95% Confidence Interval (Lower Bound, Upper Bound)
Intercept
[diabetes=1.00]
[diabetes=2.00]
[diabetes=3.00]  0(a)
height
a This parameter is set to zero because it is redundant.

This table displays the usual linear regression results. In this example diabetes = 3 (diabetic) is the reference group.
Example. Adding an interaction between diabetes status and height in the regression model.

To add an interaction between two variables, select the Build Term(s) to show Interaction, select the two variables under Factors & Covariates and then select the arrow under Build Term(s).

Univariate Analysis of Variance

Tests of Between-Subjects Effects
Dependent Variable: fev1
Source: Corrected Model(a), Intercept, diabetes, height, diabetes * height, Error, Total, Corrected Total (with Type III Sum of Squares, df, Mean Square, F, Sig.)
a R Squared = .322 (Adjusted R Squared = .318)

This table displays the significance of the diabetes status by height interaction (p-value = 0.58).

Parameter Estimates
Dependent Variable: fev1
Parameter  B  Std. Error  t  Sig.
Intercept
[diabetes=1.00]
[diabetes=2.00]
[diabetes=3.00]  0(a)
height
[diabetes=1.00] * height
[diabetes=2.00] * height
[diabetes=3.00] * height  0(a)
a This parameter is set to zero because it is redundant.

This table displays the usual linear regression results, which include the results for diabetes status, height and the interaction between diabetes status and height.
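A sketch of the equivalent UNIANOVA syntax for this example, where the interaction is simply an extra term on the /DESIGN subcommand:

```
* Sketch: fev1 on diabetes (factor) and height (covariate), with
* the diabetes-by-height interaction in the model.
UNIANOVA fev1 BY diabetes WITH height
  /METHOD=SSTYPE(3)
  /PRINT=PARAMETER
  /DESIGN=diabetes height diabetes*height.
```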
Logistic Regression
1. Choose Analyze on the menu bar
2. Choose Regression
3. Choose Binary Logistic
4. Dependent: Select the dependent variable from the source list on the left and then click on the arrow next to the Dependent box.
5. Covariate(s): Select an independent variable and then click on the arrow next to the Covariate(s) box. Repeat the process until you have selected all the independent variables you want.
6. Choose Enter as the Method. Enter is the default method for independent variable entry. Other methods of variable entry can be selected by clicking on the down arrow and clicking on the desired method of entry.
7. Choose OK

Additional options are available under >a*b>, Categorical..., Save..., Method, or Options.... For example:

>a*b> (for adding two-way interactions). You can add an interaction between two independent variables to the regression model by selecting two variables from the source list on the left (hold down the Ctrl key while selecting the two variables) and then clicking on >a*b> (after you highlight two variables from the source list on the left, the >a*b> button becomes available to select).

Categorical... You can use the Categorical option to have SPSS create indicator or dummy variables for categorical variables.
1. Choose Categorical
2. Categorical Covariates: Select a covariate that is categorical and then click on the arrow next to the Categorical Covariates box.
3. Choose Indicator as the Contrast. Indicator is the default method for creating indicator variables. Other methods can be selected by clicking on the down arrow and clicking on the desired method.
4. Choose the reference category as the last category (i.e., the category with the largest numeric coding value) or the first category (i.e., the category with the smallest numeric coding value).
5. Choose Change.
6. Repeat steps 2 through 5 until you have defined all the categorical variables.
7. Choose Continue.

Save... Predicted Values (Probabilities and Group Membership).
This option creates new variables that are the predicted probabilities and the predicted group membership. The predicted group membership (0 or 1) is based on whether the predicted probability is less than (group membership = 0) or greater than or equal to (group membership = 1) the classification cutoff. By default the classification cutoff value is 0.5. You can change the cutoff value using Options...

Residuals (Unstandardized, Logit, Studentized, Standardized, Deviance)
Influence (Cook's, leverage, dfbeta)
Note that SPSS creates a new variable for each selected Save... option and adds the new variables to the data file. The variable names are defined in the Variable View of the Data Editor. Once you are done using these variables you may want to delete them from the data file or save them (by re-saving the data file).

Method. Click on the down arrow to the right of Method to display the methods available for independent variable entry (enter, forward: conditional, forward: LR, forward: Wald, backward: conditional, backward: LR, backward: Wald).

Options... Confidence interval for the odds ratio (CI for exp(B)); Hosmer-Lemeshow goodness-of-fit. You can also modify the entry and removal criteria used by the backward and forward variable entry methods.

Previous, Block # of #, Next. You can use these options to enter independent variables in blocks into the regression model. You can select different methods of variable entry for each block.

Example. Logistic regression will be used to determine the relationship between any use of health services (coded 0 = no use, 1 = any use) and age, health index, gender and race. Subjects in the study (Model Cities Data Set) were followed for a varying amount of time, so the number of months followed (expos) will also be included as an independent variable in the logistic regression model. The dependent variable, anyuse, is binary. There are 5 independent variables. Female and Race are categorical/nominal variables.
You can use the Categorical option to define which variables are categorical and SPSS will create the indicator variables. By default the category with the largest numerical value (Last) will be the reference group. Here, the category with the smallest numerical value (First) was selected as the reference group. Under Options you can select to have the 95% confidence intervals for the odds ratios displayed in the output. Also, you can run the Hosmer-Lemeshow goodness-of-fit test.

Logistic Regression

Case Processing Summary
Unweighted Cases(a): Selected Cases (Included in Analysis, Missing Cases, Total), Unselected Cases, Total (with N and Percent)
a If weight is in effect, see classification table for the total number of cases.

Information on the number of observations used in the logistic regression. Subjects with missing data are excluded.

Dependent Variable Encoding
Original Value  Internal Value

SPSS will always recode the dependent variable to a 0 or 1 binary variable (internal value), and will estimate the odds ratio for the event coded as 1 (vs the event coded as 0). If your dependent variable is not coded 0 or 1, check this table to determine the interpretation of the odds ratios.
Categorical Variables Codings
(Frequency and Parameter coding (1), (2))
race: white, other, black
female: male, female

This table gives the definition of the indicator variables. E.g., race(1) = other, race(2) = black (race = white is the reference group); female(1) = female (male is the reference group).

Caution! Make sure you understand the interpretation of the indicator variables that SPSS creates. It is very easy to get confused. For example, in this example the variable race is coded 1 = white, 2 = other, 3 = black. A common mistake would be to interpret race(1) = white and race(2) = other.

Block 0: Beginning Block

Ignore all the output under Block 0. It displays information for the logistic regression model with no independent variables in the model. Unless you are using stepwise methods to enter variables or entering variables in different blocks you can ignore this output.

Block 1: Method = Enter

Omnibus Tests of Model Coefficients
(Chi-square, df, Sig. for Step 1: Step, Block, Model)

Model Summary
Step  -2 Log likelihood  Cox & Snell R Square  Nagelkerke R Square
1  (a)
a Estimation terminated at iteration number 5 because parameter estimates changed by less than .001.

The R-square measures for logistic regression are usually not very useful.

Classification Table(a)
(Observed anyuse vs. Predicted anyuse, with percent correct; Overall percentage 83.1)
a The cut value is .500

Ignore this table also. It describes how well the logistic regression predicts any use if a predicted probability > 0.5 is used to indicate any use. All subjects are predicted to have use.
Hosmer and Lemeshow Test
Step  Chi-square  df  Sig.

Contingency Table for Hosmer and Lemeshow Test
(Observed and Expected counts for anyuse = .00 and anyuse = 1.00, and Total, by group)

The Hosmer-Lemeshow goodness-of-fit statistic is formed by grouping the data into g groups (usually g = 10) based on the percentiles of the estimated probabilities and calculating the Pearson chi-square statistic from the 2 x g table of observed and estimated expected frequencies. A small p-value indicates a lack of fit. Large differences between the observed and expected values can be used to help identify where there is lack-of-fit when present.

The last table of the output usually has the results we are most interested in. It lists the odds ratios, p-values and 95% confidence intervals for the odds ratios.

Variables in the Equation
(Columns: B, S.E., Wald, df, Sig., Exp(B), 95.0% C.I. for EXP(B) Lower and Upper)
Step 1(a): expos, age, female(1), race, race(1), race(2), health, Constant
a Variable(s) entered on step 1: expos, age, female, race, health.

Exp(B) = Odds Ratio. 95.0% C.I. for EXP(B) = 95% confidence interval for the odds ratio. Sig. = P-value for the individual odds ratio, or the overall significance of a categorical/nominal variable if there is no Exp(B) listed.
B = the logistic regression coefficient, the log odds ratio. S.E. = the standard error of the logistic regression coefficient. Wald = the Wald test statistic for testing if B = 0 (or equivalently, odds ratio = 1), or if all B's = 0 for a categorical variable with 2 or more indicator variables. d.f. = degrees of freedom of the test statistic.

It is often helpful to write on your output the definition of the indicator variables, so you don't get confused about the interpretation of the results. It is also helpful to change Exp(B) to odds ratio, and Sig. to P-value. For example:

Step 1(a)          Odds Ratio  95.0% C.I. for Odds Ratio (Lower, Upper)  P-value
expos
age
female (vs male)
race                                                                    .002
  other vs white
  black vs white
health
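A sketch of the equivalent syntax for this example (roughly what the Paste button would produce for the options selected above; Indicator(1) makes the first category the reference group):

```
* Logistic regression of anyuse on expos, age, female, race and health,
* with first-category reference groups for the categorical variables,
* 95% CIs for the odds ratios, and the Hosmer-Lemeshow test.
LOGISTIC REGRESSION VARIABLES anyuse
  /METHOD=ENTER expos age female race health
  /CONTRAST (female)=Indicator(1)
  /CONTRAST (race)=Indicator(1)
  /PRINT=GOODFIT CI(95).
```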
Getting Started With SPSS
Getting Started With SPSS To investigate the research questions posed in each section of this site, we ll be using SPSS, an IBM computer software package specifically designed for use in the social sciences.
Introduction to PASW Statistics 34152-001
Introduction to PASW Statistics 34152-001 V18 02/2010 nm/jdr/mr For more information about SPSS Inc., an IBM Company software products, please visit our Web site at http://www.spss.com or contact: SPSS
Microsoft Excel. Qi Wei
Microsoft Excel Qi Wei Excel (Microsoft Office Excel) is a spreadsheet application written and distributed by Microsoft for Microsoft Windows and Mac OS X. It features calculation, graphing tools, pivot
Learning SPSS: Data and EDA
Chapter 5 Learning SPSS: Data and EDA An introduction to SPSS with emphasis on EDA. SPSS (now called PASW Statistics, but still referred to in this document as SPSS) is a perfectly adequate tool for entering
GeoGebra Statistics and Probability
GeoGebra Statistics and Probability Project Maths Development Team 2013 www.projectmaths.ie Page 1 of 24 Index Activity Topic Page 1 Introduction GeoGebra Statistics 3 2 To calculate the Sum, Mean, Count,
Importing and Exporting With SPSS for Windows 17 TUT 117
Information Systems Services Importing and Exporting With TUT 117 Version 2.0 (Nov 2009) Contents 1. Introduction... 3 1.1 Aim of this Document... 3 2. Importing Data from Other Sources... 3 2.1 Reading
Introduction to StatsDirect, 11/05/2012 1
INTRODUCTION TO STATSDIRECT PART 1... 2 INTRODUCTION... 2 Why Use StatsDirect... 2 ACCESSING STATSDIRECT FOR WINDOWS XP... 4 DATA ENTRY... 5 Missing Data... 6 Opening an Excel Workbook... 6 Moving around
Projects Involving Statistics (& SPSS)
Projects Involving Statistics (& SPSS) Academic Skills Advice Starting a project which involves using statistics can feel confusing as there seems to be many different things you can do (charts, graphs,
KSTAT MINI-MANUAL. Decision Sciences 434 Kellogg Graduate School of Management
KSTAT MINI-MANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To
IBM SPSS Direct Marketing 23
IBM SPSS Direct Marketing 23 Note Before using this information and the product it supports, read the information in Notices on page 25. Product Information This edition applies to version 23, release
Introduction to Microsoft Access 2003
Introduction to Microsoft Access 2003 Zhi Liu School of Information Fall/2006 Introduction and Objectives Microsoft Access 2003 is a powerful, yet easy to learn, relational database application for Microsoft
Summary of important mathematical operations and formulas (from first tutorial):
EXCEL Intermediate Tutorial Summary of important mathematical operations and formulas (from first tutorial): Operation Key Addition + Subtraction - Multiplication * Division / Exponential ^ To enter a
Descriptive and Inferential Statistics
General Sir John Kotelawala Defence University Workshop on Descriptive and Inferential Statistics Faculty of Research and Development 14 th May 2013 1. Introduction to Statistics 1.1 What is Statistics?
S P S S Statistical Package for the Social Sciences
S P S S Statistical Package for the Social Sciences Data Entry Data Management Basic Descriptive Statistics Jamie Lynn Marincic Leanne Hicks Survey, Statistics, and Psychometrics Core Facility (SSP) July
SPSS: AN OVERVIEW. Seema Jaggi and and P.K.Batra I.A.S.R.I., Library Avenue, New Delhi-110 012
SPSS: AN OVERVIEW Seema Jaggi and and P.K.Batra I.A.S.R.I., Library Avenue, New Delhi-110 012 The abbreviation SPSS stands for Statistical Package for the Social Sciences and is a comprehensive system
This book serves as a guide for those interested in using IBM SPSS
1 Overview This book serves as a guide for those interested in using IBM SPSS Statistics software to assist in statistical data analysis whether as a companion to a statistics or research methods course,
Data analysis process
Data analysis process Data collection and preparation Collect data Prepare codebook Set up structure of data Enter data Screen data for errors Exploration of data Descriptive Statistics Graphs Analysis
Introduction To Microsoft Office PowerPoint 2007. Bob Booth July 2008 AP-PPT5
Introduction To Microsoft Office PowerPoint 2007. Bob Booth July 2008 AP-PPT5 University of Sheffield Contents 1. INTRODUCTION... 3 2. GETTING STARTED... 4 2.1 STARTING POWERPOINT... 4 3. THE USER INTERFACE...
SPSS Step-by-Step Tutorial: Part 1
SPSS Step-by-Step Tutorial: Part 1 For SPSS Version 11.5 DataStep Development 2004 Table of Contents 1 SPSS Step-by-Step 5 Introduction 5 Installing the Data 6 Installing files from the Internet 6 Installing
ECONOMICS 351* -- Stata 10 Tutorial 2. Stata 10 Tutorial 2
Stata 10 Tutorial 2 TOPIC: Introduction to Selected Stata Commands DATA: auto1.dta (the Stata-format data file you created in Stata Tutorial 1) or auto1.raw (the original text-format data file) TASKS:
Excel 2007 Basic knowledge
Ribbon menu The Ribbon menu system with tabs for various Excel commands. This Ribbon system replaces the traditional menus used with Excel 2003. Above the Ribbon in the upper-left corner is the Microsoft
Intermediate PowerPoint
Intermediate PowerPoint Charts and Templates By: Jim Waddell Last modified: January 2002 Topics to be covered: Creating Charts 2 Creating the chart. 2 Line Charts and Scatter Plots 4 Making a Line Chart.
ASSIGNMENT 4 PREDICTIVE MODELING AND GAINS CHARTS
DATABASE MARKETING Fall 2015, max 24 credits Dead line 15.10. ASSIGNMENT 4 PREDICTIVE MODELING AND GAINS CHARTS PART A Gains chart with excel Prepare a gains chart from the data in \\work\courses\e\27\e20100\ass4b.xls.
Microsoft Excel 2010 Part 3: Advanced Excel
CALIFORNIA STATE UNIVERSITY, LOS ANGELES INFORMATION TECHNOLOGY SERVICES Microsoft Excel 2010 Part 3: Advanced Excel Winter 2015, Version 1.0 Table of Contents Introduction...2 Sorting Data...2 Sorting
DOING MORE WITH WORD: MICROSOFT OFFICE 2010
University of North Carolina at Chapel Hill Libraries Carrboro Cybrary Chapel Hill Public Library Durham County Public Library DOING MORE WITH WORD: MICROSOFT OFFICE 2010 GETTING STARTED PAGE 02 Prerequisites
EXCEL PIVOT TABLE David Geffen School of Medicine, UCLA Dean s Office Oct 2002
EXCEL PIVOT TABLE David Geffen School of Medicine, UCLA Dean s Office Oct 2002 Table of Contents Part I Creating a Pivot Table Excel Database......3 What is a Pivot Table...... 3 Creating Pivot Tables
Presentations and PowerPoint
V-1.1 PART V Presentations and PowerPoint V-1.2 Computer Fundamentals V-1.3 LESSON 1 Creating a Presentation After completing this lesson, you will be able to: Start Microsoft PowerPoint. Explore the PowerPoint
One-Way ANOVA using SPSS 11.0. SPSS ANOVA procedures found in the Compare Means analyses. Specifically, we demonstrate
1 One-Way ANOVA using SPSS 11.0 This section covers steps for testing the difference between three or more group means using the SPSS ANOVA procedures found in the Compare Means analyses. Specifically,
SPSS and AM statistical software example.
A detailed example of statistical analysis using the NELS:88 data file and ECB, to perform a longitudinal analysis of 1988 8 th graders in the year 2000: SPSS and AM statistical software example. Overall
Data exploration with Microsoft Excel: univariate analysis
Data exploration with Microsoft Excel: univariate analysis Contents 1 Introduction... 1 2 Exploring a variable s frequency distribution... 2 3 Calculating measures of central tendency... 16 4 Calculating
IBM SPSS Direct Marketing 22
IBM SPSS Direct Marketing 22 Note Before using this information and the product it supports, read the information in Notices on page 25. Product Information This edition applies to version 22, release
Excel 2003 A Beginners Guide
Excel 2003 A Beginners Guide Beginner Introduction The aim of this document is to introduce some basic techniques for using Excel to enter data, perform calculations and produce simple charts based on
Sample Table. Columns. Column 1 Column 2 Column 3 Row 1 Cell 1 Cell 2 Cell 3 Row 2 Cell 4 Cell 5 Cell 6 Row 3 Cell 7 Cell 8 Cell 9.
Working with Tables in Microsoft Word The purpose of this document is to lead you through the steps of creating, editing and deleting tables and parts of tables. This document follows a tutorial format
TIPS FOR DOING STATISTICS IN EXCEL
TIPS FOR DOING STATISTICS IN EXCEL Before you begin, make sure that you have the DATA ANALYSIS pack running on your machine. It comes with Excel. Here s how to check if you have it, and what to do if you
SPSS INSTRUCTION CHAPTER 1
SPSS INSTRUCTION CHAPTER 1 Performing the data manipulations described in Section 1.4 of the chapter require minimal computations, easily handled with a pencil, sheet of paper, and a calculator. However,
Excel 2007 A Beginners Guide
Excel 2007 A Beginners Guide Beginner Introduction The aim of this document is to introduce some basic techniques for using Excel to enter data, perform calculations and produce simple charts based on
Data analysis and regression in Stata
Data analysis and regression in Stata This handout shows how the weekly beer sales series might be analyzed with Stata (the software package now used for teaching stats at Kellogg), for purposes of comparing
Minitab Session Commands
APPENDIX Minitab Session Commands Session Commands and the Session Window Most functions in Minitab are accessible through menus, as well as through a command language called session commands. You can
business statistics using Excel OXFORD UNIVERSITY PRESS Glyn Davis & Branko Pecar
business statistics using Excel Glyn Davis & Branko Pecar OXFORD UNIVERSITY PRESS Detailed contents Introduction to Microsoft Excel 2003 Overview Learning Objectives 1.1 Introduction to Microsoft Excel
Moving from SPSS to JMP : A Transition Guide
WHITE PAPER Moving from SPSS to JMP : A Transition Guide Dr. Jason Brinkley, Department of Biostatistics, East Carolina University Table of Contents Introduction... 1 Example... 2 Importing and Cleaning
Ohio University Computer Services Center August, 2002 Crystal Reports Introduction Quick Reference Guide
Open Crystal Reports From the Windows Start menu choose Programs and then Crystal Reports. Creating a Blank Report Ohio University Computer Services Center August, 2002 Crystal Reports Introduction Quick
Doing Multiple Regression with SPSS. In this case, we are interested in the Analyze options so we choose that menu. If gives us a number of choices:
Doing Multiple Regression with SPSS Multiple Regression for Data Already in Data Editor Next we want to specify a multiple regression analysis for these data. The menu bar for SPSS offers several options:
Introduction to the TI-Nspire CX
Introduction to the TI-Nspire CX Activity Overview: In this activity, you will become familiar with the layout of the TI-Nspire CX. Step 1: Locate the Touchpad. The Touchpad is used to navigate the cursor
Using Microsoft Excel to Plot and Analyze Kinetic Data
Entering and Formatting Data Using Microsoft Excel to Plot and Analyze Kinetic Data Open Excel. Set up the spreadsheet page (Sheet 1) so that anyone who reads it will understand the page (Figure 1). Type
