
STA 4107/5107 Chapter 3: Factor Analysis

1 Key Terms

Please review and learn these terms.

2 What is Factor Analysis?

Factor analysis is an interdependence technique (see Chapter 1) that primarily uses metric variables; however, non-metric variables can be included as dummy variables. The basic idea is to describe the set of $p$ variables $X_1, X_2, \ldots, X_p$ in our data as linear combinations of a smaller number of factors, and in the process gain a better understanding of the relationships among our variables. Factor analysis is based on a statistical model, unlike principal components analysis (which we will cover later). Mathematically, the model can be written:

$$x_1 = l_{11}F_1 + l_{12}F_2 + \cdots + l_{1m}F_m + \epsilon_1$$
$$x_2 = l_{21}F_1 + l_{22}F_2 + \cdots + l_{2m}F_m + \epsilon_2$$
$$\vdots$$
$$x_p = l_{p1}F_1 + l_{p2}F_2 + \cdots + l_{pm}F_m + \epsilon_p$$

Here the $x_i$ are the original variables, the $l_{ij}$ are called the loadings, the $F_j$ are the common factors, and the $\epsilon_i$ are the unique factors. In factor analysis, the loadings (or weights) are chosen to maximize the correlation between each variable and the factor. Our motive for studying factor analysis in this course is that near the end of the term we will extend the concept to structural equation modeling, also called confirmatory factor analysis.

3 Some History and Examples

Factor analysis was developed at the turn of the 20th century by the psychologist Charles Spearman, who hypothesized that a person's scores on a wide variety of tests of mental ability (mathematical skill, vocabulary, other verbal skills, artistic skills, logical reasoning ability, etc.) could all be explained by one underlying factor of general intelligence that he called $g$. From a data set of test scores of boys in preparatory school, he noticed that any two rows in the table of correlations were approximately proportional across the different variables. For example, the Classics and English rows in the table below have the approximately equal ratios

$$\frac{.83}{.67} \approx \frac{.70}{.64} \approx \frac{.66}{.54} \approx \frac{.63}{.51}.$$

Upon observing this, he hypothesized that it was due to a common factor, $g$.

             Classics  French  English  Mathematics  Pitch  Music
Classics        1.0      .83     .78        .70       .66    .63
French          .83     1.0      .67        .67       .65    .57
English         .78      .67    1.0         .64       .54    .51
Mathematics     .70      .67     .64       1.0        .45    .51
Pitch           .66      .65     .54        .45      1.0     .40
Music           .63      .57     .51        .51       .40   1.0

It was an interesting idea, but it turned out to be wrong. Today the College Board testing service operates a system based on the idea that there are at least three important factors of mental ability (verbal, mathematical, and logical abilities), and most psychologists agree that many other factors could be identified as well.

3.1 Examples

1. Consider various measures of the activity of the autonomic nervous system: heart rate, blood pressure, etc. Psychologists have wanted to know whether, except for random fluctuation, all those measures move up and down together (the activation hypothesis), or whether groups of autonomic measures move up and down together but separately from other groups, or whether all the measures are largely independent. An unpublished analysis by Richard Darlington at Cornell found that in one data set, at any rate, the data fitted the activation hypothesis quite well.

2. Suppose each of 500 people, who are all familiar with different kinds of automobiles, rates each of 20 automobile models on the question, "How much would you like to own that kind of automobile?" We could usefully ask about the number of dimensions on which the ratings differ. A one-factor theory would posit that people simply give the highest ratings to the most expensive models. A two-factor theory would posit that some people are most attracted to sporty models while others are most attracted to luxurious models. Three-factor and four-factor theories might add safety and reliability. Instead of automobiles you might choose to study attitudes concerning foods, political policies, political candidates, or many other kinds of objects.

3. Rubenstein (1986) studied the nature of curiosity by analyzing the agreement of junior-high-school students with a large battery of statements such as "I like to figure out how machinery works" or "I like to try new kinds of food." A factor analysis identified seven factors: three measuring enjoyment of problem-solving, learning, and reading; three measuring interest in natural sciences, art and music, and new experiences in general; and one indicating a relatively low interest in money.
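Why do proportional rows point to a single common factor? The notes leave this step implicit, but it follows directly from the one-factor version of the model in Section 2. If $x_i = l_i g + \epsilon_i$, with $g$ and the $\epsilon_i$ standardized and mutually uncorrelated, then for $i \neq j$

$$\mathrm{Corr}(x_i, x_j) = l_i l_j, \qquad \text{so} \qquad \frac{\mathrm{Corr}(x_i, x_j)}{\mathrm{Corr}(x_k, x_j)} = \frac{l_i}{l_k},$$

which does not depend on $j$. Any two rows of the correlation table must therefore be proportional, which is exactly the pattern Spearman observed.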

4 Factor Analysis Decision Process

We follow the six-stage model-building process introduced in Chapter 1. As we work through these stages we will be considering the US crime data set, which contains crime-related and demographic statistics for 47 US states in 1960. The data were collected from the FBI's Uniform Crime Report and other government agencies to determine how the crime rate depends on the other variables measured in the study, but we can use it as an example of looking for underlying patterns in the relationships between variables. There are 47 cases and 14 variables:

R: Crime rate: number of offenses reported to police per million population
Age: The number of males of age 14-24 per 1000 population
S: Indicator (or dummy) variable for Southern states (0 = No, 1 = Yes); it appears as So in the SAS code below
Ed: Mean number of years of schooling x 10 for persons of age 25 or older
Ex0: 1960 per capita expenditure on police by state and local government
Ex1: 1959 per capita expenditure on police by state and local government
LF: Labor force participation rate per 1000 civilian urban males age 14-24
M: The number of males per 1000 females
N: State population size in hundred thousands
NW: The number of non-whites per 1000 population
U1: Unemployment rate of urban males per 1000 of age 14-24
U2: Unemployment rate of urban males per 1000 of age 35-39
W: Median value of transferable goods and assets or family income in tens of dollars
X: The number of families per 1000 earning below 1/2 the median income

We will see how well factor analysis identifies the underlying structure. What do you expect to happen?

Stage 1: Objectives

As with all statistical analyses, the decisions need to be made with an emphasis on the research problem. Stage 1 focuses on carefully defining the objectives.

Specifying the Unit of Analysis: Factor analysis can identify the structure of relationships either among variables or among respondents.

Variables: Here factor analysis is applied to a correlation matrix of the variables. This is called R factor analysis, and the approach is to identify dimensions that are latent, or not initially observed.

Respondents: Here factor analysis is applied to the individual respondents based on their characteristics. This is called Q factor analysis, and the method combines large numbers of cases into distinctly different groups. We will not consider it here, but will delve deeper when we get to cluster analysis.

Achieving Data Summarization versus Data Reduction: Data summarization elucidates the underlying relationships in an interpretable and understandable fashion. Data reduction goes a step further and uses the resulting variates in place of the original values in further data analyses.

Variable Selection: As mentioned in the notes for Chapter 1, the fact that we have statistical methods at our disposal that can extract patterns out of a large group of variables doesn't mean that we get to skip the hard part of research and simply measure everything we can think of, letting the computer come up with the interesting ideas for us. The researcher should carefully specify the possible underlying factors of interest that could be identified with factor analysis. If the researcher doesn't actually have any ideas, it's unlikely that anything convincing will come of the analysis.

Using Factor Analysis with Other Multivariate Techniques: Factor analysis can help simplify other procedures. Variables determined to be highly correlated and members of the same factor would be expected to have similar profiles of differences across groups in multivariate analysis of variance or discriminant analysis. Highly correlated variables can also cause problems in regression analyses, such as multiple regression and discriminant analysis: including highly correlated predictor variables will not significantly improve predictive ability. Understanding the correlation structure can aid the researcher in building the regression model.

4.1 US Crime Example

For this data set we are interested in the underlying structure between the variables, so we want an R factor analysis. Question: How many factors do you expect? What might they represent? We want both data summarization and reduction. We may want to see if the factors extracted can be used in predicting other variables taken on the states, such as population growth rate, violent crime rate, or economic growth.
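These notes never show how the data are read into SAS. A minimal sketch follows; the file name uscrime.dat and the column order (matching the variable list above, with the Southern dummy named So as in the later code) are assumptions, not taken from the notes:

* Read the 1960 US crime data from a plain-text file.         ;
* File name and column order are assumed for illustration.    ;
data uscrime;
    infile 'uscrime.dat';
    input R Age So Ed Ex0 Ex1 LF M N NW U1 U2 W X;
run;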

Stage 2: Designing a Factor Analysis

Obtain the Correlation Matrix for the Variables: The correlation matrix is the input for the factor analysis. This is accomplished automatically when we run proc factor in SAS.

Variable Selection and Measurement Issues: What types of variables can be used in factor analysis? For the most part we prefer metric variables, because correlation is well-defined for them; non-metric variables are a bit trickier but can be handled with the use of dummy variables (see the variable S in our example). How many variables should be included? The researcher, as always, should measure no more and no fewer variables than he is actually interested in.

Sample Size: The general rule is to have 5 times as many observations as variables. The reason is that the number of correlations increases as $\binom{p}{2}$, where $p$ is the number of variables.

US Crime Example: We do not quite have the recommended sample size, but we expect fairly high correlation among several of the variables. We want to start out using all of them.

Stage 3: Assumptions in Factor Analysis

Conceptual Issues: The only hard assumption is that some underlying structure does exist. A factor analysis will always produce factors; it is up to the researcher to know whether the pattern being revealed is meaningful.

Statistical Issues: Factor analysis does not have a large number of assumptions. We do not assume normality (except when testing the significance of the factors) or equal variances. The main issue, then, is the degree of correlation. We will consider some methods for assessing intercorrelation. If all correlations are low, or all correlations are about the same, then no underlying structure is present and factor analysis is probably not appropriate.

Overall Measures of Intercorrelation:

1. Visual inspection: If there are no correlations higher than 0.3, then factor analysis is unlikely to be helpful. Partial correlations can also be considered, and SAS produces the partial correlation matrix. Partial correlation is the correlation between two variables after accounting for the effect of one or more other variables. For example, suppose an educator is interested in the correlation between the total scores from two tests, $T_1$ and $T_2$, given to 12th graders. Since both scores are most likely related to the students' socio-economic status (SES), it could be informative to first remove the effect of SES from both $T_1$ and $T_2$ and then find the correlation between the adjusted scores. SAS will produce the anti-image correlation matrix,

which is the matrix of negative partial correlations. Large partial or anti-image correlations are indicative of a data matrix not well-suited for factor analysis.

2. Bartlett Test of Sphericity: This is a significance test of correlation. SAS does not perform this test, so we will not consider it further.

3. Measure of sampling adequacy (MSA): This index ranges from 0 to 1, with 1 meaning that each variable is perfectly predicted by the other variables. Above .80 is considered excellent; below .50 is considered poor. If the MSA falls below .50, factor analysis is unlikely to be helpful.

Variable-Specific Measures of Intercorrelation: The MSA index is also reported by SAS for each variable. Any variable that falls below .50 should be dropped. If more than one variable falls below .50, the variable with the lowest MSA should be dropped first, the analysis run again, and so on until all variables with MSAs lower than .50 are dropped.

US Crime Example: SAS code for the various preliminary analyses is shown below.

proc factor data=uscrime msa;
    var R Ed LF U1 U2 W X Age M N NW Ex0 Ex1 So;
run;

The output for the above code is shown below.

                         The FACTOR Procedure
              Initial Factor Method: Principal Components

         Partial Correlations Controlling all other Variables

              R         Ed        LF        U1        U2        W         X
R    R     1.00000   0.43474  -0.04649  -0.23300   0.34240   0.22038   0.50633
Ed   Ed    0.43474   1.00000   0.31204   0.30300  -0.41866   0.02204  -0.39221
LF   LF   -0.04649   0.31204   1.00000  -0.43271   0.06074   0.14809   0.19807
U1   U1   -0.23300   0.30300  -0.43271   1.00000   0.78155  -0.05629   0.05524
U2   U2    0.34240  -0.41866   0.06074   0.78155   1.00000   0.10253  -0.08551
W    W     0.22038   0.02204   0.14809  -0.05629   0.10253   1.00000  -0.60659
X    X     0.50633  -0.39221   0.19807   0.05524  -0.08551  -0.60659   1.00000
Age  Age   0.39364  -0.24804  -0.12354  -0.01248  -0.26215  -0.17915  -0.22656
M    M     0.13539   0.03402   0.52549   0.57886  -0.23758   0.04865   0.10520
N    N    -0.05539  -0.00729   0.13951   0.12795   0.02032   0.16594   0.27515
NW   NW    0.01955  -0.16048   0.32688   0.17208  -0.01444  -0.22104   0.03064
Ex0  Ex0   0.25559  -0.26546   0.20297   0.03532   0.01777   0.00253  -0.07671
Ex1  Ex1  -0.10060   0.23379  -0.27059  -0.08771  -0.01614   0.05771  -0.01865
So   So   -0.09654   0.11855  -0.50496  -0.37924   0.21655   0.19107   0.35286

         Partial Correlations Controlling all other Variables

              Age       M         N         NW        Ex0       Ex1       So
R    R     0.39364   0.13539  -0.05539   0.01955   0.25559  -0.10060  -0.09654

Ed   Ed   -0.24804   0.03402  -0.00729  -0.16048  -0.26546   0.23379   0.11855
LF   LF   -0.12354   0.52549   0.13951   0.32688   0.20297  -0.27059  -0.50496
U1   U1   -0.01248   0.57886   0.12795   0.17208   0.03532  -0.08771  -0.37924
U2   U2   -0.26215  -0.23758   0.02032  -0.01444   0.01777  -0.01614   0.21655
W    W    -0.17915   0.04865   0.16594  -0.22104   0.00253   0.05771   0.19107
X    X    -0.22656   0.10520   0.27515   0.03064  -0.07671  -0.01865   0.35286
Age  Age   1.00000   0.21229   0.01064   0.29533  -0.03749  -0.05227   0.05771
M    M     0.21229   1.00000  -0.45286  -0.25136   0.03168   0.02834   0.21558
N    N     0.01064  -0.45286   1.00000   0.00919   0.13630  -0.03747  -0.08222
NW   NW    0.29533  -0.25136   0.00919   1.00000  -0.09772   0.19784   0.47716
Ex0  Ex0  -0.03749   0.03168   0.13630  -0.09772   1.00000   0.96126   0.03080
Ex1  Ex1  -0.05227   0.02834  -0.03747   0.19784   0.96126   1.00000  -0.05928
So   So    0.05771   0.21558  -0.08222   0.47716   0.03080  -0.05928   1.00000

Most of the partial correlations are low, indicating the data set is a good candidate for factor analysis.

      Kaiser's Measure of Sampling Adequacy: Overall MSA = 0.71989084

       R          Ed         LF         U1         U2         W          X
   0.61670670 0.78859635 0.55889920 0.36478072 0.48354600 0.87428975 0.77376748

       Age        M          N          NW         Ex0        Ex1        So
   0.83093935 0.49996109 0.74849041 0.79935809 0.75324516 0.75694606 0.76869090

Both U1, the unemployment rate of urban males per 1000 of age 14-24, and U2, the unemployment rate of urban males per 1000 of age 35-39, have MSAs below 0.50. We should remove U1 first and then see whether U2 is still below .50. This was done, and both were removed due to low MSAs. The new output is shown below.

      Kaiser's Measure of Sampling Adequacy: Overall MSA = 0.78770205

       R          Ed         LF         W          X          Age
   0.67936838 0.82727158 0.65356697 0.85844228 0.76275195 0.87378056

       M          N          NW         Ex0        Ex1        So
   0.72810525 0.73441897 0.85206257 0.74180187 0.75624142 0.83513151

                   Prior Communality Estimates: ONE

Everything looks good, so we will proceed with the analysis.
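The code for the rerun that produced the output just shown is not reproduced in the notes; it would have been something like the following sketch, with the variable lists inferred from the output above and the two-step order following the drop-the-lowest-MSA-first rule:

* Step 1: drop U1, the variable with the lowest MSA, and recheck. ;
proc factor data=uscrime msa;
    var R Ed LF U2 W X Age M N NW Ex0 Ex1 So;
run;

* Step 2: U2 is still below .50, so drop it as well.              ;
* This run gives the MSA values shown above.                      ;
proc factor data=uscrime msa;
    var R Ed LF W X Age M N NW Ex0 Ex1 So;
run;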

Stage 4: Deriving Factors and Assessing Overall Fit

Selecting the Factor Extraction Method: There are two main methods of extracting factors, common factor analysis and component factor analysis. The two methods treat the variance of the variables slightly differently.

Partitioning the Variance of a Variable: As mentioned in the introductory section, factor analysis breaks each variable into two parts, the common factors and the unique factor. It also breaks the variance into two parts, the communality and the specificity (in your text, the common variance and the specific variance). If we denote the communality by $h_i^2$ and the specificity by $u_i^2$, the variance of a given variable can be written

$$\mathrm{Var}(x_i) = h_i^2 + u_i^2.$$

Numerically, factor analysis tries to solve for the factor loadings $l_{ij}$ and the communalities $h_i^2$. What your text refers to as error variance is the remaining variance that is unaccounted for by the factor analysis.

Common Factor Analysis versus Component Analysis: Component analysis is the preferred method when the goal of the research is data reduction, while common factor analysis is preferred when the goal is to identify latent relationships.

Component Analysis: In component analysis, the basic idea is to find the first $m$ uncorrelated factors (ranked by the amount of variance explained) that explain the greatest proportion of the variance in the variables. When the original variables are standardized, the factor loading $l_{ij}$ is the correlation between $x_i$ and $F_j$. The factors themselves are derived from the first $m$ principal components $C_j$; that is, $F_j = C_j/\mathrm{Var}(C_j)^{1/2}$, so that the factors have unit variance. The correlation matrix being factored has ones down the diagonal, so we are factoring the total variance. This approach thus focuses on explaining as much of the variance as possible, so that the factors capture as much of the information as possible and are then useful in other analyses such as regression.

Common Factor (also called Iterated Component) Analysis: Here the 1s in the diagonal of the correlation matrix are replaced with the communalities. With the communalities in the diagonal we are factoring only the variance associated with the common factors, so this approach selects factors that maximize the total communality and hence focuses on the underlying relationships. Common factor analysis is also called iterated component analysis because the solution is found iteratively. The steps are outlined below:

1. Find the initial communalities.
2. Substitute the communalities for the diagonal elements in the correlation matrix.
3. Extract $m$ principal components from the modified matrix.
4. Multiply the principal component coefficients by the standard deviation of the respective principal components to obtain the factor loadings.
5. Compute new communalities from the computed factor loadings.

6. Replace the communalities in step 2 with these new communalities and repeat steps 3, 4, and 5.
7. Continue iterating, stopping when the communalities change by no more than a very small amount.

We do not need to be too concerned with these details, because SAS performs it all for us.

Algebraic Explanation of Component Factor Analysis

In factor analysis the variables are usually standardized,

$$x_i = (X_i - \bar{X}_i)/S_i,$$

so that they all have unit variance: $\mathrm{Var}(x_i) = 1$. Recall that

$$\mathrm{Var}(x_i) = 1 = h_i^2 + u_i^2,$$

the sum of the communality and the specificity. Component factor analysis starts with the principal components, linear combinations of the variables that are mutually independent and orthogonal. The $j$th principal component of the $p$ variables is

$$C_j = a_{1j}x_1 + a_{2j}x_2 + \cdots + a_{pj}x_p.$$

It can be shown algebraically that the system of all $p$ principal component equations can be inverted (the coefficient matrix is orthogonal, so its inverse is its transpose) to give

$$x_1 = a_{11}C_1 + a_{12}C_2 + \cdots + a_{1p}C_p$$
$$\vdots$$
$$x_p = a_{p1}C_1 + a_{p2}C_2 + \cdots + a_{pp}C_p$$

Now, since $F_j = C_j/\mathrm{Var}(C_j)^{1/2}$, we have $C_j = F_j\,\mathrm{Var}(C_j)^{1/2}$, so that, keeping the first $m$ components (the discarded components are absorbed into the unique factor $\epsilon_i$),

$$x_i = a_{i1}\mathrm{Var}(C_1)^{1/2}F_1 + \cdots + a_{im}\mathrm{Var}(C_m)^{1/2}F_m + \epsilon_i.$$

Comparing this with $x_i = l_{i1}F_1 + l_{i2}F_2 + \cdots + l_{im}F_m + \epsilon_i$ shows that

$$l_{ij} = a_{ij}\,\mathrm{Var}(C_j)^{1/2}.$$

So,

$$\mathrm{Var}(x_i) = \mathrm{Var}(l_{i1}F_1 + l_{i2}F_2 + \cdots + l_{im}F_m + \epsilon_i) = l_{i1}^2\mathrm{Var}(F_1) + l_{i2}^2\mathrm{Var}(F_2) + \cdots + l_{im}^2\mathrm{Var}(F_m) + \mathrm{Var}(\epsilon_i) = \sum_j l_{ij}^2 + u_i^2.$$

Recalling that $\mathrm{Var}(x_i) = 1 = h_i^2 + u_i^2$, it follows that $h_i^2 = \sum_j l_{ij}^2$. The take-home message is that the factors are found by finding the principal components, the loadings are found as the correlations between the factors and the variables, and component factor analysis is optimized to explain as much of the total variance as possible by way of principal components.

US Crime Data: As you may have already guessed from the SAS code, for these data, as will be the case most often, we will use the component method. The component method is the default and makes fewer assumptions, whereas the common factor method often has problems such as multiple solutions and inestimable communalities. Most of the time, the component method is the safer choice.

Criteria for the Number of Factors to Extract

There are several criteria for stopping the selection of factors; no one method is agreed to be superior to the others.

1. Latent Root: This approach keeps only those factors whose associated eigenvalues (latent roots) are greater than or equal to 1. The criterion is based on theoretical rationales developed using true population correlation coefficients. It appears to correctly estimate the number of factors when the communalities are high and the number of variables is not too large. With low communalities, the criterion can be adjusted by setting the cut-off value for the latent roots equal to the average communality.

2. A Priori: Here the number of factors is based on the investigator's expert knowledge of the number of factors.

3. Percentage of Variance Explained: This method also relies on expert knowledge and the goals of the researcher. The investigators decide beforehand how much of the variance needs to be explained for the results to be meaningful, useful, and achievable.

4. Scree Test: The eigenvalues are plotted against the number of factors, and we choose the cut-off at a sharp decrease in slope, indicating sharply decreasing improvement per additional factor.

5. Heterogeneity of Respondents: We use this method when certain variables do not load strongly until later factors. In general, the later the factor, the less variance it explains. However, if certain variables we consider important do not load strongly on the first factors, we may want to retain the later factors where these variables do load.
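For reference, these extraction choices map onto PROC FACTOR options. The option names below are standard SAS, but the exact runs are illustrative sketches, not code from the notes: METHOD=PRINCIPAL with PRIORS=ONE gives the component method, METHOD=PRINIT with PRIORS=SMC gives the iterated common factor method, MINEIGEN=1 implements the latent root criterion, NFACTORS= imposes an a priori number of factors, and SCREE requests a scree plot.

* Component analysis (the default), latent-root criterion, scree plot. ;
proc factor data=uscrime method=principal priors=one mineigen=1 scree;
    var R Ed LF W X Age M N NW Ex0 Ex1 So;
run;

* Iterated common factor analysis, for contrast, with an a priori      ;
* choice of three factors.                                             ;
proc factor data=uscrime method=prinit priors=smc nfactors=3;
    var R Ed LF W X Age M N NW Ex0 Ex1 So;
run;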

SAS computes the eigenvalues and by default retains the first few factors, all of which have eigenvalues greater than or equal to one.

                         The FACTOR Procedure
              Initial Factor Method: Principal Components

      Eigenvalues of the Correlation Matrix: Total = 12 Average = 1

         Eigenvalue    Difference    Proportion    Cumulative
     1   5.82933528    3.38024207    0.4858        0.4858
     2   2.44909321    1.02419777    0.2041        0.6899
     3   1.42489544    0.75764705    0.1187        0.8086
     4   0.66724839    0.24157344    0.0556        0.8642
     5   0.42567495    0.08278861    0.0355        0.8997
     6   0.34288634    0.06196322    0.0286        0.9283
     7   0.28092313    0.04394889    0.0234        0.9517
     8   0.23697424    0.06752310    0.0197        0.9714
     9   0.16945114    0.05079557    0.0141        0.9855
    10   0.11865557    0.06892260    0.0099        0.9954
    11   0.04973297    0.04460364    0.0041        0.9996
    12   0.00512933                  0.0004        1.0000

         3 factors will be retained by the MINEIGEN criterion.

Stage 5: Interpreting the Factors

The idea is to look at where the high loadings are, to see whether certain groups of variables load high on certain factors. Interpretation can sometimes be made easier with factor rotation, in which the axes are rotated according to various maximization criteria. The basic idea is to find rotations that produce loadings that maximally distinguish the variables from each other. Cross loadings occur when a variable has similar loadings across different factors; if a variable has cross loadings, we may want to delete it.

1. Quartimax: This method tries to maximally differentiate the loadings across rows. I find this method the easiest to interpret. However, it apparently fails often, usually creating a first factor with high loadings on most or all variables.

2. Varimax: This method maximizes the differentiation across columns. It apparently usually produces more stable estimates.

3. Equimax: This method simultaneously performs Quartimax and Varimax. SAS does not include this method in its toolbox.

Our general approach will be to use both Quartimax and Varimax, compare the loadings, and choose the better one, i.e., the one that produces the fewest cross loadings. SAS output for the rotated loadings, using both Quartimax and Varimax, is shown below.
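The calls that produced the rotated output are not reproduced in the notes; they would look something like the following sketch (ROTATE= and NFACTORS= are standard PROC FACTOR options, but the exact form here is assumed):

* Quartimax rotation of the three retained components. ;
proc factor data=uscrime nfactors=3 rotate=quartimax;
    var R Ed LF W X Age M N NW Ex0 Ex1 So;
run;

* Varimax rotation, for comparison.                    ;
proc factor data=uscrime nfactors=3 rotate=varimax;
    var R Ed LF W X Age M N NW Ex0 Ex1 So;
run;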

Quartimax Rotation

                         The FACTOR Procedure
                     Rotation Method: Quartimax

                       Rotated Factor Pattern

              Factor1    Factor2    Factor3
R    R       -0.01341    0.88262    0.28368
Ed   Ed      -0.77515    0.21671    0.41995
LF   LF      -0.34502    0.05705    0.70661
W    W       -0.80778    0.49241    0.03083
X    X        0.89676   -0.22525   -0.01739
Age  Age      0.80831   -0.13087    0.22285
M    M       -0.17914    0.00606    0.86321
N    N       -0.10630    0.59252   -0.53969
NW   NW       0.86249    0.20262   -0.17768
Ex0  Ex0     -0.47362    0.84565   -0.08734
Ex1  Ex1     -0.48852    0.83152   -0.10051
So   So       0.86101    0.00977   -0.22470

                 Variance Explained by Each Factor

                  Factor1      Factor2      Factor3
                4.8217139    2.9383889    1.9432212

Varimax Rotation

                         The FACTOR Procedure
                      Rotation Method: Varimax

                       Rotated Factor Pattern

              Factor1    Factor2    Factor3
R    R        0.11542    0.87012    0.29872
Ed   Ed      -0.70530    0.29393    0.49023
LF   LF      -0.27416    0.07754    0.73512
W    W       -0.73823    0.58206    0.11025
X    X        0.85902   -0.32754   -0.10016
Age  Age      0.80322   -0.23008    0.14851
M    M       -0.10270    0.00335    0.87561
N    N       -0.08119    0.61535   -0.51807
NW   NW       0.86203    0.10562   -0.24951
Ex0  Ex0     -0.37595    0.89709   -0.03088
Ex1  Ex1     -0.39350    0.88516   -0.04293
So   So       0.83373   -0.08440   -0.29949

                 Variance Explained by Each Factor

                  Factor1      Factor2      Factor3
                4.2655137    3.3337570    2.1040532

Examining the loadings from both methods, we see that N, population size, has two almost equally weighted, somewhat low loadings. We may want to delete this variable and repeat the analysis. Both rotation methods produce comparable results, with no cross-loadings other than N. The factor patterns seem clearer for the Quartimax rotation, so we will choose that one.

We also want to assess the communalities. Recall that the communalities are the proportions of variance in each variable explained by the factors. If no factor explains a reasonable amount of the variance for a given variable, that variable is a candidate for deletion. A rule of thumb is that all variables having communalities less than 0.50 should be deleted unless there is a good reason not to.
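One point the notes leave implicit (a standard result, stated here for completeness): orthogonal rotations such as Quartimax and Varimax do not change the communalities. If $L$ is the loading matrix and $T$ an orthogonal rotation matrix ($TT' = I$), the rotated loadings are $L^* = LT$ and

$$h_i^2 = \sum_j l_{ij}^{*2} = (L^*L^{*\prime})_{ii} = (LTT'L')_{ii} = (LL')_{ii} = \sum_j l_{ij}^2,$$

so each variable's communality is the same before and after rotation.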

Communalities

            Final Communality Estimates: Total = 9.703324

       R          Ed         LF         W          X          Age
   0.85966771 0.82417233 0.62158402 0.89592513 0.85522298 0.72015284

       M          N          NW         Ex0        Ex1        So
   0.77725912 0.65364897 0.81651329 0.94706233 0.94018788 0.79192735

All of our communalities are greater than 0.50, though those of LF and N are somewhat low. We now have three borderline reasons for deleting N (a low communality plus cross-loadings under both rotations), so we will. LF is borderline only in its communality and has a clear factor loading pattern, and we might be particularly interested in this variable, since employment conditions are fairly important in the overall quality of life in a state. We will retain this variable.

Final Factor Loadings

                       Rotated Factor Pattern

              Factor1    Factor2    Factor3
R    R       -0.03036    0.89299    0.25338
Ed   Ed      -0.76883    0.21331    0.42774
LF   LF      -0.32050    0.02980    0.78285
W    W       -0.82121    0.47933    0.02607
X    X        0.90431   -0.22254   -0.00231
Age  Age      0.81709   -0.09567    0.19063
M    M       -0.15523    0.03678    0.84851
NW   NW       0.85089    0.22191   -0.20996
Ex0  Ex0     -0.50051    0.83587   -0.10834
Ex1  Ex1     -0.51573    0.82411   -0.12575
So   So       0.85120    0.04534   -0.28232

                 Variance Explained by Each Factor

                  Factor1      Factor2      Factor3
                4.8437004    2.5627433    1.7683426

            Final Communality Estimates: Total = 9.174786

       R          Ed         LF         W          X          Age
   0.86255536 0.81956502 0.71645745 0.90482641 0.86731048 0.71312626

       M          NW         Ex0        Ex1        So
   0.74542180 0.81734132 0.96092849 0.96095334 0.80630034
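As a quick arithmetic check of $h_i^2 = \sum_j l_{ij}^2$ derived earlier, the final communality of R can be recovered from its three loadings:

$$h_R^2 = (-0.03036)^2 + (0.89299)^2 + (0.25338)^2 \approx 0.00092 + 0.79743 + 0.06420 = 0.86255,$$

which matches the reported value 0.86255536.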

Now comes the more subjective step: labeling the factors. The first step is to separate the variables according to their highest factor loading. The table below shows the group of variables that load highest on the first factor, then the group that load highest on the second factor, and likewise for the third factor. Our job now is to see what general label we might be able to give each of the three groups. Can you think of meaningful labels?

Variable    Factor1    Factor2    Factor3
Ed         -0.76883    0.21331    0.42774
W          -0.82121    0.47933    0.02607
X           0.90431   -0.22254   -0.00231
Age         0.81709   -0.09567    0.19063
NW          0.85089    0.22191   -0.20996
So          0.85120    0.04534   -0.28232
R          -0.03036    0.89299    0.25338
Ex0        -0.50051    0.83587   -0.10834
Ex1        -0.51573    0.82411   -0.12575
LF         -0.32050    0.02980    0.78285
M          -0.15523    0.03678    0.84851

The first group includes: education; the median value of assets or family income; the number of families per 1000 earning below 1/2 the median income; the number of males of age 14-24 per 1000; the number of non-whites per 1000; and the dummy variable indicating whether the state is in the southern US. Is there a label we can give this group of variables? We might call this the high-risk factor, since certain levels of all these variables are known to increase the risk of crime.

The second group includes: the crime rate; per capita expenditures on police in 1960; and per capita expenditures on police in 1959. The second factor could be called the crime factor, because it includes the crime rate and the money spent on the police force.

The third group includes: labor force participation per 1000 males of age 14-24, and the number of males per 1000 females. This factor could be called the male factor, because it could be thought of as measuring the life satisfaction of the males in the state.

Stage 6: Validation of Factor Analysis

Use of a Confirmatory Perspective: We'll cover this in more detail when we get to structural equation modeling.

Assessing Factor Structure Stability: This is a form of cross validation and involves splitting the sample in two and running the analysis on both halves to see how they

compare to each other and to the results of the analysis of the full data set. If the answers vary wildly, then the solution is not robust.

Detecting Influential Observations: If any unusual observations are noticed during exploratory analysis, they should be set aside to see how greatly they affect the results. If the results change drastically, the researcher should consider removing the influential observations.

Stage 7: Additional Uses of Factor Analysis Results

We will skip this section.

We are presumably not experts on these data, and so our interpretation may seem a little shady. Rest assured that when you are intimately familiar with your data, this process is much more fun and satisfying. Admittedly, many people feel that factor analysis itself is a little shady, but this is largely an old view that is dying as the utility of the method for making sense of large numbers of variables in hugely complicated areas of study becomes clear. This also explains the huge popularity of structural equation modeling: SEM is seen as a more rigorous sort of factor analysis.

5 Appendix

5.1 References

1. http://www.psych.cornell.edu/darlington/factor.htm

2. Afifi, Abdelmonem, Virginia Clark, and Susanne May (2004). Computer-Aided Multivariate Analysis, 4th ed. Chapman & Hall/CRC.

3. Darlington, Richard B., Sharon Weinberg, and Herbert Walberg (1973). Canonical variate analysis and related techniques. Review of Educational Research, 453-454.

4. Rubenstein, Amy S. (1986). An item-level analysis of questionnaire-type measures of intellectual curiosity. Ph.D. thesis, Cornell University.