Exploratory Factor Analysis


Definition

Exploratory factor analysis (EFA) is a procedure for learning the extent to which k observed variables might measure m abstract variables, where m is less than k. In EFA, we indirectly measure nonobservable behavior by taking measures on multiple observed behaviors. Conceptually, in using EFA we can assume either nominalist or realist constructs, yet most applications of EFA in the social sciences assume realist constructs.

Assumptions

1. Typically, realism rather than nominalism: abstract variables are real in their consequences.
2. Normally distributed observed variables.
3. Continuous-level data.
4. Linear relationships among the observed variables.
5. Content validity of the items used to measure an abstract concept.
6. E(ei) = 0 (random error).
7. All observed variables are influenced by all factors (see: model specification in CFA).
8. A sample size greater than 30 (more is better).

Terminology (lots of synonyms): Factor = Abstract Concept = Abstract Construct = Latent Variable = Eigenvector.

Comparison of Exploratory Factor Analysis and OLS Regression

In OLS regression, we seek to predict a point: a value of a dependent variable (y) from the value of an independent variable (x). The diagram in the original notes (not reproduced in this transcription) indicates the value of y expected from a given value of x. The error represents the extent to which we fail in predicting y from x.
In EFA, we seek to predict a vector that best describes the relationship among the items used to measure it. The diagram in the original notes (not reproduced in this transcription) indicates the value of the vector F expected from the correlation of X1 and X2. The error represents the extent to which we fail in predicting the vector from the correlation of X1 and X2. EFA assumes that X1 and X2 are linearly dependent, based upon their relationship to some underlying (i.e., abstract, latent) variable (i.e., construct, concept).

In OLS regression, we solve the (standardized) equation:

Y = βX + ε, where:

Y is a vector of dependent variables,
β is a vector of parameter estimates,
X is a vector of independent variables,
ε is a vector of errors.

In EFA, we solve the (standardized) equation:

X = ΛF + δ, where:

X is a vector of k observed variables,
Λ is a vector of k parameter estimates,
F is a vector of m factors (abstract concepts, latent variables),
δ is a vector of k errors.
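To make the contrast concrete, here is a minimal numpy sketch (not part of the original notes; the loadings of .8 and .6, the seed, and the sample size are illustrative assumptions). For standardized variables the OLS slope is simply the correlation (a point), while EFA extracts the eigenvector of the correlation matrix (a vector) along which the items co-vary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
F = rng.standard_normal(n)                      # latent factor
lam1, lam2 = 0.8, 0.6                           # assumed loadings
X1 = lam1 * F + np.sqrt(1 - lam1**2) * rng.standard_normal(n)
X2 = lam2 * F + np.sqrt(1 - lam2**2) * rng.standard_normal(n)

# OLS predicts a point: the standardized slope of X2 on X1 is their correlation.
beta = np.corrcoef(X1, X2)[0, 1]

# EFA predicts a vector: the eigenvector of R with the largest eigenvalue.
R = np.corrcoef(np.vstack([X1, X2]))
evals, evecs = np.linalg.eigh(R)                # eigenvalues in ascending order
first_vector = evecs[:, -1]

print(f"standardized OLS slope (r): {beta:.3f}")
print(f"first eigenvector of R:     {first_vector}")
```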
The EFA Model

Consider this simple model that consists of a single factor with two observed variables:

X1 <-- F --> X2

Note: When we address the topic of confirmatory factor analysis, we will designate abstract concepts with Greek letters (e.g., ξ and η). Because most literature on EFA uses the designation F, we will use it in this lecture.

We have two equations to solve:

X1 = λ1 F + δ1 e1
X2 = λ2 F + δ2 e2

where ei is the standardized error (unique factor) for item i and δi is its loading.

1. var(Xi) = E(Xi - X̄)².
2. Note: for standardized variables, the mean of X = 0. Thus, var(Xi) = E(Xi)².
3. Xi = λi F + δi ei.
4. var(Xi) = E(λi F + δi ei)².
5. var(Xi) = λi² E[F²] + δi² E[ei²] + 2 λi δi E[F ei].
6. var(Xi) = λi² var(F) + δi² var(ei) + 2 λi δi cov(F, ei).

Assume:

1. cov(F, ei) = 0 (i.e., random errors in measurement).
2. var(F) = 1 (i.e., standardized measure of F, or ontologically, "the construct has a unit value").
3. var(ei) = 1 (i.e., standardized measure of ei).

Therefore:

1. var(Xi) = λi² + δi² = 1 (i.e., X is a standardized variable).
2. Because cov(F, Xi) = λi var(F) + δi cov(F, ei) = λi,
3. then, for standardized variables, λi = r(F, Xi) (i.e., the correlation of F and Xi).
4. Example: cov(X1, X2) = λ1 λ2 var(F) = λ1 λ2 = r(X1, X2) (i.e., the correlation of X1 and X2).
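These identities can be verified by simulation. A minimal sketch, assuming illustrative loadings of .7 and .5 (values not taken from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
lam1, lam2 = 0.7, 0.5              # assumed loadings; delta_i = sqrt(1 - lam_i^2)

F  = rng.standard_normal(n)        # var(F) = 1
e1 = rng.standard_normal(n)        # var(e_i) = 1, cov(F, e_i) = 0
e2 = rng.standard_normal(n)

X1 = lam1 * F + np.sqrt(1 - lam1**2) * e1
X2 = lam2 * F + np.sqrt(1 - lam2**2) * e2

print(np.var(X1))                  # ~1:          var(X_i) = lam_i^2 + delta_i^2 = 1
print(np.corrcoef(F, X1)[0, 1])    # ~lam1:       lambda_i = r(F, X_i)
print(np.corrcoef(X1, X2)[0, 1])   # ~lam1*lam2:  r(X1, X2) = lam1 * lam2
```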
Summary:

1. The parameter estimate (i.e., "factor loading") λi = r(F, Xi) (i.e., for principal components factor analysis, this parameter is identical to the standardized regression coefficient β).
2. The product of the factor loadings for two variables caused by the same factor (i.e., factorial complexity = 1) is equal to the correlation between the two observed variables.
3. The "communality" or item reliability of Xi is equal to λi². In principal components exploratory factor analysis, the communality of Xi is identical in concept to the coefficient of determination (R-square) in OLS regression analysis.

[Note: Later, we will discuss various forms of EFA. Principal components EFA relies upon the unweighted correlation matrix among the observed variables, and therefore is analogous to OLS regression analysis with a known number of factors.]

Estimating the EFA Model

1. Xi is caused by Fm, where m = the number of factors.
2. F causes Xi, where i = 1, ..., k and k = the number of items that are caused by F.
3. Xi = λi Fm + δi ei.
4. To solve this equation, we would need to measure F.
5. Our approach:
   a. We know Xi (the observed variable).
   b. We will estimate λi and use this estimate to determine δi [i.e., λi² + δi² = 1].
6. Because Xi can be caused by m factors, EFA becomes an exercise in determining the number of factors that cause Xi and the parameter estimates (λi) of each F on each Xi.

Determining the Number of Factors That Affect Each Observed Variable

A factor is an abstract concept. In a realist (vs. nominalist) sense, this concept "causes" observable behavior in the same manner that the length of a table top "causes" the ruler to measure its longest dimension as its length. If one were to measure the longest dimension of a table top twice, and the table top did not change in its dimensions between the two measurements of it, and the measurements were taken carefully, and the measuring instrument (i.e., the ruler) were stable and consistent rather than wiggly and wobbly, then the two measurements should equal one another exactly.

Similarly, if one were to measure self-esteem twice using, for example, the Rosenberg Self-Esteem Scale, and self-esteem did not change between the two measurements of it, and the measurements were taken carefully, and all ten items in the Rosenberg Self-Esteem Scale had equal content validity, and the Rosenberg Self-Esteem Scale itself was a stable and consistent measuring instrument, then people should respond equally to all ten items on the scale (taking into account that half the items are worded in reverse conceptual order). This result should occur because one's self-esteem "causes" one to respond accordingly to the items on the Rosenberg Self-Esteem Scale.

In mathematical terms, if the above conditions for measuring self-esteem are met, then the matrix of responses for the ten items on the scale should have a rank of 1, wherein the figures shown in columns 2 through 10 should be identical to those found in column 1 (assuming the items define the columns and the cases define the rows). That is, once we know a person's response to the first question in the Rosenberg Self-Esteem Scale, then we know the person's responses to the remaining nine items. Conceptually, given that each item on the scale is intended equally to reflect self-esteem, this outcome is exactly what we would expect to observe. Thus, the ten items on the Rosenberg Self-Esteem Scale would represent a single, abstract concept (i.e., factor): self-esteem.
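The rank logic can be demonstrated directly. A minimal sketch (the trait scores and the noise level are hypothetical), showing why rank works in the idealized case but breaks down the moment responses are imperfect:

```python
import numpy as np

# A perfectly reliable 5-person x 10-item scale: every item is the same
# function of the latent trait, so columns 2-10 duplicate column 1.
trait = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # hypothetical self-esteem scores
responses = np.outer(trait, np.ones(10))       # 5 x 10 response matrix

print(np.linalg.matrix_rank(responses))        # 1: a single factor

# Realistic data: unequal item validities and inconsistent responding.
rng = np.random.default_rng(2)
noisy = responses + 0.3 * rng.standard_normal(responses.shape)
print(np.linalg.matrix_rank(noisy))            # 5: full rank; rank is all-or-nothing
```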
With this conceptual and mathematical logic in mind, we know we can determine the number of factors affecting responses to the i = 1, ..., k items by calculating the rank of the matrix of responses to the observed variables (i.e., X), because a rank less than k indicates singularity in the matrix (i.e., at least two columns are measuring the same thing). This approach is logically consistent, but it fails in practice because (1) not all items in a scale have equal content validity in reflecting the abstract concept and (2) people do not necessarily behave in a logically consistent manner. Therefore, to determine the number of factors causing responses to a set of observed variables, we need a measure of linear dependency that is probabilistic rather than deterministic.

Consider the relationship between the rank and determinant of a matrix for a system of two linear equations, wherein the rows and columns provide unique information:

2x + 3y = 13
4x + 5y = 23

or, in matrix form:

[2 3] [x]   [13]
[4 5] [y] = [23]

Solve for x, y:

1. 2x = 13 - 3y
2. x = 13/2 - (3/2)y
3. 4(13/2 - (3/2)y) + 5y = 23
4. 26 - 6y + 5y = 23
5. y = 3
6. x = 13/2 - 9/2 = 2

Now, consider the relationship between the rank and determinant of a matrix for a system of two linear equations, wherein the rows and columns do not provide unique information. That is, note that the second equation is identical to 2 times the first equation:

2x + 6y = 22
4x + 12y = 44

or, in matrix form:

[2  6] [x]   [22]
[4 12] [y] = [44]

Solve for x, y:

1. 2x = 22 - 6y
2. x = 11 - 3y
3. 4(11 - 3y) + 12y = 44
4. 44 - 12y + 12y = 44
5. 44 = 44

Result: Because of the linear dependence between row 1 and row 2 of the matrix, we cannot find a unique solution for x and y.
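The same two systems can be handed to a linear solver; a minimal sketch (the arrays simply restate the systems above):

```python
import numpy as np

A1 = np.array([[2.0, 3.0], [4.0, 5.0]])
b1 = np.array([13.0, 23.0])
print(np.linalg.det(A1))            # -2.0: nonzero, rows carry unique information
print(np.linalg.solve(A1, b1))      # [2. 3.]: x = 2, y = 3

A2 = np.array([[2.0, 6.0], [4.0, 12.0]])
b2 = np.array([22.0, 44.0])
print(np.linalg.det(A2))            # ~0: perfect linear dependence
try:
    np.linalg.solve(A2, b2)
except np.linalg.LinAlgError as err:
    print("no unique solution:", err)   # singular matrix
```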
Consider the rank of the second matrix:

[2  6]
[4 12]

Multiply Row 1 by 1/2:

[1  3]
[4 12]

Multiply Row 1 by -4 and add to Row 2:

[1 3]
[0 0]

The rank of this matrix equals 1. Thus, if a matrix has a perfect linear dependence, then its rank is less than k (the number of rows and columns). So, we can determine the number of factors by calculating the rank of the matrix, but this procedure requires perfect linear dependence, a result that is highly unlikely to occur in practice.

Consider the definition of an eigenvector: X is an eigenvector of a matrix A if there exists a scalar λ such that AX = λX. That is, an eigenvector is a representation of linear dependence in a square matrix. To find the eigenvector(s) of a matrix, we solve for X:

1. AX = λX.
2. AX - λX = 0.
3. However, it is impossible to subtract a scalar from a matrix. It is possible, however, to subtract a scalar from the diagonal of a matrix. So, we insert "1" into the equation in the form of the identity matrix.
4. (A - λI)X = 0.
5. Let B = (A - λI), such that BX = 0.
6. Note: To solve this equation, we would need to calculate the inverse of B. Not all matrices have an inverse. If a matrix has a rank less than k, then the matrix does not have an inverse. Also, if a matrix has a rank less than k, then the determinant of the matrix = 0.
7. If BX = 0, and B has an inverse, then X = B⁻¹0 and X = 0, which means that the matrix A has no eigenvector, meaning no indication of linear dependence.
8. Thus, X is an eigenvector of A if and only if B does not have an inverse.
9. If B does not have an inverse, then Det(B) = 0 (and therefore perfect linear dependence).
10. So, X is an eigenvector of A if and only if: Det(A - λI) = 0 [i.e., the characteristic equation].

Unlike the rank of a matrix, which is deterministic (all or nothing), the determinant of a matrix is continuous, ranging in value from minus infinity to plus infinity, and so can support a probabilistic decision criterion. Therefore, the determinant of a matrix can be used to indicate the degree of linear dependence in a square matrix. Thus, the solution to estimating the EFA equation is to establish a criterion of linear dependence by which to deem a matrix as containing one or more eigenvectors (i.e., factors). The approach is to solve for λ, which is called the eigenvalue of the matrix. Handwritten notes attached to this course packet describe the Power Method and the Gram-Schmidt Algorithm as procedures for estimating λ, wherein the Power Method is a logically correct but impractical approach and the Gram-Schmidt Algorithm is the approach used in statistical analysis packages. An example of the matrix algebra used by the Gram-Schmidt Algorithm is attached to the course packet.
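As a companion to the attached handwritten notes, here is a minimal sketch of the Power Method (the iteration count is arbitrary, and the 2x2 matrix is the one examined in the next section):

```python
import numpy as np

def power_method(A, iters=50):
    """Return the dominant eigenvalue/eigenvector of A by repeated multiplication."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)           # re-normalize to unit length each step
    eigenvalue = x @ A @ x                  # Rayleigh quotient for the unit vector x
    return eigenvalue, x

A = np.array([[1.0, 2.0], [4.0, 3.0]])
lam, vec = power_method(A)
print(lam)                                  # ~5.0: the dominant eigenvalue
print(np.linalg.det(A - lam * np.eye(2)))   # ~0: satisfies Det(A - lambda*I) = 0
```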
Calculation of Λ

After determining the number of factors in a matrix, the next step in estimating the EFA equation is to calculate the parameters in Λ (discussed in detail below).

Summary

Determining the number of factors underlying a matrix of observed variables involves calculating the extent to which the matrix contains linear dependency. The rank of a matrix indicates perfect linear dependency, which is unlikely to occur in practice. The determinant of the equation for an eigenvector (i.e., wherein an eigenvector represents a factor) varies continuously, so we can calculate the determinant associated with an eigenvector to infer the presence of a factor. We achieve this goal by establishing a decision criterion by which to deem a matrix as containing one or more linear dependencies. We will discuss a mathematical logic for establishing this criterion later in this course. For principal components EFA, we will set this criterion as λ = 1: if an eigenvector has an associated eigenvalue of 1 or greater, then we will state that this vector represents an underlying abstract construct.

The number of eigenvectors in a matrix of k columns and rows is equal to k. Thus, the Gram-Schmidt Algorithm will calculate k eigenvalues for a matrix of size k. The calculation of eigenvalues is a "zero-sum" game in that the degree of linear dependency calculated for one eigenvector reduces the size of the eigenvalue for the next vector, and so on. In principal components EFA, for example, the sum of the eigenvalues is equal to k.

Indeterminacy and Establishing a Scale

Unfortunately, the calculation of eigenvectors from eigenvalues is indeterminate because of the linear dependence(s) in X. Consider this matrix:

A = [1 2]
    [4 3]

The eigenvalues of A are -1 and 5. Solve for X: (A - λI)X = 0 at λ1 = -1:

1. (A - (-1)I)X = 0.
2. The vector X is [X1, X2]'.
3. Then:
   (1 - (-1)) X1 + 2 X2 = 0
   4 X1 + (3 - (-1)) X2 = 0
4. So,
   [2 2] [X1]   [0]
   [4 4] [X2] = [0]
   or:
   2X1 + 2X2 = 0
   4X1 + 4X2 = 0

These equations cannot be solved!
5. To solve the equations, one of the values in the X matrix must be set to a value.
6. Let X2 = 1, which indicates a "unit vector," or if you will, "the vector has the value of itself." This process is called "setting the scale" for the equation.
7. If X2 = 1, then 2X1 + 2 = 0, and X1 = -1.
8. Solve for X: (A - λI)X = 0 at λ2 = 5:

   (1 - 5) X1 + 2 X2 = 0
   4 X1 + (3 - 5) X2 = 0

   or:

   -4X1 + 2X2 = 0
   4X1 - 2X2 = 0

   (X2 is set to 1.) So, X1 = 0.5.

The equation can be solved, but only if one of the vectors is set to a value of 1. Therefore, the matrix of factor loadings is arbitrary because the eigenvectors are arbitrary.

The Philosophy of the Social Sciences

In the social sciences we measure variables that have no mass and therefore cannot be directly observed with the senses. At the same time, the social sciences are conducted under the same rules of theory development and testing as those used in the physical and life sciences. There are no exceptions or exemptions in science. If the social sciences must operate under the same rules of theory development and testing as required of all sciences, yet without the opportunity to observe phenomena through the senses (or extensions of them, such as microscopes, telescopes, and such), then some concession must be made. The concession made is the indeterminacy of measuring abstract concepts. The social sciences must assume that the abstract vector has some fixed length. Typically, this fixed length is set to 1. The result of this concession is that, to some extent, all measures of abstract concepts are arbitrary.

Indeterminacy in Deriving Eigenvalues

1. Ontology: We must make a claim about reality. Realism: abstract concepts are real in their consequences. Abstract concepts "exist," and this existence is equal to itself (= 1).
2. Epistemology: We cannot measure something that has no concrete existence. X = ΛF + δ.
   a. Known: X, which is the vector of observed variables.
   b. We do not know the number of factors F or the scores on F. We use the G-S algorithm to determine eigenvalues for each eigenvector in R (the correlation matrix). An eigenvalue is the extent to which one eigenvector is correlated with another eigenvector. If an eigenvector "stands alone" or "to some extent represents an association with another eigenvector," then the eigenvalue will equal 1 or exceed 1, respectively. If the eigenvalue is ≥ 1, then we claim that we have determined the existence of an abstract variable.
   c. An eigenvalue is the extent to which an eigenvector must be "altered" to reduce the determinant of R to (near) zero, wherein the lower the determinant the greater the "singularity" of R, and the greater the extent to which we identify the existence of an abstract variable. Characteristic equation: Det(A - λI) = 0. Consider a matrix in which Row 2 is nearly the double of Row 1 (the example matrix in the original notes is not reproduced in this transcription). Setting the determinant to zero will "remove" Row 2, and thereby show singularity. If we "remove" Row 2, then we are "removing" much of the informational value of Row 1 as well. Thus, λ will be higher than 1, indicating the existence of an abstract variable that affects both rows.
   d. We cannot solve the characteristic equation for an eigenvector unless we reduce the indeterminacy in the system of equations defined by A. One of the vectors of A must be set to a constant. Thus, ontologically, we have "set the scale" of our abstract variable to equal a constant (= 1). Note: In CFA, we can set the scale by setting one of the elements of Λ to 1.

Calculation of Factor Loadings

Procedures Other Than Maximum Likelihood

The calculation of the factor loadings (i.e., the Λ matrix) is:

[factor loadings] = [eigenvectors] × [eigenvalues]^(1/2)

That is, the factor loadings equal the reliability of the item in predicting the factor.

Maximum Likelihood Factor Analysis

For ML factor analysis the factor loadings (A) are estimated as:

R = AA' + U², where R = the correlation matrix, and U² = 1 - the item reliability (i.e., communality).

Maximum likelihood EFA calculates weights for each element in the matrix, wherein these weights represent the communality of each observed variable and where observed variables with higher communality are given more weight.

Consider the SAS output for the example labeled "Kim and Mueller: Tables 4-5, Figure 5." Note that the SAS output provides the variance explained by each factor, which equals the sum of the squared estimates for each observed variable on that factor; thus, the unweighted variance explained by Factor 1 equals the sum of the squared Factor 1 loadings (the numeric values are elided in this transcription).
The SAS output also provides the weights for each variable, which reflect the communality of each observed variable and where this communality has been further enhanced to the extent that its reliability is stronger than the reliability of the other observed variables. These weights are shown in the table labeled "Final Communality Estimates and Variable Weights." Therefore, the weighted variance explained by Factor 1 equals (.8² × 2.78) + (.7² × 1.96) + (.6² × 3.57) + (.0² × 2.78) + (.0² × 1.56) ≈ 4.02.

See: Harman, Harry H. Modern Factor Analysis, Third Edition. Chicago: The University of Chicago Press.

Principal Components EFA and OLS Regression

After calculating the factor scores, one can regress each observed variable on these scores to reproduce exactly the Λ matrix. The R-square for the OLS regression will equal the item reliability (i.e., communality) of the observed variable.

Factor Scales [Scores]

Once the EFA equation has been estimated, one can calculate scores on an abstract variable. The most common procedures are to calculate either the sum or the mean of responses to the observed variables caused by the factor. For example, to calculate a score on self-esteem, wherein EFA showed that the ten items on the Rosenberg Self-Esteem Scale are caused by a single abstract concept, one might add responses to the ten items on the scale. I recommend calculating the mean score across the ten items to retain the same measurement response scale as the one used for the ten observed variables. Other approaches to calculating factor scales account for varying item reliabilities in representing the abstract construct; three such approaches follow, with a code sketch after the third.

Regression Method

This method assumes that the observed variables represent the population of variables affected by the abstract concept (i.e., perfect content validity).

F̂ = X(R⁻¹Λ), where:

F̂ is the estimated score on the abstract variable,
X is the matrix of standardized scores on the observed variables,
Λ is the matrix of parameter estimates of the effects of F on X,
R⁻¹ is the inverse of the correlation matrix.

Recall that in OLS regression we estimate the equation Y = βX + ε. We assume that the errors are random and uncorrelated with Y or X. Thus, in OLS regression, we solve for β:

β̂ = (X'X)⁻¹X'Y
Similarly, in principal components factor analysis, we estimate the equation X = ΛF + δ. We assume that the errors are random and uncorrelated with X or F. Thus, in principal components factor analysis, we solve for Λ:

Λ̂ = (F'F)⁻¹F'X

Solving instead for F yields the equation shown above: F̂ = X(R⁻¹Λ). See Gorsuch, and see Harman (the page and formula numbers are elided in this transcription).

Least Squares Method

This method assumes that the observed variables represent a sample from the population of variables affected by the abstract concept (i.e., imperfect content validity).

F̂ = XΛ(Λ'Λ)⁻¹, where:

F̂ is the estimated score on the abstract variable,
X is the matrix of standardized scores on the observed variables,
Λ is the matrix of parameter estimates of the effects of F on X.

Bartlett's Criterion

This method gives more weight to observed variables with higher item reliability (i.e., imperfect content validity).

F̂ = XU⁻²Λ(Λ'U⁻²Λ)⁻¹, where:

F̂ is the estimated score on the abstract variable,
X is the matrix of standardized scores on the observed variables,
Λ is the matrix of parameter estimates of the effects of F on X,
U² is the diagonal matrix of 1 minus the item reliabilities (i.e., the unique variances).
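The three scoring formulas can be compared side by side. A minimal numpy sketch, assuming a hypothetical one-factor structure with loadings .8, .7, .6, and .5 (all values illustrative; the formulas are the ones given above):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-factor structure: loadings for four standardized items.
Lam = np.array([[0.8], [0.7], [0.6], [0.5]])          # k x m loading matrix
U2 = np.diag(1.0 - (Lam ** 2).ravel())                # uniquenesses: 1 - h_i^2
R = Lam @ Lam.T + U2                                  # implied correlation matrix

# Simulated standardized scores with correlation structure R.
X = rng.multivariate_normal(np.zeros(4), R, size=1000)

F_regression = X @ np.linalg.inv(R) @ Lam             # F = X (R^-1 Lam)
F_least_sq = X @ Lam @ np.linalg.inv(Lam.T @ Lam)     # F = X Lam (Lam' Lam)^-1
U2inv = np.linalg.inv(U2)
F_bartlett = X @ U2inv @ Lam @ np.linalg.inv(Lam.T @ U2inv @ Lam)

# The three estimators are highly, but not perfectly, correlated.
scores = np.column_stack([F_regression, F_least_sq, F_bartlett])
print(np.corrcoef(scores.T).round(3))
```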
Evaluation of Factor Scales

1. Factor scales can be correlated with one another even if the factors are orthogonal.
2. Correlations among oblique factor scales do not necessarily equal the correlations among the oblique factors.
3. A factor scale is said to be univocal if its partial correlation with the other factors = 0.
4. Factor scales include two indeterminacies: (1) they are based upon indeterminate parameter estimates, and (2) they do not account for unique error variance in F.

Reliability of Factor Scales

ρF = [var(F̂) - Σi (1 - hi²) wi²] / var(F̂), where:

ρF (the symbol rho) is the reliability of the factor scale,
wi are the factor score weights, from Λ'(R⁻¹),
var(F̂) is the variance of the estimated factor scores, calculated from the correlation matrix with all elements weighted by the wi.

Extraction Procedures in EFA

Various forms of EFA are defined, wherein these forms rely upon various assumptions about the nature of social reality. These forms and assumptions are described below. All forms of EFA rely upon the same algorithm to calculate eigenvalues: the Gram-Schmidt Algorithm (also: the QR and QL algorithms). Therefore, the various forms of EFA differ only in the matrix evaluated by the G-S Algorithm. The Gram-Schmidt Algorithm calculates k eigenvalues associated with k eigenvectors for a square matrix (i.e., the correlation matrix or some weighted version of it). The various forms of EFA, therefore, are defined solely by their treatment of the matrix of correlations among the observed variables prior to this matrix being evaluated using the G-S Algorithm.

Principal Components

Characteristic equation: Det(R - λI) = 0, where R is the correlation matrix among the observed variables (i.e., the X matrix) with 1's on the diagonal.

This is the "least squares" approach. Indeed, once the factor structure (i.e., the number of factors and the loadings of each X on each factor) is calculated, the scores on X and F can be input into OLS regression analysis to exactly reproduce the Λ and δ matrices. Principal components is the procedure most often applied in EFA. The criterion used to deem an eigenvector a factor is an eigenvalue of 1 or greater.

Principal Axis; Common Factor

Characteristic equation: Det(R1 - λI) = 0, where R1 is the correlation matrix among the observed variables (i.e., the X matrix) with the item reliabilities (i.e., communalities) on the diagonal.
The principal axis (or common factor) form of EFA assumes that the items in X will vary in their content validity as indicators of F. Therefore, the input matrix is weighted to account for differing item reliabilities among the items in X. Conducting principal axis EFA requires initial estimates of the item reliabilities. Recall that item reliability equals the coefficient of determination (R-square) for the item as one observed outcome of the abstract concept. Therefore, prior communalities (i.e., item reliabilities) can be estimated through a series of OLS regression equations (a code sketch follows this subsection). Consider a factor structure with a single factor and three observed variables. Prior communalities for each Xi are estimated as the R-square statistic for the regression of each Xi on the remaining elements in X:

X1 = X2 + X3 + e (R² = prior communality for X1).
X2 = X1 + X3 + e (R² = prior communality for X2).
X3 = X1 + X2 + e (R² = prior communality for X3).

Principal axis EFA is not often used. The criterion used to deem an eigenvector a factor is an eigenvalue of 0 or greater.
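As referenced above, here is a minimal sketch of the prior-communality step (the correlation matrix is hypothetical; the closed form 1 - 1/R⁻¹ii gives the same values as running the k separate OLS regressions):

```python
import numpy as np

def prior_communalities(R):
    """Squared multiple correlation (SMC) of each item on the remaining items,
    computed from the inverse correlation matrix; equivalent to the k OLS R-squares."""
    Rinv = np.linalg.inv(R)
    return 1.0 - 1.0 / np.diag(Rinv)

# Hypothetical 3-item correlation matrix.
R = np.array([[1.00, 0.56, 0.48],
              [0.56, 1.00, 0.42],
              [0.48, 0.42, 1.00]])

h2 = prior_communalities(R)
print(h2.round(3))

# Principal axis: replace the 1's on the diagonal with the prior communalities.
R1 = R.copy()
np.fill_diagonal(R1, h2)
print(np.linalg.eigvalsh(R1).round(3))   # factors: eigenvalues of 0 or greater
```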
Maximum Likelihood

Characteristic equation: Det(R2 - λI) = 0, where R2 is the correlation matrix among the observed variables (i.e., the X matrix) with weighted item reliabilities (i.e., communalities) on the diagonal. Observed variables with more reliability are given more weight.

R2 = U⁻¹(R - U²)U⁻¹: the correlation matrix with the unique variances removed from the diagonal, scaled on each side by the inverse square roots of the uniquenesses (derived from the prior communalities).

Maximum likelihood EFA assumes that the items in X will vary in their content validity as indicators of F. Therefore, the input matrix is weighted to account for differing item reliabilities among the items in X. The ML procedure calculates prior communalities in the same manner as is done for the principal axis procedure. The ML procedure is commonly used in EFA, especially when one assumes significant correlations among multiple factors. The criterion used to deem an eigenvector a factor is an eigenvalue of 0 or greater.

Alpha

Characteristic equation: Det(R3 - λI) = 0, where R3 is the correlation matrix among the observed variables (i.e., the X matrix) with weighted item reliabilities (i.e., communalities) on the diagonal. Observed variables with less reliability are given more weight (see: correction for attenuation).

R3 = H⁻¹(R - U²)H⁻¹: the correlation matrix with the unique variances removed from the diagonal, scaled on each side by the inverse square roots of the communalities, wherein U² + H² = 1.

Alpha EFA assumes that the items in X will vary in their content validity as indicators of F. Therefore, the input matrix is weighted to account for differing item reliabilities among the items in X, but giving more weight to items with less reliability. The alpha procedure calculates prior communalities in the same manner as is done for the principal axis procedure. I do not recall seeing a peer-reviewed publication that used alpha EFA. The criterion used to deem an eigenvector a factor is an eigenvalue of 0 or greater.

Image

Characteristic equation: Det(R4 - λI) = 0, where R4 is the correlation matrix among the observed variables (i.e., the X matrix) with weighted item reliabilities (i.e., communalities) on the diagonal. Prior communalities are adjusted to reflect that they are derived from a sample of the population.

R4 = (R - S²)R⁻¹(R - S²): the correlation matrix with the variances of the observed variables subtracted from the diagonal, multiplied on each side of the inverse correlation matrix, where S² = the diagonal matrix of the variances of the observed variables.

The image procedure calculates prior communalities in the same manner as is done for the principal axis procedure. I do not recall seeing a peer-reviewed publication that used image EFA. The criterion used to deem an eigenvector a factor is an eigenvalue of 0 or greater.

Unweighted Least Squares

Characteristic equation: Det(R - λI) = 0, where R is the correlation matrix among the observed variables (i.e., the X matrix) with 1's on the diagonal. This approach differs from principal components in that it uses an iterative procedure to calculate the factor loadings, as compared with the [eigenvectors] × [eigenvalues]^(1/2) procedure shown above. I do not recall seeing a peer-reviewed publication that used unweighted least squares EFA. The criterion used to deem an eigenvector a factor is an eigenvalue of 1 or greater.

Generalized Least Squares

Characteristic equation: Det(R - λI) = 0, where R is the correlation matrix among the observed variables (i.e., the X matrix) with 1's on the diagonal. This approach differs from principal components in that it relies upon a direct estimation of the factor loadings, as compared with the procedure shown above. I do not recall seeing a peer-reviewed publication that used generalized least squares EFA. The criterion used to deem an eigenvector a factor is an eigenvalue of 1 or greater.
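Since the extraction procedures differ only in the matrix handed to the eigenvalue routine, each input matrix can be built from one correlation matrix and inspected. A minimal sketch (the correlation matrix is the hypothetical one used above; prior communalities are SMCs, as described for the principal axis procedure):

```python
import numpy as np

R = np.array([[1.00, 0.56, 0.48],
              [0.56, 1.00, 0.42],
              [0.48, 0.42, 1.00]])
h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # prior communalities (SMCs)
U = np.diag(np.sqrt(1.0 - h2))               # square roots of the uniquenesses
H = np.diag(np.sqrt(h2))                     # square roots of the communalities
Uinv, Hinv = np.linalg.inv(U), np.linalg.inv(H)

R1 = R.copy()
np.fill_diagonal(R1, h2)                     # principal axis input
R2 = Uinv @ (R - U @ U) @ Uinv               # maximum likelihood weighting
R3 = Hinv @ (R - U @ U) @ Hinv               # alpha weighting

for name, M in [("principal components", R), ("principal axis", R1),
                ("maximum likelihood", R2), ("alpha", R3)]:
    print(name, np.linalg.eigvalsh(M)[::-1].round(3))
# For principal components the eigenvalues sum to k = 3 (criterion: >= 1);
# for the weighted matrices the criterion is an eigenvalue of 0 or greater.
```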
The Gram-Schmidt (QR and QL) Algorithm

As noted in the attached paper by Yanovsky, the QR decomposition (also called the QR factorization) of a matrix is a decomposition of the matrix into an orthogonal matrix and a triangular matrix. Note: In this algorithm, the number of rows in the correlation matrix is referenced with the letter k (rather than the letter m, which is used in the notes above).

1. Define the magnitude of X as ||X||, which is the length of X: ||X|| = [x1² + x2² + ... + xk²]^(1/2).
2. Two or more vectors are orthogonal if they all have a length of 1 and are uncorrelated with one another (cosine = 0).
3. Consider two sets of orthogonal vectors, {x1, x2, x3} and {q1, q2, q3}, where the set q is a linear combination of the set x (i.e., q is the same vector, rotated).
4. If the set q is a linear combination of the set x, then q and x have the same eigenvalues.
5. Thus, by creating successive sets of q, the QR algorithm can iteratively arrive at the set of eigenvalues describing x.
6. The QR and QL algorithms are identical, except that the QL algorithm uses the lower rather than the upper half of the correlation matrix. Thus, if one conducts EFA on the same data using two different statistical software packages, wherein one uses the QR and the other uses the QL algorithm, then the parameter estimates will be identical but lined up under different columns (i.e., factors).

Steps in the Gram-Schmidt (QR and QL) Algorithm

1. Calculate rkk = <xk, xk>^(1/2), which is the length of xk.
2. Set qk = (1 / rkk) xk (i.e., Kaiser normalization of the vector xk).
3. Calculate rkj = <xj, qk>, wherein q = x rotated.
4. Replace xj by xj - rkj qk (i.e., remove from xj its projection on qk).
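A minimal sketch of these steps, plus the eigenvalue iteration they support (the correlation matrix is the hypothetical one used earlier; statistical packages use tuned LAPACK routines, but the logic is the same):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Gram-Schmidt QR factorization: A = Q Rmat, Q orthogonal, Rmat upper triangular."""
    A = A.astype(float).copy()
    k = A.shape[1]
    Q = np.zeros_like(A)
    Rmat = np.zeros((k, k))
    for j in range(k):
        Rmat[j, j] = np.linalg.norm(A[:, j])          # r_jj = <x_j, x_j>^(1/2)
        Q[:, j] = A[:, j] / Rmat[j, j]                # q_j = (1 / r_jj) x_j
        for i in range(j + 1, k):
            Rmat[j, i] = A[:, i] @ Q[:, j]            # r_ji = <x_i, q_j>
            A[:, i] = A[:, i] - Rmat[j, i] * Q[:, j]  # remove the projection on q_j
    return Q, Rmat

def qr_eigenvalues(A, iters=200):
    """QR algorithm: iterate A -> Rmat Q; the diagonal converges to the eigenvalues."""
    for _ in range(iters):
        Q, Rmat = gram_schmidt_qr(A)
        A = Rmat @ Q
    return np.diag(A)

R = np.array([[1.00, 0.56, 0.48],
              [0.56, 1.00, 0.42],
              [0.48, 0.42, 1.00]])
print(qr_eigenvalues(R).round(4))
print(np.linalg.eigvalsh(R)[::-1].round(4))   # check against the library routine
```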
Rotation

The Gram-Schmidt Algorithm projects the k eigenvectors within a space of k dimensions. These initial vectors can be difficult to interpret. The purpose of rotation is to find a simpler and more easily interpretable pattern matrix while retaining the number of factors and the final communalities of each of the observed variables in X. Rotation assumes either orthogonal axes (90° angles, indicating no correlation among the factors) or oblique axes (angles other than 90°, indicating correlations among the factors).

There are three approaches to rotation.

Graphic (not commonly used).

Orthogonal: Rotate the axes by visual inspection of the vectors.

Oblique:
1. Establish a reference axis that is perpendicular to a "primary" axis (the vector with the largest eigenvalue).
2. Plot the second vector.
3. Measure θ, the angle between F1 and F2.
4. Cosine θ = the correlation between F1 and F2.

Rotation to a Target Matrix (not commonly used).
1. Specify a pattern matrix (rotated factor pattern) of interest.
2. Rotate the eigenvectors to this matrix.
3. Use hypothesis testing to determine the extent to which the pattern matrix equals the theoretically derived target matrix.

Analytic (commonly used).

Orthogonal:
1. Varimax (most commonly used): maximize the variance of the squared factor loadings within the columns of the factor pattern. That is, maximize the interpretability of the factors. (A code sketch appears at the end of these notes.)
2. Quartimax (not often used): maximize the variance of the squared factor loadings within the rows of the factor pattern. That is, maximize the interpretability of the observed variables.
3. See also: Equimax, Biquartimax.

Oblique:
1. Minimize errors in estimating θ, the angle between F1 and F2.
2. See: Harris-Kaiser (used in SAS), direct oblimin (used in SPSS), Quartimin, Covarimin, Bivarimin, Oblimax, and Maxplane.

Normalization

After rotation using oblique procedures, the resulting vectors are no longer of unit length. Normalization (see: Kaiser normalization) resets the vectors to a standardized length of 1.
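To close, here is a minimal sketch of the varimax criterion described above, using the standard SVD-based update (the unrotated loadings are hypothetical; the iteration cap and tolerance are arbitrary):

```python
import numpy as np

def varimax(L, iters=100, tol=1e-8):
    """Varimax: find an orthogonal rotation T maximizing the variance of the
    squared loadings within each column of L @ T (Kaiser's criterion)."""
    k, m = L.shape
    T = np.eye(m)
    crit_old = 0.0
    for _ in range(iters):
        Lr = L @ T
        # Standard SVD formulation of the varimax update step.
        u, s, vt = np.linalg.svd(L.T @ (Lr**3 - Lr * (Lr**2).sum(axis=0) / k))
        T = u @ vt
        crit_new = s.sum()
        if crit_new - crit_old < tol:
            break
        crit_old = crit_new
    return L @ T, T

# Hypothetical unrotated loadings for six items on two factors.
L = np.array([[0.63, 0.47], [0.59, 0.51], [0.66, 0.44],
              [0.58, -0.45], [0.54, -0.50], [0.61, -0.42]])
rotated, T = varimax(L)
print(rotated.round(2))    # each item now loads mainly on one factor
print((T.T @ T).round(6))  # T is orthogonal: T'T = I, so communalities are preserved
```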