Evaluation of Goodness-of-Fit Indices for Structural Equation Models

Stanley A. Mulaik, Larry R. James, Judith Van Alstine, Nathan Bennett, Sherri Lind, and C. Dean Stilwell
Georgia Institute of Technology

Discusses how current goodness-of-fit indices fail to assess parsimony and hence disconfirmability of a model and are insensitive to misspecifications of causal relations (a) among latent variables when the measurement model with many indicators is correct and (b) when causal relations corresponding to free parameters expected to be nonzero turn out to be zero or near zero. A discussion of the philosophy of parsimony elucidates relations of parsimony to parameter estimation, disconfirmability, and goodness of fit. The AGFI in LISREL is rejected. A method of adjusting goodness-of-fit indices by a parsimony ratio is described. Also discusses less biased estimates of goodness of fit and a relative normed-fit index for testing fit of the structural model exclusive of the measurement model.

By a goodness-of-fit index, in structural equations modeling, we mean an index for assessing the fit of a model to data that ranges in possible value between zero and unity, with zero indicating a complete lack of fit and unity indicating perfect fit. Although chi-square statistics are often used as goodness-of-fit indices, they range between zero and infinity, with zero indicating perfect fit and a large number indicating extreme lack of fit. We prefer to call chi-square and other indices with this property lack-of-fit indices. For a recent discussion of both lack-of-fit and goodness-of-fit indices, see Wheaton (1988).

In this article we evaluate the use of goodness-of-fit indices for the assessment of the fit of structural equation models to data. Our aim is to review their rationales and to assess their strengths and weaknesses. We also consider other aspects of the problem of evaluating a structural equation model with goodness-of-fit indices. For example, are certain goodness-of-fit indices to be used only in certain stages of research (a contention of Sobel & Bohrnstedt, 1985)? Or, how biased are estimates of goodness of fit in small samples? What bearing does parsimony have on assessing the goodness of fit of the model? Can goodness-of-fit indices focus on the fit of certain aspects of a model as opposed to the fit of the overall model? For example, to what extent do current goodness-of-fit indices fail to reveal poor fits in the structural submodel among the latent variables because of good fits in the measurement model relating latent variables to manifest indicators? We describe a goodness-of-fit index now used by some researchers that addresses this problem.

Author note. This article is based in part on a paper presented by the first author to the Society of Multivariate Experimental Psychology at its Annual Meeting in Atlanta, Georgia, October 30 to November 1. We are indebted to Chris Hertzog for comments made on earlier versions of this article, particularly in connection with the relative normed-fit index, which we thought we had invented, only to discover that he had independently invented the same index a short time before. We use his name for the index and add corrections for bias in small samples to its formula. Correspondence concerning this article should be addressed to Stanley A. Mulaik, School of Psychology, Georgia Institute of Technology, Atlanta, Georgia.
Finally, to what extent do goodness-of-fit indices fail to represent misspecifications of a model when hypothesized causal paths turn out to have associated with them zero or near-zero estimates for their structural parameters? Our answer is that current goodness-of-fit indices evaluate only certain aspects of a model and must be used judiciously in connection with other methods for the evaluation of a model.

Survey of Current Indices

Earlier reviews and discussions (Bentler & Bonett, 1980; Sobel & Bohrnstedt, 1985; Specht, 1975; Specht & Warren, 1976) point out that the use of goodness-of-fit indices has grown out of researchers' dissatisfaction with the chi-square statistic traditionally used in assessing the fit of models. Typically, the values of the chi-square statistic for most researchers' models are significant, implying that the researchers must reject their models. And yet, in many of these cases, an inspection of the residuals representing the difference between the elements of the unrestricted sample covariance matrix and those of the estimated hypothetical model covariance matrix for the observed variables reveals that they are small in an absolute sense, giving rise to the impression that the models may not be so theoretically off-target as the significance of the chi-square statistic suggests.

Chi-Square Test Justified Just When Test Has Near-Maximum Power

Bentler and Bonett (1980) sought to qualify use of the chi-square statistic by pointing out that regarding the chi-square statistic as having a chi-square distribution is justified by asymptotic distribution theory only in large samples, precisely when the power of the statistic to detect small discrepancies between the model and the data becomes very large. Many researchers may regard a rejected model as due to a poor specification on their part of theoretical values for the fixed parameters of the model.
But in our opinion, the fault in the model may not always be a misspecification of the parameters of the model but may reflect a failure to satisfy other conditions necessary for the test of the model (James, Mulaik, & Brett, 1982; Mulaik, 1987). For example, the assumption that the data represent a random sample from a multivariate normal distribution may be wrong. Or, although one may assume that the causal relations between the variables in all subjects in a sample are adequately described by the model, there may be a few isolated individuals in the sample for whom the model is not appropriate. Or the assumption that one has achieved a completely closed system of variables may be incorrect, even though one may have included in the model those causal variables that account for a substantial portion of the variance in the dependent variables. Consequently, researchers have desired an index that does not simply tell them that their model does or does not fit the data precisely but also indicates how closely their model fits the data. Even if the model is to be rejected by the chi-square test, a high degree of fit may suggest that much is to be salvaged in the model, because a more careful assessment of the model's assumptions and the manner in which the data conform to these assumptions may reveal where the discrepancy lies.

Indices Patterned After Multiple Correlation

The squared multiple correlation has served as a paradigm to inspire a number of goodness-of-fit indices in causal modeling. For example, Specht (1975) developed a generalized multiple correlation coefficient to indicate how well variation among the exogenous variables in a causal model determines the variation among the endogenous variables. However, such an index has a more special purpose than that of an index of the fit of a whole model to data, and we do not consider it in detail here. However, Specht (1975) also offered an index Q analogous to the generalized multiple correlation coefficient to indicate how well a causal model reproduces the observed covariance matrix:

Q = |S| / |Σ̂j|,

where |S| is the determinant of the actually observed, unconstrained variance-covariance matrix for the observed variables and |Σ̂j| is the determinant of the reproduced covariance matrix under (overidentified) Model j. This index varies between zero and unity, with zero indicating total lack of fit and unity indicating perfect fit. Note that the numerator, which is a function of the observed data to be explained by a model, remains constant for a given set of data, and the denominator varies with different models offered in explanation of the data. This is just the reverse of a coefficient of determination, which has an unconstrained estimate of the variance to be explained in its denominator and varies in value with the values of predicted variance under various models in the numerator. Therefore, the Q index cannot be interpreted as a proportion of, say, total variation accounted for. Furthermore, whereas in principle a zero value for Q indicates complete lack of fit, in practice, against almost any worst fitting null model, Q will be bounded from below by some value greater than zero. This is because the determinant |S| of an empirical covariance matrix S involving only moderately correlated variables and no linear dependencies among variables will almost always be greater than zero, whereas the determinant |Σ̂j| of the covariance matrix for any most restricted null model, say, one that generates a diagonal covariance matrix of zero off-diagonal covariances, will be finite and not always much larger than |S|. It would seem that a rational choice for a goodness-of-fit index would require that a worst fitting model, say, the null model, have a zero value for its goodness-of-fit index. Q does not provide this.
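The floor on Q is easy to see numerically. Here is a minimal sketch (the matrix values and function name are ours, invented for illustration) that computes Q for a no-covariance null model:

```python
import numpy as np

def q_index(S, Sigma_j):
    """Specht's (1975) Q index: |S| / |Sigma_j|, with 1 = perfect fit."""
    return np.linalg.det(S) / np.linalg.det(Sigma_j)

# Even the worst fitting null model (a diagonal Sigma holding the sample
# variances and zero covariances) leaves Q well above zero, because
# |S| > 0 whenever the variables are only moderately correlated.
S = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
Sigma_null = np.diag(np.diag(S))
print(q_index(S, Sigma_null))   # about 0.76, not 0
```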
Normed-Fit Index

Nested models concept. Before we consider the normed-fit index, we must first consider what is meant by a nested sequence of models, because the normed-fit index depends on comparisons of lack of fit between models in a nested sequence of models. Actually, the term a nested sequence of models is used in two ways in the literature. According to Bentler and Bonett (1980), one way, the parameter-nested sequence of models, involves a sequence of models nested strictly according to their parameters; the other way, the covariance matrix-nested sequence of models, uses nested in a less restricted sense to refer to a sequence of models nested according to their covariance matrices. The distinction between these two forms of nesting is given as follows: On the one hand, a parameter-nested sequence of models is a sequence of similar models having the same parameters but ordered according to increasingly more restricted a priori constraints placed on their parameters. For example, a typical model in a nested sequence may have five parameters, b1, b2, b3, b4, b5. Beginning with a completely unrestricted model with no a priori restrictions on its parameters, we may construct an increasingly restricted nested sequence of models by fixing one additional parameter in each succeeding model:

Mh: b1 b2 b3 b4 b5
M1: b1 b2 b3 b4 0
M2: b1 b2 b3 0 0
M3: b1 b2 0 0 0
M4: b1 0 0 0 0
M0: 0 0 0 0 0.

Model Mh is the least restricted model, because all of its parameters are free to range over the set of real numbers in the estimation of the parameters. Succeeding models are more restricted variants of those preceding them. For a model in the sequence (other than the first), those parameters corresponding to fixed parameters in the preceding model in the sequence are also fixed to the same values in the current and subsequent models. Certain additional parameters, corresponding to some of the free parameters in the preceding model, are then fixed to certain values in the current and subsequent models. The remaining parameters, also corresponding to the remaining free parameters of the preceding model, are left free in the current model. Aside from simply fixing parameters to specified point values to place constraints on the parameters, one has other, sometimes less restrictive, forms of constraint. For example, one may constrain a model by requiring that a parameter fall within a specified interval, and in a subsequent, more restricted model, one may require that the parameter then take a specific value in this interval.

One may constrain a parameter by requiring its estimated value to equal the estimated value of another parameter. See Bentler and Bonett (1980) and Steiger, Shapiro, and Browne (1985) for discussion of the various ways of constraining parameters in nested sequences of models.

Now, the most important characteristic of parameter-nested sequences of increasingly restricted models is that they can display increasing (but never decreasing) magnitudes of lack of fit as one assesses the fit of each successive, more restricted model in the sequence to a given set of data (Bentler & Bonett, 1980). Because the fit of structural equation models is assessed by comparing a model's reproduced covariance matrix to the observed sample covariance matrix, any other sequence of models that generates the same sequence of reproduced covariance matrices as does a parameter-nested sequence of models will generate the same sequence of lack-of-fit index values as does the parameter-nested sequence. Consequently, a covariance matrix-nested sequence of models is a sequence of structural equation models that generates the same sequence of reproduced covariance matrices as does some parameter-nested sequence of structural equation models. Many nested sequences of models described in the literature are only covariance matrix nested, deemed by researchers to be more convenient to use as proxies for their more strictly parameter-nested counterparts. For example, each model of a nested sequence going from a saturated model (having perfect fit to the observed covariance matrix) through the measurement model to a more constrained structural model, and then to a null model, can be shown to correspond to a model of a parameter-nested sequence in having the same reproduced covariance matrix as the parameter-nested model. It is important to realize that although a parameter-nested sequence of models may be unique, a covariance matrix-nested sequence of models may correspond to any number of distinct parameter-nested sequences of models, all of which generate the same sequence of covariance matrices.

The importance of nested sequences of models is that they may be used in connection with a rational sequence of tests designed to provide information about distinct aspects of a structural equation model embedded within the sequence. The principle of nested sequences of models is not of recent origin, having been described by Roy (1958), Roy and Bargmann (1958), Kabe (1963), Bock and Haggard (1968), and Mulaik (1972) in connection with step-down procedures in multivariate analysis. Discussions of such nested sequences in structural equations modeling are given in Bentler and Bonett (1980) and James, Mulaik, and Brett (1982), and the reader is referred to these references for details. It is also important to note that sequences of chi-square difference tests comparing differences in lack of fit between adjacent models in a nested sequence of models are asymptotically independent (Steiger, Shapiro, & Browne, 1985). Such tests permit one to isolate where fit and lack of fit arise in a model in the nested sequence.

Normed index for comparing models. Bentler and Bonett (1980) described a normed index for the comparison of fit of two nested models against a given set of data.
The index can be constructed using any one of a number of lack-of-fit indices as the basis for the measurement of lack of fit of a model to data, such as the chi-square index obtained when fitting the model with maximum likelihood or generalized least squares estimation or the sum of squared residuals obtained using least squares estimation. We present Bentler and Bonett's (1980) normed index here with a slight modification designed to give it greater general application:

Δkj = (Fk − Fj)/(F0 − Fh),  (1)

where Fh, Fj, Fk, and F0 are the lack-of-fit indices of four increasingly restricted nested models, Mh, Mj, Mk, and M0, respectively, with M0 known as the null model. In effect, the difference in the lack of fit of the most restricted null model M0 and the least restricted model Mh is used as a norm by which to evaluate the difference between the two intermediate models. (Bentler and Bonett [1980] did not include Fh in the denominator of their index; but our index is equivalent to theirs if one takes Fh to be the lack of fit of a saturated or just-identified model, which has a lack of fit of zero.) Over the total range of increasing a priori restrictions on parameters in the sequence of nested models, beginning with Mh and ending with M0, Δkj gives the proportion of the difference in lack of fit between the most and least restricted models contributed by the difference in restrictions between the two intermediate models, Mj and Mk.

Normed-fit index. A popular index that is a specialization of the normed index for comparing models and that, when used in certain contexts, reflects the proportion of total information "accounted for" by a model is the normed-fit index of Bentler and Bonett (1980), which, with some license in notation on our part, is given as

NFI(j) = (F0 − Fj)/(F0 − Fs),  (2)

where F0 is a lack-of-fit measure, for example, chi-square (for maximum likelihood estimation) or the sum of squared residuals (for unrestricted least squares estimation), when comparing the sample covariance matrix with the hypothetical covariance matrix derived from the parameters of a null model; Fj is the comparable lack-of-fit measure (chi-square or sum of squared residuals) when comparing the sample covariance matrix with the hypothetical covariance matrix derived from the parameters of a less restricted model (Model j); and Fs is the comparable lack-of-fit measure when comparing the sample covariance matrix with the hypothetical covariance matrix derived from the parameters of a saturated or just-identified model. A saturated model has as many parameters to estimate as there are independent elements in the sample covariance matrix from which to derive estimates of those parameters. Consequently, the estimated (reproduced) covariance matrix for the saturated or just-identified model equals the sample covariance matrix. So, the lack-of-fit index Fs for the saturated model equals zero, because there is no discrepancy between the sample covariance matrix and the covariance matrix derived from the estimates of the saturated model's parameters.
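Both indices are simple arithmetic on lack-of-fit values. Here is a minimal sketch of Equations 1 and 2 (the function names and example chi-square values are ours, invented for illustration):

```python
def delta_kj(F_h, F_j, F_k, F_0):
    """Equation 1: the drop in lack of fit from M_j to M_k, normed by
    the total drop from the null model M_0 to the least restricted M_h."""
    return (F_k - F_j) / (F_0 - F_h)

def nfi(F_0, F_j, F_s=0.0):
    """Equation 2: Bentler-Bonett normed-fit index; F_s = 0 when the
    least restricted model is saturated, giving (F_0 - F_j) / F_0."""
    return (F_0 - F_j) / (F_0 - F_s)

print(nfi(F_0=900.0, F_j=45.0))                           # 0.95
print(delta_kj(F_h=0.0, F_j=45.0, F_k=120.0, F_0=900.0))  # ~0.083
```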

The rationale of the normed-fit index NFI(j) is as follows: Given that the models in a nested sequence are all identified, a nested sequence of models can range, in the most extreme case, from a completely saturated model to a completely specified model having no estimated parameters. Frequently, however, the range of a nested sequence of models is less than this. For example, the most restricted model in the sequence, designated the null model M0, may not specify all of its parameters a priori. Now, F0, the lack of fit of the null model, is the maximum possible lack of fit one might obtain in a nested sequence of models ranging from a saturated or just-identified model (with lack of fit Fs equal to zero) through Model j (with lack of fit Fj) to the null model (with lack of fit equal to F0). Thus F0 can serve as a norm by which to evaluate the degree to which Model j reduces lack of fit from the maximum possible lack of fit obtained in the nested sequence of models. The ratio (F0 − Fj)/F0 (dropping the expression for Fs in the denominator because it equals zero) thus represents the proportion of the total lack of fit that has been reduced by the use of Model j. It must be obvious by now that one should never try to use the normed-fit index when the lack-of-fit index of the null model is zero, because then one would divide by zero in the calculation of the normed-fit index. On the other hand, if (in the case in which the lack-of-fit index is a chi-square statistic) the lack of fit of the null model is not significant but still not equal to zero, then performing a sequence of difference chi-square tests on the models of the sequence will converge to acceptance of the null model for the nested sequence in question.

Null model. In using Bentler's normed-fit index, one must determine the null model relevant to one's purposes whose lack-of-fit index will serve as F0 in the normed-fit index. Bentler and Bonett (1980) and James et al. (1982) argued that in the case of many structural equation models, and in common factor analysis models particularly, the aim is only to account for relationships among a set of observed variables. No attempt is made to account for the variances of these variables, which may be arbitrarily scaled. Thus a natural, most restricted null model would be one in which there is no relationship among the observed variables, with the consequence that the hypothetical covariance matrix is a diagonal matrix with fixed zero off-diagonal covariances and unspecified variances in the diagonal. The variances of this diagonal matrix, being free parameters, are estimated by the sample variances of these variables. Thus the sample covariance matrix will differ from the diagonal covariance matrix of the null model only in terms of the nonzero covariances among the variables. It is these differences, representing causal relationships, that are to be explained by a model. And so the difference in lack of fit between the saturated model and the null model represents the range of information that is to be accounted for by any model that seeks to reduce that lack of fit in a parsimonious way. This difference is thus the norm for the index.

On the other hand, the term null model must not be interpreted to always mean a model that postulates no relationships between variables. Bentler and Bonett (1980) regarded a null model as a general, most restricted model against which other less restricted models are to be compared in a nested sequence of models. This concept clearly leaves open the possibility that in some situations, choices for a null model other than the model of no covariances between variables may be appropriate.
Any structural model that fixes a structural parameter to a value other than zero (other than for purposes of arbitrarily specifying the metric of a latent variable) will necessarily be nested within a sequence in which the most restricted model in the sequence also fixes the same parameter to the same nonzero value. Such a null model may generate a covariance matrix that is not a diagonal matrix. Thus one may wish to hazard hypotheses that specify a priori not just the zero parameters but the nonzero values of other parameters as well. In the nested sequence of models, beginning with a saturated model, one may, in successive models, fix parameters to specified values in the order of one's decreasing confidence in these specified values. Although the lack of fit F0 of the most restricted model in the sequence of models may be significant statistically, one may wish to test the fit of some intermediate model in the sequence involving a subset of the fixed parameters of the most restricted null model about which one has the greatest confidence. The normed-fit index for this intermediate model, however, is not to be interpreted as a "proportion-of-total-covariance" index, because the norm of the index does not in this case correspond to a measure of the covariation to be explained. Rather, the norm corresponds to a measure of the discrepancy between the (possibly nondiagonal) covariance matrix generated under the most restricted model and the covariance matrix generated under the saturated or unrestricted model. Thus the normed-fit index in this case is to be interpreted as the proportional reduction in the lack of fit between the null and saturated models achieved by the intermediate model's fixing fewer and estimating more parameters.

Sobel and Bohrnstedt's criticisms. Sobel and Bohrnstedt (1985) criticized the use of the uncorrelated variables null model. They referred to this null model as the "no-factor" null model, because a diagonal covariance matrix would be obtained if there were no common factors among a set of variables. They claimed that it should not be used in other than purely exploratory contexts. They argued that by using the no-factor null model, one might conclude that a relatively large normed-fit index suggests that a model in question is scientifically adequate, but in fact the normed-fit index used in this way will not tell one whether the model represents a substantial improvement in knowledge over what is already available. It only tells one, said Sobel and Bohrnstedt, that the model is substantially better than a no-factor model. For example, we may already know that a set of variables are intercorrelated with other than zero correlations, so using the null model of no factors or zero correlations is to use a baseline model (Sobel and Bohrnstedt's new term for the null model) that is already rejected by current knowledge.

Sobel and Bohrnstedt (1985) argued further that in many instances one should use some other baseline model in lieu of the no-factor null model. The choice of the baseline model depends on the current theoretical context within which the hypothesis is being considered. For example, in a factor analytic context, we may already assume the existence of two factors but consider a less restricted hypothetical model involving additional factors as a hypothesis. Thus a two-factor model could be the baseline or null model rather than the no-factor model against which our hypothetical model is to be compared.
Or, to consider another example, within the context of hypothesizing two factors, we may compare a model, in which certain factor loadings are free to be estimated and are different from one another, against a baseline or null model, in which the same loadings are constrained to be equal to one another. Because a model that is less restricted than the no-factor model is used as the baseline or null model, the normed-fit index is much more sensitive (by having a smaller denominator) to tests of improvement in fit in going from the baseline model to the hypothesized model.

We accept Sobel and Bohrnstedt's (1985) observation that use of the no-factor null model leads to normed-fit indices that may not be very sensitive to important differences between models that are of current theoretical interest. However, we still find the use of the no-factor null model in a goodness-of-fit index useful. The index reveals, in relation to the observed covariance matrix, the proportional degree to which the many relationships observed between variables within that matrix are reproduced by the model. That is useful information for model comparison, especially when the models are not members of the same nested sequence of models. In fact, most competing models in science are not nested one within the other, because they often represent the phenomena in quite different ways with different sets of parameters. Yet it is the more or less absolute, overall fit of these models to the same data that is important information for comparing them. And that is the information that is provided by using a no-factor null model with a normed-fit index. (And similar information is provided by the GFIs of LISREL.) For purposes other than this, we believe that the nested models concept has primarily a limited application, that of evaluating a given hypothetical model by testing different aspects of the model (e.g., in the manner described by Bentler & Bonett, 1980; Hertzog, in press; and James et al., 1982).

On the other hand, we agree with Sobel and Bohrnstedt (1985) that if two models applied to the same data both obtain normed-fit indices in the .90s, the differences in fit between them may indeed be small, involving only differences in a few parameters, and yet the differences may have considerable theoretical importance at a given historical moment. To deal with the detection of these differences, we think the answer is to magnify these differences in the normed-fit index by the use of less restricted null models and norms other than the difference in lack of fit between the no-factor model and the saturated model (which has zero lack of fit). Sobel and Bohrnstedt (1985) were indeed moving in that direction by suggesting the use of other null models. However, they did not consider anchoring the norm of the normed-fit index in the difference between their baseline model and some model in the nested sequence intermediate between the tested model and the saturated model. (Indeed, their formula for the normed-fit index was the same as Bentler & Bonett's [1980] formula, and so they did not suggest that the norm of the index is a difference in lack of fit between two models.) Had Sobel and Bohrnstedt done so, they would have made an index much more sensitive to the differences they wished to detect than their own modified normed-fit indices. We have more to say about this later on in connection with relative normed-fit indices.

Small sample bias of normed-fit index. Marsh, Balla, and McDonald (1988) showed with a Monte Carlo study that the normed-fit index of Bentler and Bonett (1980) belongs to a class of goodness-of-fit indices, the Type 1 incremental-fit indices, which on the average, for samples smaller than 200, significantly underestimate the asymptotic value of the same index.
A Type 1 incremental-fit index is of the form

IFI1(F) = (F0 − Fj)/F0,

where F is some basic lack-of-fit index, such as the maximum likelihood fit function value for the model (FF), χ², χ²/df, the likelihood ratio, or the root-mean-square residual (RMR). Thus F0 is the lack-of-fit index for the null model, and Fj is the lack-of-fit index for Model j. In contrast, Marsh et al. (1988) reported a second class of goodness-of-fit indices, the Type 2 incremental-fit indices, which are of the form

IFI2(F) = (F0 − Fj)/[F0 − E(F | Model j is true)],

where E( ) is the expected value operator. These indices as a class tend to underestimate their asymptotic value in small samples to a much lesser degree, and any Type 2 incremental-fit index based on the fit function FF, χ², χ²/df, or on the Akaike (1987) information criterion (AIC), χ² + 2q(j), or its variant [χ² + 2q(j)]/N as modified by Cudeck and Browne (1983), where q(j) is the number of parameters estimated in the model, was recommended by Marsh, Balla, and McDonald (1988) in place of the corresponding Type 1 incremental-fit index.

Those committed in the past to using a Type 1 incremental-fit index such as the normed-fit index of Bentler and Bonett (1980) may be inclined to resist accepting use of Type 2 incremental-fit indices because a rational analysis of their formulas suggests that they measure slightly different aspects of fit. One may still like the Type 1 index because it indicates the proportion of the information about associations between variables explained by a model. The Type 2 incremental-fit indices seem not to have quite this same interpretation. However, whatever controversy there may be over whether to choose the Type 1 or Type 2 incremental-fit index in large samples, this controversy is made moot by the fact, not noted by Marsh et al. (1988), that each of the Type 2 incremental-fit indices recommended by them asymptotically equals the asymptotic value of its corresponding Type 1 incremental-fit index. Consequently, because a Type 2 incremental-fit index is less biased as an estimator of its asymptotic value, it may be used as a superior estimator of the asymptotic value of the corresponding Type 1 incremental-fit index. For example, when F = χ², the corresponding Type 1 incremental-fit index (the Bentler-Bonett normed-fit index) equals

IFI1(χ²) = (χ0² − χj²)/χ0².

But because χ² = (N − 1)FF, where FF is the value of the maximum likelihood fit function for the model and N is the sample size, we may write

IFI1(χ²) = (FF0 − FFj)/FF0,

canceling the factor (N − 1), which appears implicitly in expressions in both the numerator and denominator. On the other hand, the corresponding Type 2 incremental-fit index equals

IFI2(χ²) = (χ0² − χj²)/(χ0² − df),

because E(χj²) = df when the model is true. Dividing each element by (N − 1) results in

IFI2(χ²) = (FF0 − FFj)/[FF0 − df/(N − 1)].

But asymptotically, as N increases indefinitely, df/(N − 1) approaches zero, with the consequence that IFI2(χ²) asymptotically approaches the asymptotic value of IFI1(χ²). Similar convergence to the corresponding Type 1 incremental-fit index can be shown for the other Type 2 incremental-fit indices recommended by Marsh et al. (1988).
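The bias correction is easiest to see side by side. A minimal sketch (the chi-square values and degrees of freedom are invented for illustration):

```python
def ifi1(chi2_0, chi2_j):
    """Type 1 index (the Bentler-Bonett NFI when F = chi-square)."""
    return (chi2_0 - chi2_j) / chi2_0

def ifi2(chi2_0, chi2_j, df_j):
    """Type 2 index: subtracts E(chi-square | Model j true) = df_j
    from the denominator, reducing small-sample underestimation."""
    return (chi2_0 - chi2_j) / (chi2_0 - df_j)

# With chi-square = (N - 1) * FF, the df_j/(N - 1) term vanishes as N
# grows, so the two indices share the same asymptotic value.
print(ifi1(900.0, 45.0))       # 0.950
print(ifi2(900.0, 45.0, 24))   # ~0.976, slightly higher in small samples
```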

According to the empirically generated tables of Marsh et al. (1988), the correction for bias due to sample size in using the Type 2 incremental-fit index in place of the corresponding Type 1 incremental-fit index does not overcorrect the bias. On the average, for a given sample size, the Type 2 incremental-fit index is a better estimator of the asymptotic Type 1 incremental-fit index. (A mathematical proof to replace this inductive generalization is still unavailable.) But the interpretation can still be that of a Type 1 incremental-fit index. We should note, however, that in some cases, sampling fluctuations may permit some Type 2 incremental-fit indices to exceed unity by a small amount.

GFIs of LISREL

Jöreskog and Sörbom (1984) described several variants of a goodness-of-fit index (GFI) reported in output for the LISREL VI program. However, the formulas given by Jöreskog and Sörbom (1984) for these indices are not given with their rationale. It would seem enlightening to discover a rationale for these indices. The formulas for these indices are as follows: On the one hand,

GFI(ML) = 1 − tr(Σ̂⁻¹S − I)²/tr(Σ̂⁻¹S)²  (3)

is to be used with models whose parameters are estimated by maximum likelihood (ML) estimation, where Σ̂ is the estimated covariance matrix for the observed variables derived from a restricted model; S is the unrestricted, sample covariance matrix (corresponding to the covariance matrix of a saturated model); and tr( ) is the trace or sum of the diagonal elements of the matrix contained within the parentheses. On the other hand, using Σ̂ and S as in Equation 3,

GFI(ULS) = 1 − tr(S − Σ̂)²/tr(S²)  (4)

is to be used with models whose parameters are estimated by unrestricted least squares (ULS) estimation. Assuming that these indices were invented on the basis of a common principle, it seems that this principle was not that on which the normed-fit index in Equation 2 was based. This is most evident in the case of GFI(ML). One must note that the normed-fit index is based on lack-of-fit indices directly derived from an index of lack of fit that is minimized in the process of estimating free parameters. For example, the chi-square statistic used as the lack-of-fit index for maximum likelihood estimation is given as the sample size multiplied by the expression

F(ML) = log|Σ̂| − log|S| + tr(Σ̂⁻¹S) − k,  (5)

which is the loss function to be minimized in maximum likelihood estimation, with k being the number of manifest variables in Σ̂ and S. When the restricted model covariance matrix Σ̂ equals the saturated model covariance matrix S, then log|Σ̂| and log|S| are equal, and tr(Σ̂⁻¹S) equals the trace of an identity matrix that contains k ones in its principal diagonal, with the consequence that F(ML) equals zero. In GFI(ML) we observe the numerator expression tr(Σ̂⁻¹S − I)². Although this is related to the comparison of tr(Σ̂⁻¹S) with the value k in Equation 5, it is not identical to it. The numerator of GFI(ML) is also not the lack-of-fit index of a null model, and for this reason, GFI(ML) should not be regarded as a case of Bentler and Bonett's (1980) normed-fit index.
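As a concrete rendering of Equation 5 (a minimal sketch; the function name is ours, and slogdet is used for numerical stability):

```python
import numpy as np

def f_ml(S, Sigma):
    """Equation 5: the ML loss function; equals zero when Sigma = S,
    since the log determinants cancel and tr(Sigma^-1 S) = tr(I) = k."""
    k = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma - logdet_S + np.trace(np.linalg.solve(Sigma, S)) - k
```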
Analogy with coefficient of determination. The GFIs seem more inspired by analogy with the concept of a coefficient of determination, an index also reported in the LISREL output and discussed by Jöreskog and Sörbom (1984). (In this respect the GFIs are analogous to Specht's [1975] generalized multiple correlation coefficient.) In general, a coefficient of determination may be expressed as 1 − (error variance/total variance). In the case of GFI(ML),

tr(Σ̂⁻¹S)² = tr(Σ̂⁻¹SΣ̂⁻¹S) = tr[(Σ̂^(−1/2)SΣ̂^(−1/2))(Σ̂^(−1/2)SΣ̂^(−1/2))]

represents the sum of the squares of the elements of Σ̂^(−1/2)SΣ̂^(−1/2), the sample covariance matrix S premultiplied and postmultiplied by the "weight" matrix Σ̂^(−1/2). The (weighted) "error" in the fit of Σ̂ to S is given by the elements of the matrix Σ̂^(−1/2)(S − Σ̂)Σ̂^(−1/2). The sum of the squares of this weighted error matrix is given by

tr[Σ̂^(−1/2)(S − Σ̂)Σ̂^(−1/2)]² = tr(Σ̂⁻¹S − I)²,

the numerator of GFI(ML) given in Equation 3. Therefore, the proportion of squared (weighted) error tr[Σ̂^(−1/2)(S − Σ̂)Σ̂^(−1/2)]² in fitting the (weighted) matrix Σ̂^(−1/2)SΣ̂^(−1/2) is given by the ratio tr(Σ̂⁻¹S − I)²/tr(Σ̂⁻¹S)². Subtracting this ratio from unity yields a measure of the proportion of weighted information in S that fits the weighted information in Σ̂.

This can similarly be seen in GFI(ULS). The error in an element of Σ̂ may be determined by how much it differs from the corresponding element in S. The sum of the squares of these errors is given by tr(S − Σ̂)². On the other hand, the sum of the squares of the elements of S, given by tr(S²), gives a measure of the total information to be explained by a model. The ratio tr(S − Σ̂)²/tr(S²) gives the proportion of information in S that is in error in fitting Σ̂ to S. Subtracting this ratio from unity gives the proportion of information in S that is fit by Σ̂. Evidently, the weighting of information in this method of estimation is by the matrix I as opposed to Σ̂^(−1/2). In general, a GFI is given by

GFI = 1 − tr[W^(−1/2)(S − Σ̂)W^(−1/2)]²/tr(W^(−1/2)SW^(−1/2))²,

where W is some weight matrix, depending on the method of estimation. For maximum likelihood, W = Σ̂; for unrestricted least squares, W = I; and for generalized least squares, W = S (Tanaka & Huba, 1985). The theory behind the weighting matrix W for the GFIs was first given by Bentler (1983), who drew on the work of Browne (1982) and Shapiro (1983) on the theory of generalized least squares estimation in proposing a goodness-of-fit index for models estimated by generalized least squares. Bentler seems, however, not to have realized at that time that this theory is a basis for the goodness-of-fit indices of LISREL. Using this theory, Tanaka and Huba (1985) were able to derive the goodness-of-fit indices of LISREL and show that they were optimized by the estimation methods. In addition, they derived a GFI(GLS) index for generalized least squares (GLS) estimation in structural equation modeling, showing that the weight matrix W in this case is S.
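The three indices then fall out of one function. A minimal sketch (our names; the W^(−1/2) weighting is folded into W⁻¹ using the cyclic property of the trace, which leaves the value unchanged):

```python
import numpy as np

def gfi(S, Sigma, W):
    """General GFI: 1 - tr[(W^-1 (S - Sigma))^2] / tr[(W^-1 S)^2].
    By the cyclic property of the trace this equals the W^(-1/2)
    pre- and postmultiplied form given in the text."""
    E = np.linalg.solve(W, S - Sigma)   # W^-1 (S - Sigma)
    T = np.linalg.solve(W, S)           # W^-1 S
    return 1.0 - np.trace(E @ E) / np.trace(T @ T)

# W = Sigma gives GFI(ML); W = I gives GFI(ULS); W = S gives GFI(GLS).
```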

Relation to normed-fit indices. Although the normed-fit index (using the no-factor null model) of Bentler and Bonett (1980) is not based on the same rationale as the GFIs of LISREL, the normed-fit index nevertheless often generates indices similar in magnitude to those of the GFIs. Furthermore, when one bases the norm of the normed-fit index on the no-factor model, the normed-fit index, like the GFI, yields a measure of the proportion of some suitably defined total observed information fit by the model in question. However, using the no-factor null model, the normed-fit index measures the fit of the model to the off-diagonal elements of the covariance matrix, those elements representing the relations between the observed variables. One may wonder why the null model of the normed-fit index is not a null matrix. Although this is a possibility in unrestricted least squares estimation, it is not a possibility in maximum likelihood estimation. The fit function of the maximum likelihood estimation procedure is undefined when the model covariance matrix is a null matrix, because the inverse of a null matrix is not defined. So, the lack-of-fit index for the null matrix as model covariance matrix is undefined in maximum likelihood estimation, and the null matrix cannot be the null model of the normed-fit index in this case. On the other hand, the GFI seeks to measure the fit of the model to the whole covariance matrix. This is possible because in computing the GFI for a model, one does not need to know the fit function value of a null matrix.

Marsh et al. (1988) showed that the GFI of LISREL in small samples does not underestimate its asymptotic value to quite the same extent as does the normed-fit index, although there is still a notable sample size effect. However, the GFI seems affected by violation of the assumptions made by the maximum likelihood estimation method on which it is based, which is seen in the considerably lower average GFI value obtained for the Students' Evaluation of Teaching Effectiveness data reported by Marsh et al. (1988), which represented a large empirical sample of subjects' responses to a teaching evaluation questionnaire. There is reason to believe that the subjects in this sample were not homogeneous for the factor model. Our recommendation is to continue to use the GFIs for the appropriate method of estimation when the conditions for that method are satisfied and when one has samples at least 200 in size.

Tanaka (1987) reported evidence that the normed-fit index varies considerably in value across maximum likelihood and generalized least squares estimations when applied to the same model and data. For example, a model whose free parameters were estimated by maximum likelihood yielded a normed-fit index of .88. When the free parameters of this model were estimated by generalized least squares using the same data, the corresponding normed-fit index was .62. On the other hand, Tanaka found that GFI(ML) and GFI(GLS) as described here yielded the single value of .89 for the same data. One would expect the GFI indices for maximum likelihood estimated models and generalized least squares estimated models to converge asymptotically as sample sizes increase, because the matrices S and Σ̂ should converge as long as the model is correctly specified. However, Tanaka (1987) did not report studies comparing the GFI(ML) and GFI(GLS) indices across possibly misspecified models.
The weight matrix for generalized least squares (GLS) estimation will remain the same across all models, whereas it can vary across the models, especially misspecified ones, in the case of maximum likelihood (ML) estimation. This might produce a difference in the results. Tanaka (personal communication, June 1988) also indicated that the sample size on which this result was based was N = 112. This sample size is well within the range in which the normed-fit index is seriously underestimated. We cannot yet resolve whether his results reflect a small sample effect rather than a major discrepancy between normed-fit indices and GFIs. Studies are needed to resolve this problem. Tanaka (1987) also did not report comparisons with GFI(ULS) indices, which use the fixed weight matrix I. In any case, although one should expect high correlations between the GFI indices (Anderson & Gerbing, 1984), one should use caution in comparing the goodness of fit of models estimated by different methods.

Parsimony and the Problem of Inflated Indices

A drawback of the normed-fit indices formulated along the lines of Bentler and Bonett's (1980) index and the GFI of Jöreskog and Sörbom's (1984) LISREL program was pointed out by James et al. (1982): One can get goodness-of-fit indices approaching unity by simply freeing up more parameters in a model. This is because estimates of free parameters are obtained in such a manner as to get best fits to the observed covariance matrix conditional on the fixed parameters. So, each additional parameter freed to be estimated will remove one more constraint on the final solution, with consequently better fits of the model-reproduced covariance matrix to the sample covariance matrix. A just-identified model with as many parameters to estimate as there are independent elements of the observed variables' covariance matrix has a lack-of-fit index of zero and consequently a normed-fit index of unity. The degrees of freedom of the just-identified model are also zero. Hence James et al. (1982) suggested adjusting the normed-fit index for loss of degrees of freedom by multiplying the normed-fit index NFI(j) for Model j by the ratio of the degrees of freedom, dj, of the model divided by the degrees of freedom, d0, of the null model. This ratio assumes values between zero and unity and was called the parsimony index of the model. The resulting index

PNFI(j) = (dj/d0)NFI(j)

can be called a parsimonious normed-fit index (PNFI). The effect of this multiplication of the normed-fit index by the parsimony index is to reduce the normed-fit index to a value closer to zero. This reduction in value of NFI(j) compensates for the increase in fit of a less restricted model obtained at the expense of degrees of freedom lost in the estimation of free parameters.¹ In some ways, this index has certain affinities to the Akaike (1987) AIC lack-of-fit index, which also penalizes a model for losses in degrees of freedom resulting from estimating more parameters, when comparing models according to their lack of fit to the data. However, the PNFI is a goodness-of-fit index, whereas the AIC index is a lack-of-fit index.

A comparable parsimonious GFI, PGFI, can be formed from a GFI reported by the LISREL program:

PGFI(j) = (dj/d0)GFI(j),

where it should be noted that d0 for a GFI equals k(k + 1)/2, the number of independent elements in the diagonal and off-diagonal of the covariance matrix of observed variables, rather than the number of distinct off-diagonal elements, k(k − 1)/2, as is the case with the normed-fit index.

¹ One may wonder whether multiplying the normed-fit index by the simple ratio (dj/d0) and not by some other nonunit power of this ratio, (dj/d0)^c, where c ≠ 1, provides the optimal adjustment for loss in degrees of freedom. We favor the simple ratio because each degree of freedom lost corresponds to a parameter estimated, and the ratio is simply reduced by 1/d0 for each degree of freedom lost, no matter how many other degrees of freedom have been lost. So, all degrees of freedom (and estimated parameters) are treated equally. However, further study of this issue is warranted.
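Returning to the indices themselves: a minimal sketch of both adjustments (function and parameter names are ours, not from the article or LISREL):

```python
def pnfi(nfi_j, d_j, d_0):
    """Parsimonious normed-fit index: NFI(j) times the parsimony ratio
    d_j/d_0, where d_0 is the null model's degrees of freedom."""
    return (d_j / d_0) * nfi_j

def pgfi(gfi_j, d_j, k):
    """Parsimonious GFI: for the GFI, d_0 = k(k + 1)/2, counting the
    diagonal as well as the off-diagonal elements of the covariance
    matrix of the k observed variables."""
    return (d_j / (k * (k + 1) / 2)) * gfi_j
```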

Parsimony of a Model

Parsimony in the history of science. We believe that in assessing the quality of a model, especially when comparing different models formulated for a given set of data, the goodness of fit of the model should never be taken into account without also taking into account the parsimony of the model. The value of the PNFIs and PGFIs is that they combine information about goodness of fit with information about parsimony into a single index that seeks to compensate for the artifactual increase in fit resulting from estimating more parameters. As a result, these indices may furnish information leading to inferences concerning the acceptance or rejection of a model that differ from inferences based on indices that consider goodness of fit alone.

Historically, parsimony in the formulation of theories has been advocated as a virtue in its own right, depending on no other principle. For example, the 14th-century English nominalist philosopher and theologian William of Occam formulated the parsimony principle in what is known today as Occam's razor: Entities are not to be multiplied except as may be necessary. Occam's razor came to signify that theories should be as simple as possible (Jones, 1952). But insisting on simplicity in theories may at times seem arbitrary. Kant (1781/1900) recognized Occam's razor as a regulative principle of reason impelling us to unify experience as much as possible by means of the smallest number of concepts. But Kant cautioned that the principle is not to be applied uncritically, for against it one could cite another regulative principle, that the varieties of things are not to be rashly diminished if we are to capture the individuality and distinctness of things in experience.

Toward the end of the 19th century, the German and Kantian physicist Heinrich Hertz put forth the view that our theories are not merely summary descriptions of that which is given to us in experience but are constructs or models actively imposed by us onto experience. There are many models we might construct that account for the relations among a given set of objects. Thus, to choose between competing models, we must evaluate them in terms of their logical or formal consistency, their empirical adequacy, their ability to represent more of the essential relations of the objects, and their simplicity (Janik & Toulmin, 1973). Hertz's stress on simplicity was echoed later by other influential physical scientists (cf. Poincaré, 1902/1952). The simplicity of theories in representing experience was often cited as a fundamental principle by scientists in the 1930s and 1940s. For example, George Herbert Mead (1938) argued that one persists in acting according to a hypothesis as long as it works to solve some problem, and one abandons the hypothesis for another only if that other is simpler.
Science pursues the simpler hypothesis because science has found it to be more successful to do so. But Mead's position does not elucidate why science has this success. The quantitative psychologist L. L. Thurstone (1947), an admirer of Mead (Still, 1987), came closer to clarifying the function of parsimony when he argued that "the criterion by which a new ideal construct in science is accepted or rejected is the degree to which it facilitates the comprehension of a class of phenomena which can be thought of as examples of a single construct rather than as individualized events" (L. L. Thurstone, 1947, p. 52). He then argued that in any situation in which a rational equation is proposed as the law governing the relation between two variables, the ideal equation is one in which the number of parameters of the equation that must be estimated is considerably smaller than the number of observations to be subsumed under it. Unfortunately, he did not clarify why the number of parameters to be estimated must be fewer than the number of observations to be subsumed under the curve. Nevertheless, parsimony became a central principle in his use of the method of factor analysis, influencing his concepts of minimum rank, of the overdetermination of factors, and of simple structure. Many of Thurstone's ideas about parsimony presage principles commonly invoked in structural equation modeling.

The Austrian philosopher of science Karl Popper (1934/1961) argued that the principle of parsimony does not stand on its own but rather works in the service of a more fundamental principle, the elimination of false theories by experience. He regarded the simplicity or parsimony of a hypothesis to be essential to evaluating the merits of a hypothesis before and after it is subjected to empirical tests. "The epistemological questions which arise in connection with the concept of simplicity," he said, "can all be answered if we equate this concept with degree of falsifiability" (Popper, 1934/1961, p. 140). Popper grasped to a considerable degree the significance of how, in connection with a given set of observations, a hypothesis with few freely estimated parameters may be subjected to more tests of possible disconfirmation than a hypothesis containing numerous freely estimated parameters. His thoughts on this topic pointed the way to seeing how a degree of freedom in the test of a structural equation model corresponds to an independent condition by which the model may be disconfirmed.

An example. To see how "falsifiability" (or better, "disconfirmability"), parsimony, parameter estimation, and goodness of fit are interrelated concepts, let us see what is involved, say, in fitting a function to a set of data points, a problem considered both by Thurstone (1947, p. 52) and by Popper (1934/1961, p. 138) in connection with parsimony. Our treatment of this problem here is more extensive than theirs and makes the relations among these concepts more perspicuous than Popper's treatment. The principles to be demonstrated in this example readily generalize to structural equations modeling. Suppose we are given five data points plotted in a two-dimensional coordinate system, and our task is to find a graphical representation of a law that corresponds to a curve that passes through these points under the assumption that they are generated according to the same law.

Unfortunately, the data do not determine a unique curve that passes through these points, because an unlimited number of curves may be found that pass through them (Hempel, 1965). And so, as Popper (1934/1961) pointed out, a problem for so-called inductive logics of discovery has always been how to choose the optimal curve that fits the points. Frequently, the advice has been to "choose the simplest curve" that fits the points (Popper, 1934/1961, p. 138). Thus linear functions have been regarded as simpler than quadratic functions, and quadratic functions as simpler than quartic functions, and so on. However, Popper pointed out that it is not self-evident that this principle is necessarily the only or the optimal way of ordering functions according to a concept of simplicity. Furthermore, even finding the simplest curve that fits these points in no way guarantees that one has found the law by which these points were generated. The only adequate test of the curve as an inductive generalization from the data is how well it allows one to extrapolate and interpolate to new data points not used in identifying the curve but presumed to be generated by the same process. Hence Popper argued that we should not be preoccupied in these problems with just finding methods that always find curves that fit a given set of data points optimally; rather, we should be concerned with testing hypothetical curves, whatever their origin, against new data. Furthermore, the more ways we are able to subject a curve to a test against data, and the more the curve passes these tests, the more corroboration we have for use of the curve; and such a curve is preferred.²

Given that a researcher has a certain number of data elements that he or she may hypothesize are generated by the same functional process, parsimony in formulating this hypothesis concerns the proportion of these data elements that will be used in estimating parameters to uniquely identify this function. It is quite possible that no parameters will need to be estimated, that the parameter values are already given from other sources. This is the most parsimonious situation with respect to use of the data at hand, for it leaves all of the data available for testing of the hypothesis. But if the data are consulted to determine the values of some of the parameters, then it must be realized that the data elements used in this determination are then unavailable for testing the model, because the estimated curve will then pass through these data elements necessarily, and one cannot speak with respect to them of a possibility of disconfirmation of the hypothesis.

For example, consider that in the example of five points, we may hypothesize that a quadratic function fits the five points. A quadratic equation is of the form y = a0 + a1x + a2x². We may pick any three of the five points and, using the values of their x and y coordinates, substitute these values into the quadratic equation to form three simultaneous equations linear in the unknown coefficients a0, a1, a2. Solving this system of equations for a0, a1, a2, we then identify an equation that fits the three points exactly. However, we cannot test the resulting equation against these same three points, because the curve based on the equation necessarily passes through them. It would make no sense to talk about a possible lack of fit here.
But there remain two points not used in estimating the parameters of the curve against which we can now test the adequacy of the hypothesis. If the resulting second-degree equation fails to pass through either of these two points, the hypothesis is disconfirmed. Hence each of these two remaining points corresponds to a condition by which the hypothesis may be disconfirmed, and statisticians speak of these conditions as degrees of freedom.³

Suppose we had come to the five data points with a second-degree curve whose three parameters were already completely specified by either previous experience or pure conjecture. In this case we would not need to estimate any parameters, and so all five data points would be available for testing the curve. Here, the degrees of freedom for the test of the curve against data are equal to the number of points, five, against which the curve may be compared for lack of fit. The difference between the previous case, in which we had to estimate three parameters and thus lost three degrees of freedom, and the present case lies in the gain in degrees of freedom, because no parameters have to be estimated. In short, one can use data in two ways: One can use it to estimate parameters of functions and thereby lose it for testing goodness of fit, or one can forgo using it to estimate parameters and use it for testing a prespecified hypothesis for goodness of fit. We now see what is meant by saying, "One loses a degree of freedom for each parameter estimated," which occurs frequently in discussions of structural equation models. We also see why lower degree polynomials seem simpler, because they require using fewer independent elements of the data for the estimation of parameters and leave more of these elements for testing the fit of the model to the data.

² It is easy to believe that here, Popper (1934/1961) finally succumbed to the temptations of the very inductivism he sought to overturn, for he seems to argue that a hypothetical curve is better (more likely to pass tests in the future?) because it has passed more tests. But Popper resisted offering such an inductive justification for why a well-corroborated hypothesis is better. A more appropriate way to see why Popper says a well-corroborated hypothesis is better is to see that this is just what Popper means by a better theory, that he stands ready to offer no further reasons for such a definition. One might say that with such a move, Popper abandoned his avowed intention to provide a purely rational basis for doing science. But maybe there is no such thing as acting in a purely rational way, for as Wittgenstein (1953) pointed out, we always come to a point at which we run out of reasons and must say, "This is simply what we do."

³ The curve-fitting example used here is an oversimplification but makes clear the points to be made. Statisticians usually use all of the data points in estimating the free parameters of a function to be fit to the data but treat the system of simultaneous equations associated with the data points as possibly inconsistent (Schneider, Steeg, & Young, 1982). Estimates of the free parameters are obtained by minimizing some lack-of-fit function applied to all the data points, which has the effect of identifying a component of the data in some subspace of the data space (the "reproduced" data space) from which the free parameters are then uniquely determined.
When the lack-of-fit function used is least squares or one of its variants, it can be shown that the dimensionality of the reproduced data space is (locally) equal to the number of free parameters estimated and is, further, by the projection theorem (Brockwell & Davis, 1987; Deutsch, 1965; Schneider et al., 1982), orthogonal to the residual data space, which in turn has dimensionality equal to the number of data points minus the number of estimated parameters. Degrees of freedom in this case equal the dimensionality of the residual data space. Thus the free parameters are determined by a component of the data not used in assessing the lack of fit, which nevertheless is reproduced perfectly as a function of the estimated parameters.
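A minimal numerical sketch of the five-point example, in Python, makes the bookkeeping concrete; the (x, y) values are hypothetical, chosen here only for illustration. A quadratic is solved exactly from three of the points, so only the two held-out points can register any lack of fit, and each corresponds to one degree of freedom.

```python
import numpy as np

# Five hypothetical (x, y) points, generated from y = 1 + 2x + 0.5x^2
# with small perturbations added to the last two so lack of fit is possible.
points = np.array([
    [0.0, 1.00],
    [1.0, 3.50],
    [2.0, 7.00],
    [3.0, 11.80],
    [4.0, 16.70],
])
fit, test = points[:3], points[3:]

# Three points give three equations linear in (a0, a1, a2) for
# y = a0 + a1*x + a2*x^2; columns of X are 1, x, x^2.
X = np.vander(fit[:, 0], 3, increasing=True)
a = np.linalg.solve(X, fit[:, 1])  # exact solution: zero residuals here

# The fitted curve necessarily passes through the three points used...
assert np.allclose(X @ a, fit[:, 1])

# ...so only the two held-out points can disconfirm the hypothesis.
predicted = np.vander(test[:, 0], 3, increasing=True) @ a
print("residuals at the two held-out points:", test[:, 1] - predicted)
```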

We also see why estimating more parameters increases goodness of fit artifactually: More components of the data are then made to fit the model. We also see why using all of the data elements to determine a curve that fits all of them perfectly is unparsimonious, because it requires a high-degree polynomial and the estimation of many parameters and leaves no data elements available for testing the empirical fit of the curve.

At this point we should be able to see that the simplicity of a model depends not so much on its dimensionality as on the number of free parameters that must be estimated. In the examples we have used so far, the equation to be estimated was of lower degree than the number of points available. But the relevant case that can be extended by analogy to structural modeling is one in which the equation has more parameters than data points, that is, an equation of higher degree than the number of data points. In this case one must fix at least as many parameters as is necessary to reduce the number of free parameters to no more than the number of data points available with which to estimate them. We will preferably fix even more parameters than this, so that we will have fewer parameters to estimate than data points available and thereby be in a position to test the resulting equation against some subset of points (Mulaik, 1987). But simplicity is gauged not by the number of parameters in the equation but by the paucity of parameters that must be estimated or, inversely, by the number of degrees of freedom by which the equation may be tested.

Parsimony ratio. We should also see now why a ratio of the degrees of freedom (of the test) of a model to the total number of relevant degrees of freedom in the data (the parsimony ratio) reflects the parsimony or simplicity of the model. Only in the case of those models that estimate very few of the available parameters will this ratio approach unity. Given two models with equally high goodness-of-fit indices in connection with the same data, the one to be preferred is the one with the higher parsimony ratio, because it has been subjected to more potentially disconfirming tests. Keep in mind that good fit can come about in two ways: (a) by a hypothesis that correctly constrains parameters of the model and (b) by estimating many parameters, which necessarily contributes to good fit no matter what the data are. Consequently, the parsimony ratio reflects an upper bound to the proportion of the independent elements in the data that are relevant to the assessment of goodness of fit.

Parsimonious-fit index is not the same as a goodness-of-fit index. Some researchers have been dismayed when normed-fit indices in the high .90s drop to parsimonious normed-fit indices in the .50s. They have been reluctant to report parsimonious normed-fit indices in the .50s because they believe it suggests that something is wrong with their models. But this need not be the interpretation. The parsimonious normed-fit index is not simply a goodness-of-fit index: Rather, it is an index that seeks to combine two logically interdependent pieces of information about a model, the goodness of fit of the model and the parsimony of the model, into a single index that gives a more realistic assessment of how well the model has been subjected to tests against available data and passed those tests.
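The arithmetic behind this pattern is simple, as the following sketch shows; the chi-square and degrees-of-freedom values are hypothetical, and the function names are ours. It follows the convention used here: NFI = (F0 - Fj)/F0 for the Bentler-Bonett normed-fit index, multiplied by the parsimony ratio of James et al. (1982).

```python
def normed_fit_index(chisq_null: float, chisq_model: float) -> float:
    """Bentler-Bonett normed-fit index: (F0 - Fj) / F0."""
    return (chisq_null - chisq_model) / chisq_null

def parsimony_ratio(df_model: int, df_available: int) -> float:
    """Proportion of the potentially relevant degrees of freedom retained."""
    return df_model / df_available

# Hypothetical values: a k = 20 variable problem has 210 potentially
# relevant degrees of freedom; the tested model retains 110 of them.
chisq_null, chisq_model = 900.0, 45.0
df_model, df_available = 110, 210

nfi = normed_fit_index(chisq_null, chisq_model)
pr = parsimony_ratio(df_model, df_available)
print(f"NFI = {nfi:.3f}, parsimony ratio = {pr:.3f}, PNFI = {pr * nfi:.3f}")
# An NFI of about .95 with a parsimony ratio of about .52 yields a PNFI
# near .50 -- exactly the pattern that, as argued above, need not be read
# as a sign of a defective model.
```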
Steiger (1987) suggested that goodness of fit and parsimony are just two of many dimensions of a multidimensional preference function that individual researchers may use in evaluating models. Researchers might, he suggested, consider attaching different weights to parsimony and goodness of fit. Although this may be so, it must be kept in mind that goodness of fit and parsimony are logically interdependent dimensions: Low parsimony implies high goodness of fit. To assess what is objective and not simply artifact in the goodness of fit of a model, one must consider how parsimonious the model is in its use of the data in achieving that goodness of fit. Weighting parsimony and goodness of fit equally strikes us as the only rational thing to do.

It is not inconceivable to have acceptable models with nonsignificant chi-squares, goodness-of-fit indices in the high .90s, and parsimonious-fit indices in the .50s. A nonsignificant chi-square means that a model is statistically acceptable insofar as the constraints on its parameters are consistent with aspects of the data not used in the estimation of free parameters. Goodness-of-fit indices will always be near unity when chi-square is nonsignificant and may even be near unity when chi-square is significant, indicating that the model with its constrained and estimated parameters reproduces the data very well, although statistically there is a detectable discrepancy. But reproducing the data is not the same as a test of a completely specified model. A moderate parsimonious-fit index corresponding to a high normed-fit index or goodness-of-fit index indicates that much of the good fit, that which is due principally to the estimated values of the free parameters, remains untested, unexplained (from outside the data), and in question.

The parsimonious-fit index should be especially useful when comparing models, for it simultaneously takes into account the goodness of fit of the model to data and the parsimony of the model. Thus one can clearly see the difference in quality of two models that fit the same data equally well when one of the models is far more parsimonious than the other. One can also see the difference in quality of two models that have equal parsimony ratios when one fits the data better than the other.

Inadequacies of Adjusted Goodness-of-Fit Index

Jöreskog and Sörbom (1984) described an adjusted goodness-of-fit index (AGFI) designed to compensate for the increase in goodness of fit of a less restricted model obtained by estimating more free parameters:

AGFI = 1 - (1 - GFI)[k(k + 1)/2d],

where GFI is the goodness-of-fit index, k is the number of manifest variables in the model, and d is the degrees of freedom of the model to which GFI applies. With GFI formulated in analogy with the coefficient of determination, AGFI is apparently formulated in analogy with the correction for bias of a squared multiple correlation coefficient (an index of determination; cf. Guilford, 1950, p. 434):

R̄² = 1 - (1 - R²)[(N - 1)/(N - k - 1)],

where R̄² denotes the squared multiple correlation corrected for bias; R² is the original, uncorrected squared multiple correlation; N is the total number of observations; (N - 1) is the total number of potential degrees of freedom, with one degree of freedom lost in the estimation of the mean of the dependent variable in a null model of no relation (all regression coefficients are fixed equal to zero except the intercept); and (N - k - 1) is the number of degrees of freedom of the prediction model, with one degree of freedom lost for each parameter estimated of the multiple regression equation with k predictors, which has k + 1 parameters. (Guilford's [1950] statement that one degree of freedom is lost in estimating the mean of each variable is misleading.) The correction for bias of the squared multiple correlation has the defect that it can take on negative values when the number of predictor variables k is large in relation to N.

Although the AGFI uses the same information as the parsimonious-fit index of James et al. (1982), it does not use this information in a completely rational way, for the resulting AGFI, like the correction for bias of the squared multiple correlation, can take on negative values, as Jöreskog and Sörbom (1984, p. I.40) noted. It is informative to see how the AGFI could be negative with an example: Suppose GFI = .90 in a model with 20 manifest variables and two degrees of freedom. AGFI in this case equals 1 - (1 - .90)(210/2) = -9.50. A corresponding parsimonious-fit index for this case, obtained by multiplying the parsimony ratio of 2/210 (formulated in relation to a null model that seeks to account for all of the information in the covariance matrix of the manifest variables, which contains 210 distinct elements) by GFI, would equal .0086. Furthermore, for a just-identified or saturated model with zero degrees of freedom and GFI equal to 1.00, AGFI is undefined. But the corresponding parsimonious-fit index would equal zero. On the other hand, the AGFI is not very sensitive to losses in degrees of freedom for models with moderately high degrees of freedom. For example, with GFI = .90, k = 20, and d = 150, AGFI = .86, a reduction of only .04. But 60, or 28.5%, of the 210 potential degrees of freedom have been lost in going to 150 degrees of freedom. A corresponding parsimonious-fit index, assuming a null model with 210 degrees of freedom, would equal (150/210)(.90) = .642. Thus the AGFI index does not have the rational norm of a meaningful zero point, as does the parsimonious-fit index of James et al. (1982). A negative AGFI may be diagnostic of a poor model (as was suggested by one reviewer of this article), but because zero and negative values have no rationale in the formulation of the AGFI, it is difficult to know what further interpretation to give to them.

Computational Formulas for Use With LISREL Output

Because current versions of the LISREL program report several goodness-of-fit indices, including the AGFI, it may be helpful for the researcher to be able to convert these indices into a parsimonious-fit index along the lines of that of James et al. (1982). When the aim is to account for all of the information in the variance-covariance matrix for the observed variables, a parsimonious GFI is given by

PGFI(1) = [2d/k(k + 1)]GFI,

where d is the degrees of freedom of the tested model, k is the number of observed variables in the model, and GFI is the goodness-of-fit index computed by LISREL (see Jöreskog & Sörbom, 1984, p. I.40).
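The two numerical comparisons just given are easy to verify; the following sketch does so, using the AGFI and PGFI formulas stated above (the function names are ours, not LISREL's).

```python
def agfi(gfi: float, k: int, d: int) -> float:
    """Adjusted GFI: AGFI = 1 - (1 - GFI) * [k(k + 1) / (2d)]."""
    return 1.0 - (1.0 - gfi) * (k * (k + 1)) / (2.0 * d)

def pgfi(gfi: float, k: int, d: int) -> float:
    """Parsimonious GFI: PGFI = [2d / (k(k + 1))] * GFI."""
    return (2.0 * d) / (k * (k + 1)) * gfi

# GFI = .90 with k = 20 manifest variables, at two degrees of freedom
# and then at 150 degrees of freedom.
for d in (2, 150):
    print(f"d = {d:3d}: AGFI = {agfi(0.90, 20, d):7.3f}, "
          f"PGFI = {pgfi(0.90, 20, d):5.3f}")
# d =   2: AGFI =  -9.500 (negative, with no rational zero point)
#          PGFI =   0.009 (near its meaningful zero)
# d = 150: AGFI =   0.860 (barely penalized)
#          PGFI =   0.643 (reflects the 28.5% of df given up)
```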
When the aim of one's model is to account for just the relationships between the observed variables, and hence only the covariances between the observed variables, the potential degrees of freedom of a null model whose covariance matrix among the observed variables is a diagonal matrix with free diagonal parameters equal k(k - 1)/2, and we should compute

PNFI2 = {2d/[k(k - 1)]}[(F0 - Fj)/(F0 - d)],

where PNFI2 is the Type 2 parsimonious normed-fit index; d is the degrees of freedom of the model being tested; k is the number of observed variables; Fj is the lack-of-fit index for the model being tested (chi-square for maximum likelihood or the sum of the squared residuals for unrestricted least squares estimation); and F0 is the lack-of-fit index for the null model whose covariance matrix is hypothesized to be a diagonal matrix with free diagonal elements (chi-square or sum of squared residuals, depending on the method of estimation).⁴

In this formula, obtaining F0 may present the most problems, especially when maximum likelihood estimation is used. In that case it is recommended that one simply test a model in which the covariance matrix for the observed variables is hypothesized to be a diagonal covariance matrix with free diagonal variance parameters and let F0 be the chi-square of the test of fit of this model to the sample covariance matrix. In the case of unrestricted least squares estimation,

Fj = RMR²[k(k + 1)/2] = tr(S - Σ̂)²,

F0 = {RMR²[k(k + 1)/2]/[1 - GFI(ULS)]} - V = tr[S - diag(S)]²,

where RMR is the root-mean-square residual reported in the LISREL output (see Jöreskog & Sörbom, 1984, p. I.41), k is the number of observed variables, GFI(ULS) is the goodness-of-fit index reported in the LISREL output, and V is the sum of squared sample variances. These formulas simply take advantage of data provided in the LISREL output to obtain the necessary sums of squared residuals in these indices.

⁴ In the case of ULS estimation, d in this equation should be replaced by E[tr(S - Σ̂)² | the model is true]. Unfortunately, an expression for this term is not now available in a readily usable form, although provisional analysis suggests that it is equal to the sum of the variances of the respective elements of the sample variance-covariance matrix, with each of these variances converging asymptotically to zero as sample size increases indefinitely. It is recommended that one simply set d to zero in this case, realizing that the resulting parsimonious normed-fit index will likely be underestimated on average in small samples.
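A sketch of these conversions follows; all numeric inputs are hypothetical, the function names are ours, and the ULS helper simply restates the F0 formula reconstructed above rather than any routine LISREL itself provides.

```python
def pnfi2(f_null: float, f_model: float, d: int, k: int) -> float:
    """Type 2 parsimonious normed-fit index:
    PNFI2 = {2d / [k(k - 1)]} * [(F0 - Fj) / (F0 - d)]."""
    return (2.0 * d) / (k * (k - 1)) * (f_null - f_model) / (f_null - d)

# Maximum likelihood case: F0 and Fj are the chi-squares of the diagonal
# null model and the tested model (hypothetical values).
print(f"PNFI2 = {pnfi2(f_null=800.0, f_model=95.0, d=80, k=15):.3f}")

def f0_uls(rmr: float, gfi_uls: float, k: int, v: float) -> float:
    """ULS case: F0 = {RMR^2 * [k(k + 1)/2] / [1 - GFI(ULS)]} - V,
    with V the sum of squared sample variances."""
    return (rmr ** 2) * k * (k + 1) / 2.0 / (1.0 - gfi_uls) - v

# Hypothetical RMR, GFI(ULS), and sum of squared variances from output.
print(f"F0 (ULS) = {f0_uls(rmr=0.05, gfi_uls=0.98, k=15, v=12.0):.3f}")
```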

Relative Normed-Fit Indices

The various goodness-of-fit indices described up to now assess the fit of the full structural model's reproduced covariance matrix to the actual observed covariance matrix for the manifest variables of the model. But a little-recognized drawback of these goodness-of-fit indices is that they are usually heavily influenced by the goodness of fit of the measurement model portion of the overall model and reflect, to a much lesser degree, the goodness of fit of the causal model portion of the overall model. It is quite possible to have a model in which the measurement model portion, involving relations of the latent variables to manifest indicator variables, is correctly specified but in which the causal model portion, involving structural relations among the latent variables, is misspecified, and still to have a goodness-of-fit index for the overall model in the high .80s and .90s.

To illustrate, suppose we have some data generated by the model whose path diagram is given in Figure 1.

Figure 1. Model by which the artificial data set was generated, that is, the "correct model."

Suppose further that a researcher hypothesizes the model given in Figure 2. Notice that the researcher has correctly specified the measurement submodel (involving relations between manifest and latent variables) by specifying correctly the number of latent variables for the model and the relations of these latent variables to the manifest variables of the model. But notice also that the researcher has incorrectly specified the causal relations between the latent variables. One would hope that, with the structural submodel of the relations between the latent variables of central theoretical concern, the traditional goodness-of-fit indices would be highly sensitive to the misspecification of the structural submodel as given in connection with Figure 2. But they are not. We used Monte Carlo methods to generate a sample from a multivariate normal distribution whose population covariance matrix was determined by a model consistent with the model in Figure 1 and then tested the model in Figure 2 against these data. We obtained a Type 2 adjusted normed-fit index of .932 for the fit of the model in Figure 2 to the data. (The chi-square statistic for the fit of the model in Figure 2 indicated a significant lack of fit, but our point concerns interpretation of a high goodness-of-fit index.) This index is quite high and would be accepted by many researchers as a very promising fit. It does not differ very much from the Type 2 adjusted normed-fit index of .994 that was obtained when the correct model in Figure 1 was applied to the data (whose chi-square was not significant).
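The Monte Carlo step can be sketched briefly. The small one-factor population below is hypothetical and merely stands in for the Figure 1 model; the point is only how a sample covariance matrix is generated from a model-implied population covariance matrix and handed to one's SEM program.

```python
import numpy as np

rng = np.random.default_rng(1989)

# A hypothetical one-factor measurement model: Sigma = L L' + Psi,
# with unit-variance indicators.
loadings = np.array([0.80, 0.70, 0.60, 0.75])     # latent -> indicators
uniquenesses = 1.0 - loadings ** 2
sigma = np.outer(loadings, loadings) + np.diag(uniquenesses)

# Draw a multivariate normal sample whose population covariance matrix
# is the model-implied sigma, then compute the sample covariance matrix.
n = 300
sample = rng.multivariate_normal(np.zeros(len(loadings)), sigma, size=n)
s = np.cov(sample, rowvar=False)

print(np.round(s, 3))   # this matrix is what the SEM program is given
```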

Figure 2. Misspecified model applied to artificial data.

This disparity between the influence of the fits of the measurement and structural submodels on the goodness-of-fit index for the overall structural model usually arises because, in pursuing the goal of parsimony, the researcher generates a model in which the measurement model portion usually contains the bulk of the parameters of the model. With few latent variables and many manifest indicators for each latent variable, the number of parameters involving relations of manifest indicators to latent variables is much greater than the number of parameters involving relations between the few latent variables. The parameters of the measurement model may then determine the greater portion of the covariances among the manifest variables, especially if the manifest indicator variables are highly reliable indicators of the latent variables.

One way to deal with this problem is with a relative goodness-of-fit index (Hertzog, in press; Lerner, Hertzog, Hooker, Hassibi, & Thomas, 1988). The aim of this index is to assess the relative fit of the structural or causal model among the latent variables independently of the fit of the hypothesized relations of the indicator variables to the latent variables. James et al. (1982), as influenced by Bentler and Bonett (1980), described a nested sequence of models to be used in assessing the fit of a model to data. These are (a) the just-identified or saturated model, (b) the measurement model (a confirmatory factor analysis model used to test the model of relations between latent variables and manifest indicators while leaving relations among the latent variables saturated), (c) the structural relations model that imposes some constraints on the relations among the latent variables, (d) the uncorrelated latent variables model, and (e) the null model of no relations between the manifest variables. This sequence of models is a covariance matrix-nested sequence. The measurement model, as a factor analysis model, does not specify causal relations among the latent variables but corresponds to, and has fit to the data equal to that of, a model in which the causal relations among the latent variables are fully saturated.

Let Fu be the lack-of-fit index (chi-square) for the model of uncorrelated latent variables. We use this model as the null model for construction of a normed-fit index for the structural model. Let Fm be the lack-of-fit index (chi-square) for the confirmatory factor analysis model used to test the measurement model. Let Fj be the lack-of-fit index for the structural relations model of interest. Now define

RNFI(j) = (Fu - Fj)/[Fu - Fm - (dj - dm)]

as the Type 2 adjusted relative normed-fit index for the structural model of causal relations among the latent variables of the full structural equation model, which contains a correction for bias according to principles given by Marsh et al. (1988). Here, the norm for the normed-fit index is the difference in the lack of fit between the uncorrelated latent variables model and the measurement model. A corresponding relative parsimony ratio for the causal model would be given by

RP(j) = (dj - dm)/(du - dm),

where dj are the degrees of freedom of the structural equation model, dm are the degrees of freedom of the confirmatory factor analysis measurement model, and du are the degrees of freedom of the uncorrelated latent variables model. When comparing the fit of different causal models defined on the same latent variables, one would multiply RP(j) by RNFI(j) to get a relative parsimonious-fit index appropriate for assessing how well and to what degree the models explain, from outside the data, all possible relations among the latent variables.
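These two definitions translate directly into code. The sketch below uses hypothetical chi-squares and degrees of freedom for the uncorrelated-latent-variables model (u), the measurement model (m), and the structural model of interest (j); the function names are ours.

```python
def rnfi(f_u: float, f_j: float, f_m: float, d_j: int, d_m: int) -> float:
    """Type 2 adjusted relative normed-fit index:
    RNFI(j) = (Fu - Fj) / [Fu - Fm - (dj - dm)]."""
    return (f_u - f_j) / (f_u - f_m - (d_j - d_m))

def rp(d_j: int, d_m: int, d_u: int) -> float:
    """Relative parsimony ratio: RP(j) = (dj - dm) / (du - dm)."""
    return (d_j - d_m) / (d_u - d_m)

f_u, d_u = 450.0, 95   # uncorrelated latent variables (hypothetical)
f_m, d_m = 48.0, 80    # measurement model (hypothetical)
f_j, d_j = 70.0, 87    # structural model of interest (hypothetical)

r = rnfi(f_u, f_j, f_m, d_j, d_m)
print(f"RNFI(j) = {r:.3f}")                        # fit of causal portion only
print(f"relative parsimonious fit = {rp(d_j, d_m, d_u) * r:.3f}")
```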
For the artificial data generated according to the model in Figure 1, we obtained chi-square, the normed-fit index, the parsimonious normed-fit index, the Type 2 adjusted normed-fit index, the Type 2 parsimonious normed-fit index, the LISREL GFI, the parsimonious GFI, the LISREL adjusted GFI, and the Akaike (1987) AIC for each of the following models when applied to the data: (a) the null model, (b) the uncorrelated factors model, (c) the misspecified model (in Figure 2), (d) the correct model (in Figure 1), (e) the measurement model, and (f) the saturated model. The indices for these models are shown in Table 1.

It is interesting to see that the normed-fit index and the LISREL GFI are quite comparable. However, the GFI of .281 for the null model reflects the fact that for this index, one has already accounted for a portion of the model by providing estimates of the variances, which are not relevant to the normed-fit index. It is also interesting to note that of the various models, the correct model had the highest parsimonious-fit indices. Although the normed-fit indices for the correct model and the measurement model are almost identical and, in fact, higher for the measurement model, the increase in fit at the expense of loss in degrees of freedom (in comparison with those of the correct model) slightly degrades the quality of the measurement model. This is reflected in the higher parsimonious normed-fit index for the correct model. It is also interesting that the Akaike (1987) information criterion (AIC) reached its smallest value for the measurement model rather than for the more constrained correct model.

Using the data in Table 1, we also computed the Type 2 relative normed-fit index for the correct model M1 to be RNFI2(1) = .988. On the other hand, the relative normed-fit index for the misspecified model was RNFI2(2) = .774. Here we see that the relative normed-fit index magnifies the difference in the fit of the causal model portions of the two structural models far better than does the ordinary normed-fit index.

Limitations of Goodness-of-Fit Indices

Goodness-of-fit indices indicate how well a model fits data, even when, statistically, it does not do so perfectly. Many models in science are useful because they fit data well, even though it is known that the fit is not perfect. For example, the idealized models of Newtonian mechanics, involving isolated bodies moving in perfect vacuums or oscillating springs free of internal friction, are regarded as useful, approximate descriptions of physical phenomena, even though careful measurements will reveal that they do not perfectly fit the everyday data to which they are usually applied (Giere, 1985). Psychological theories should be similarly regarded as useful when they fit data well although not perfectly. Goodness-of-fit indices serve to indicate such degrees of fit and reinforce researchers for their efforts when the indices approach unity in value.

However, researchers should realize that goodness-of-fit indices do not assess all aspects of a model's appropriateness for data. Specifically, goodness-of-fit statistics directly assess the viability of overidentifying restrictions, in both the structural and measurement portions of a latent variable model, that evolve from fixing or constraining parameters. However, hypotheses regarding structural coefficients that are predicted to be nonzero in the population but are estimated as free parameters in the model are not directly assessed by goodness-of-fit indices. One can obtain a high goodness-of-fit index value for a model in which certain structural coefficients hypothesized to be nonzero but treated as free parameters turn out to have estimated values of zero. The results contradict one's hypothesis, but the index alone does not indicate this. Thus goodness-of-fit indices should be used only conditionally on a significant chi-square for the appropriate null model (that is, if one rejects the hypothesis that all structural coefficients are simultaneously equal to zero) and on the significance of tests of individual parameters of special salience to a model.

But it is also important to realize that tests of the fit or the lack of fit of a model do not depend on the validity of the model alone (Garrison, 1986; James et al., 1982). This is because most research hypotheses are stated in the following way: If certain foundational theories T1, ..., Tp are true and background conditions C1, C2, ..., Ck are the case and Model X is true, then consequence O should be observed. Now, if consequence O is not observed, this may mean that Model X is false, but it logically can also mean that any number of the foundational theories T1, ..., Tp or background conditions C1, ..., Ck are false while Model X is true. The test of the model cannot logically isolate where the failure to confirm it comes from. On the other hand, if consequence O is observed, this is no guarantee that Model X is true, for it is logically possible that the reason O is observed is because some other model under other background theories and conditions is the case.

Table 1
Chi-Squares, Degrees of Freedom, NFI, PNFI, NFI2, PNFI2, GFI, PGFI, AGFI, and AIC for Models Tested Against the Artificial Example

Model   Description
M0      Null model
Mu      Uncorrelated factors model
M2      Misspecified model (Figure 2)
M1      Correct model (Figure 1)
Mm      Measurement model
Ms      Saturated model

Note. NFI = normed-fit index; PNFI = parsimonious normed-fit index; NFI2 = Type 2 adjusted normed-fit index; PNFI2 = Type 2 parsimonious normed-fit index; GFI = LISREL goodness-of-fit index; PGFI = parsimonious goodness-of-fit index; AGFI = LISREL adjusted goodness-of-fit index; AIC = Akaike (1987) information criterion.

Such logical indeterminacies in the use of experience to confirm or disconfirm hypotheses are dealt with pragmatically by most researchers by embedding their hypotheses in specific theoretical frameworks that they are more or less strongly committed to treat as true (perhaps for good formal as well as empirical reasons) and by seeking in their experimental and observational techniques to assure themselves that the appropriate background conditions are reasonably satisfied. Their decisions to confirm or disconfirm their hypotheses are then made conditional on their assumptions, which may be modified with subsequent thought and experience (Aune, 1970).

We have mentioned that the testing of models generally depends on reasonably establishing that certain background conditions are the case. Discussions of these background conditions as they apply in structural equation modeling are succinctly given in James et al. (1982) and Mulaik (1986, 1987). When performing tests of the fit of a model, it is assumed, for the purposes of eliminating the ambiguity of the test, that these background assumptions are met. The test is not regarded as a test of these assumptions but of the model. It is possible to test these background assumptions in separate studies, but tests of these background assumptions themselves will depend on the satisfaction of other assumptions not assessed by these tests. Consequently, in any research activity there is always an element of faith regarding the reasonableness and appropriateness of one's assumptions. The researcher can only proceed on the basis of his or her assumptions, knowing that whatever conclusions are drawn from research are only provisional and at risk of being either rejected by others who do not share these assumptions or overturned by the results of future research that shows these assumptions to be untenable.

Conclusion

Goodness-of-fit indices are often used to supplement chi-square tests of lack of fit in evaluating the acceptability of structural equation and other models. A high goodness-of-fit index may be an encouraging sign that a model is useful even when it fails to fit exactly on statistical grounds. However, a major limitation of most goodness-of-fit indices now in current use is that index values near unity can give the false impression that much is explained by the constraints on the parameters of the model when in fact the high degree of fit results from freeing most of the parameters so that they can be estimated from the data. In principle, one can get a goodness-of-fit index value of unity by estimating as many parameters in the model as there are independent elements potentially available in the data. Such a model explains nothing, for it has nothing in it from outside the data.
Furthermore, nothing has been confirmed about such a model. A way to compensate for high goodness-of-fit index values obtained at the expense of loss of degrees of freedom is to multiply the index by the parsimony ratio, which in general is the ratio of the degrees of freedom in the test of a model to the total number of potentially relevant degrees of freedom available in the data. The resulting product is called a parsimonious-fit index and is best interpreted as indicating roughly the proportion of the independent elements of data determined by the hypothesized constraints of the model.

However, assessing the acceptability of a model depends on more than considering the parameter constraints used to specify the model. Poor fits may be obtained not because one's specification of parameter constraints is wrong but because one's assumption about how the data are distributed probabilistically is incorrect, leading one, especially in the case of maximum likelihood estimation, to obtain poor estimates of the free parameters. One may also have errors in the data or violate any number of other background assumptions required by one's model (James et al., 1982; Mulaik, 1986, 1987). Violations of these assumptions may have any number of unknown effects on the parsimonious-fit index and must be taken into account when evaluating the model.

Traditional goodness-of-fit indices also are unduly influenced by the good fits in the measurement portions of the model and can yield values in the .90s even when the structural relations among the latent variables of the model are seriously misspecified.

We report a new index, the relative normed-fit index, formulated independently by us and by Hertzog (in press), that allows one to assess the fit of the causal model concerning just the relations between the latent variables of a structural equation model. The principles on which this new index is based allow for the formulation of other indices to magnify differences in degrees of fit in connection with specific aspects of a model. Traditional goodness-of-fit indices also will yield high values when structural parameters reflecting hypothesized causal paths are left free to be estimated and turn out empirically to have near-zero values, which may correspond to zero population values. Such misspecifications in a model must be tested by means other than the traditional goodness-of-fit index.

References

Akaike, H. (1987). Factor analysis and AIC. Psychometrika, 52.
Anderson, J. C., & Gerbing, D. W. (1984). The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis. Psychometrika, 49.
Aune, B. (1970). Rationalism, empiricism, and pragmatism. New York: Random House.
Bentler, P. M. (1983). Some contributions to efficient statistics in structural models: Specification and estimation of moment structures. Psychometrika, 48.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88.
Bock, R. D., & Haggard, E. A. (1968). The use of multivariate analysis in behavioral research. In D. K. Whitla (Ed.), Handbook of measurement and assessment in the behavioral sciences. Reading, MA: Addison-Wesley.
Brockwell, P. J., & Davis, R. A. (1987). Time series: Theory and methods. New York: Springer-Verlag.
Browne, M. W. (1982). Covariance structures. In D. M. Hawkins (Ed.), Topics in applied multivariate analysis. London: Cambridge University Press.
Cudeck, R., & Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioral Research, 18.
Deutsch, R. (1965). Estimation theory. Englewood Cliffs, NJ: Prentice-Hall.
Garrison, J. W. (1986). Some principles of postpositivistic philosophy of science. Educational Researcher, 15.
Giere, R. N. (1985). Constructive realism. In P. M. Churchland & C. A. Hooker (Eds.), Images of science. Chicago: University of Chicago Press.
Guilford, J. P. (1950). Fundamental statistics in psychology and education (2nd ed.). New York: McGraw-Hill.
Hempel, C. G. (1965). Aspects of scientific explanation. New York: Free Press.
Hertzog, C. (in press). On the utility of structural equation models in developmental research. In P. B. Baltes, D. L. Featherman, & R. M. Lerner (Eds.), Life-span development and behavior (Vol. 9). Hillsdale, NJ: Erlbaum.
James, L. R., Mulaik, S. A., & Brett, J. (1982). Causal analysis: Models, assumptions and data. Beverly Hills, CA: Sage.
Janik, A., & Toulmin, S. (1973). Wittgenstein's Vienna. New York: Simon and Schuster.
Jones, W. T. (1952). A history of western philosophy. New York: Harcourt, Brace.
Jöreskog, K. G., & Sörbom, D. (1984). LISREL VI. Mooresville, IN: Scientific Software.
Kabe, D. G. (1963). Stepwise multivariate linear regression. Journal of the American Statistical Association, 58.
Kant, I. (1900). Critique of pure reason (J. M. D. Meiklejohn, Trans.). New York: Wiley. (Original work published 1781)
Lerner, J. V., Hertzog, C., Hooker, K. A., Hassibi, M., & Thomas, A. (1988).
A longitudinal study of negative emotional states and adjustment from early childhood through adolescence. Child Development, 59.
Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indices in confirmatory factor analysis: The effect of sample size. Psychological Bulletin, 103.
Mead, G. H. (1938). The philosophy of the act. Chicago: University of Chicago Press.
Mulaik, S. A. (1972). The foundations of factor analysis. New York: McGraw-Hill.
Mulaik, S. A. (1986). Toward a synthesis of deterministic and probabilistic formulations of causal relations by the functional relation concept. Philosophy of Science, 53.
Mulaik, S. A. (1987). Toward a conception of causality applicable to experimentation and causal modeling. Child Development, 58.
Poincaré, H. (1952). Science and hypothesis (W. J. Greenstreet, Trans.). New York: Dover. (Original work published 1902)
Popper, K. R. (1961). The logic of scientific discovery (translated and revised by the author). New York: Science Editions. (Original work published 1934)
Roy, S. N. (1958). Step-down procedure in multivariate analysis. Annals of Mathematical Statistics, 29.
Roy, S. N., & Bargmann, R. E. (1958). Tests of multiple independence and the associated confidence bounds. Annals of Mathematical Statistics, 29.
Schneider, D. M., Steeg, M., & Young, F. H. (1982). Linear algebra: A concrete introduction. New York: Macmillan.
Shapiro, A. (1983). Asymptotic distribution theory in the analysis of covariance structures (a unified approach). South African Statistical Journal, 17.
Sobel, M. E., & Bohrnstedt, G. W. (1985). Use of null models in evaluating the fit of covariance structure models. In N. B. Tuma (Ed.), Sociological methodology. San Francisco: Jossey-Bass.
Specht, D. A. (1975). On the evaluation of causal models. Social Science Research, 4.
Specht, D. A., & Warren, R. D. (1976). Comparing causal models. In D. R. Heise (Ed.), Sociological methodology. San Francisco, CA: Jossey-Bass.
Steiger, J. H. (1987, October). R.M.S. confidence intervals for goodness of fit in the analysis of covariance structures. Paper presented to the annual meeting of the Society for Multivariate Experimental Psychology, Vancouver, British Columbia, Canada.
Steiger, J. H., Shapiro, A., & Browne, M. W. (1985). On the multivariate asymptotic distribution of sequential chi-square statistics. Psychometrika, 50.
Still, A. (1987). L. L. Thurstone: A new assessment. British Journal of Mathematical and Statistical Psychology, 40.
Tanaka, J. S. (1987). "How big is big enough?": Sample size and goodness of fit in structural equation models with latent variables. Child Development, 58.
Tanaka, J. S., & Huba, G. J. (1985). A fit index for covariance structure models under arbitrary GLS estimation. British Journal of Mathematical and Statistical Psychology, 38.
Thurstone, L. L. (1947). Multiple factor analysis. Chicago: University of Chicago Press.
Wheaton, B. (1988). Assessment of fit in overidentified models. In J. S. Long (Ed.), Common problems/proper solutions. Beverly Hills, CA: Sage.
Wittgenstein, L. (1953). Philosophical investigations (G. E. M. Anscombe, Trans.). New York: Macmillan.

Received August 18, 1987
Revision received April 26, 1988
Accepted August 19, 1988


More information

Department of Economics

Department of Economics Department of Economics On Testing for Diagonality of Large Dimensional Covariance Matrices George Kapetanios Working Paper No. 526 October 2004 ISSN 1473-0278 On Testing for Diagonality of Large Dimensional

More information

Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree of PhD of Engineering in Informatics

Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree of PhD of Engineering in Informatics INTERNATIONAL BLACK SEA UNIVERSITY COMPUTER TECHNOLOGIES AND ENGINEERING FACULTY ELABORATION OF AN ALGORITHM OF DETECTING TESTS DIMENSIONALITY Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree

More information

Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software

Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software STATA Tutorial Professor Erdinç Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software 1.Wald Test Wald Test is used

More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

Study Guide for the Final Exam

Study Guide for the Final Exam Study Guide for the Final Exam When studying, remember that the computational portion of the exam will only involve new material (covered after the second midterm), that material from Exam 1 will make

More information

LOGNORMAL MODEL FOR STOCK PRICES

LOGNORMAL MODEL FOR STOCK PRICES LOGNORMAL MODEL FOR STOCK PRICES MICHAEL J. SHARPE MATHEMATICS DEPARTMENT, UCSD 1. INTRODUCTION What follows is a simple but important model that will be the basis for a later study of stock prices as

More information

An Empirical Study on the Effects of Software Characteristics on Corporate Performance

An Empirical Study on the Effects of Software Characteristics on Corporate Performance , pp.61-66 http://dx.doi.org/10.14257/astl.2014.48.12 An Empirical Study on the Effects of Software Characteristics on Corporate Moon-Jong Choi 1, Won-Seok Kang 1 and Geun-A Kim 2 1 DGIST, 333 Techno Jungang

More information

Presentation Outline. Structural Equation Modeling (SEM) for Dummies. What Is Structural Equation Modeling?

Presentation Outline. Structural Equation Modeling (SEM) for Dummies. What Is Structural Equation Modeling? Structural Equation Modeling (SEM) for Dummies Joseph J. Sudano, Jr., PhD Center for Health Care Research and Policy Case Western Reserve University at The MetroHealth System Presentation Outline Conceptual

More information

2. Simple Linear Regression

2. Simple Linear Regression Research methods - II 3 2. Simple Linear Regression Simple linear regression is a technique in parametric statistics that is commonly used for analyzing mean response of a variable Y which changes according

More information

Is a Single-Bladed Knife Enough to Dissect Human Cognition? Commentary on Griffiths et al.

Is a Single-Bladed Knife Enough to Dissect Human Cognition? Commentary on Griffiths et al. Cognitive Science 32 (2008) 155 161 Copyright C 2008 Cognitive Science Society, Inc. All rights reserved. ISSN: 0364-0213 print / 1551-6709 online DOI: 10.1080/03640210701802113 Is a Single-Bladed Knife

More information

2013 MBA Jump Start Program. Statistics Module Part 3

2013 MBA Jump Start Program. Statistics Module Part 3 2013 MBA Jump Start Program Module 1: Statistics Thomas Gilbert Part 3 Statistics Module Part 3 Hypothesis Testing (Inference) Regressions 2 1 Making an Investment Decision A researcher in your firm just

More information

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical

More information

CREDIT SCREENING SYSTEM SELECTION

CREDIT SCREENING SYSTEM SELECTION 1 JOURNAL OF FINANCIAL AND QUANTITATIVE ANALYSIS JUNE 1976 CREDIT SCREENING SYSTEM SELECTION Michael S. Long * Recent financial literature has discussed how a creditor 1 should determine its investigation

More information

Gerry Hobbs, Department of Statistics, West Virginia University

Gerry Hobbs, Department of Statistics, West Virginia University Decision Trees as a Predictive Modeling Method Gerry Hobbs, Department of Statistics, West Virginia University Abstract Predictive modeling has become an important area of interest in tasks such as credit

More information

Quantitative Methods for Finance

Quantitative Methods for Finance Quantitative Methods for Finance Module 1: The Time Value of Money 1 Learning how to interpret interest rates as required rates of return, discount rates, or opportunity costs. 2 Learning how to explain

More information

Introduction to Regression and Data Analysis

Introduction to Regression and Data Analysis Statlab Workshop Introduction to Regression and Data Analysis with Dan Campbell and Sherlock Campbell October 28, 2008 I. The basics A. Types of variables Your variables may take several forms, and it

More information

Business Statistics. Successful completion of Introductory and/or Intermediate Algebra courses is recommended before taking Business Statistics.

Business Statistics. Successful completion of Introductory and/or Intermediate Algebra courses is recommended before taking Business Statistics. Business Course Text Bowerman, Bruce L., Richard T. O'Connell, J. B. Orris, and Dawn C. Porter. Essentials of Business, 2nd edition, McGraw-Hill/Irwin, 2008, ISBN: 978-0-07-331988-9. Required Computing

More information

Schools Value-added Information System Technical Manual

Schools Value-added Information System Technical Manual Schools Value-added Information System Technical Manual Quality Assurance & School-based Support Division Education Bureau 2015 Contents Unit 1 Overview... 1 Unit 2 The Concept of VA... 2 Unit 3 Control

More information

Online Appendices to the Corporate Propensity to Save

Online Appendices to the Corporate Propensity to Save Online Appendices to the Corporate Propensity to Save Appendix A: Monte Carlo Experiments In order to allay skepticism of empirical results that have been produced by unusual estimators on fairly small

More information

Chapter 1 Introduction. 1.1 Introduction

Chapter 1 Introduction. 1.1 Introduction Chapter 1 Introduction 1.1 Introduction 1 1.2 What Is a Monte Carlo Study? 2 1.2.1 Simulating the Rolling of Two Dice 2 1.3 Why Is Monte Carlo Simulation Often Necessary? 4 1.4 What Are Some Typical Situations

More information

Association Between Variables

Association Between Variables Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi

More information

Assessing the Relative Fit of Alternative Item Response Theory Models to the Data

Assessing the Relative Fit of Alternative Item Response Theory Models to the Data Research Paper Assessing the Relative Fit of Alternative Item Response Theory Models to the Data by John Richard Bergan, Ph.D. 6700 E. Speedway Boulevard Tucson, Arizona 85710 Phone: 520.323.9033 Fax:

More information

MATHEMATICAL METHODS OF STATISTICS

MATHEMATICAL METHODS OF STATISTICS MATHEMATICAL METHODS OF STATISTICS By HARALD CRAMER TROFESSOK IN THE UNIVERSITY OF STOCKHOLM Princeton PRINCETON UNIVERSITY PRESS 1946 TABLE OF CONTENTS. First Part. MATHEMATICAL INTRODUCTION. CHAPTERS

More information

Simple Predictive Analytics Curtis Seare

Simple Predictive Analytics Curtis Seare Using Excel to Solve Business Problems: Simple Predictive Analytics Curtis Seare Copyright: Vault Analytics July 2010 Contents Section I: Background Information Why use Predictive Analytics? How to use

More information

Factor analysis. Angela Montanari

Factor analysis. Angela Montanari Factor analysis Angela Montanari 1 Introduction Factor analysis is a statistical model that allows to explain the correlations between a large number of observed correlated variables through a small number

More information

An introduction to Value-at-Risk Learning Curve September 2003

An introduction to Value-at-Risk Learning Curve September 2003 An introduction to Value-at-Risk Learning Curve September 2003 Value-at-Risk The introduction of Value-at-Risk (VaR) as an accepted methodology for quantifying market risk is part of the evolution of risk

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

Factor Analysis. Chapter 420. Introduction

Factor Analysis. Chapter 420. Introduction Chapter 420 Introduction (FA) is an exploratory technique applied to a set of observed variables that seeks to find underlying factors (subsets of variables) from which the observed variables were generated.

More information

TAKE-AWAY GAMES. ALLEN J. SCHWENK California Institute of Technology, Pasadena, California INTRODUCTION

TAKE-AWAY GAMES. ALLEN J. SCHWENK California Institute of Technology, Pasadena, California INTRODUCTION TAKE-AWAY GAMES ALLEN J. SCHWENK California Institute of Technology, Pasadena, California L INTRODUCTION Several games of Tf take-away?f have become popular. The purpose of this paper is to determine the

More information

Econometrics Simple Linear Regression

Econometrics Simple Linear Regression Econometrics Simple Linear Regression Burcu Eke UC3M Linear equations with one variable Recall what a linear equation is: y = b 0 + b 1 x is a linear equation with one variable, or equivalently, a straight

More information

USING MULTIPLE GROUP STRUCTURAL MODEL FOR TESTING DIFFERENCES IN ABSORPTIVE AND INNOVATIVE CAPABILITIES BETWEEN LARGE AND MEDIUM SIZED FIRMS

USING MULTIPLE GROUP STRUCTURAL MODEL FOR TESTING DIFFERENCES IN ABSORPTIVE AND INNOVATIVE CAPABILITIES BETWEEN LARGE AND MEDIUM SIZED FIRMS USING MULTIPLE GROUP STRUCTURAL MODEL FOR TESTING DIFFERENCES IN ABSORPTIVE AND INNOVATIVE CAPABILITIES BETWEEN LARGE AND MEDIUM SIZED FIRMS Anita Talaja University of Split, Faculty of Economics Cvite

More information

Inflation. Chapter 8. 8.1 Money Supply and Demand

Inflation. Chapter 8. 8.1 Money Supply and Demand Chapter 8 Inflation This chapter examines the causes and consequences of inflation. Sections 8.1 and 8.2 relate inflation to money supply and demand. Although the presentation differs somewhat from that

More information

Example G Cost of construction of nuclear power plants

Example G Cost of construction of nuclear power plants 1 Example G Cost of construction of nuclear power plants Description of data Table G.1 gives data, reproduced by permission of the Rand Corporation, from a report (Mooz, 1978) on 32 light water reactor

More information

AP Physics 1 and 2 Lab Investigations

AP Physics 1 and 2 Lab Investigations AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks

More information