Advanced mediating effects tests, multi-group analyses, and measurement model assessments in PLS-based SEM
Ned Kock

Full reference: Kock, N. (2014). Advanced mediating effects tests, multi-group analyses, and measurement model assessments in PLS-based SEM. International Journal of e-collaboration, 10(1).

Abstract

Use of the partial least squares (PLS) method has been on the rise among e-collaboration researchers. It has also seen increasing use in a wide variety of fields of research. This includes most business-related disciplines, as well as the social and health sciences. The use of the PLS method has been primarily in the context of PLS-based structural equation modeling (SEM). This article discusses a variety of advanced PLS-based SEM uses of critical coefficients such as standard errors, effect sizes, loadings, cross-loadings, and weights. Among these uses are advanced mediating effects tests, comprehensive multi-group analyses, and measurement model assessments.

Keywords: Multivariate Statistics, Partial Least Squares, Structural Equation Modeling, WarpPLS, Mediating Effects, Multi-group Analyses
Introduction

The partial least squares (PLS) method has been increasingly used by e-collaboration researchers, as well as by researchers in related fields, such as information systems. It has also seen increasing use in most business-related areas of investigation, as well as in fields of research related to the social and health sciences. This is particularly the case in the context of PLS-based structural equation modeling (SEM).

This article discusses a variety of advanced tests used in PLS-based SEM, with a focus on tests that employ the following coefficients: standard errors, effect sizes, loadings, cross-loadings, and weights. The discussion presented here builds on the software WarpPLS, version 4.0 (Kock, 2013). The extensive set of outputs generated by this software makes it particularly useful in illustrating the tests discussed (Kock, 2010; 2011a; 2011b), and allows for a straightforward discussion of the different steps involved in those tests.

As will be clear in the following sections, the approach taken in this article is very hands-on. The article is aimed at practitioners who need to conduct multivariate analyses as part of their research, analyses that can sometimes be very complex and difficult to conduct, potentially creating a new source of errors. Hopefully this article will make their work somewhat easier, less time-consuming, and less prone to errors, serving as a handy reference that they can turn to whenever they need to conduct advanced mediating effects tests, comprehensive multi-group analyses, and measurement model assessments.

Using standard errors and effect sizes for path coefficients

Standard errors and effect sizes for path coefficients are provided by WarpPLS in two tables, where one standard error and one effect size are reported for each path coefficient (see Figure 1). The effect sizes are analogous to Cohen's (1988) f-squared coefficients, but are calculated through a novel procedure (described below). Standard errors and effect sizes are provided in the same order as the path coefficients, so that users can easily visualize them and, in certain cases, use them to perform additional analyses.

Standard errors and effect sizes are provided for normal latent variables, as well as for latent variables associated with moderating effects, which are essentially interaction latent variables. These are indicated in column and row names as products between latent variables. For example, the cell whose row is "Effe" and whose column is "Proc*Effi" refers to the moderating effect of the latent variable Proc on the link between Effi and Effe.

The effect sizes are calculated as the absolute values of the individual contributions of the corresponding predictor latent variables to the R-squared coefficients of the criterion latent variable in each latent variable block. Unlike Cohen's (1988) formula for the estimation of effect sizes in multiple regression, which has a correction term in its denominator, WarpPLS calculates the exact values of the individual contributions of the corresponding predictor latent variables to the R-squared coefficient of the criterion latent variable they point at. It does so by freezing parameters in successive calculations with the predictors included and excluded, leading to exact results and thus obviating the need for corrections.

With the effect sizes, users can ascertain whether the effects indicated by path coefficients are small, medium, or large. The values usually recommended as thresholds are 0.02, 0.15, and 0.35, respectively (Cohen, 1988).
Values below 0.02 suggest effects that are too weak to be considered relevant from a practical point of view, even when the corresponding P values are statistically significant; a situation that may occur with large sample sizes.
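As a concrete illustration of how such contributions to R-squared can be obtained and interpreted, the sketch below decomposes the R-squared of a criterion latent variable into per-predictor effect sizes as the product of each standardized path coefficient and the corresponding predictor-criterion correlation, and then applies Cohen's thresholds. This is a minimal illustration of the general idea, not WarpPLS's exact internal procedure, and the variable names and values are hypothetical.

```python
import numpy as np

def effect_sizes(path_coefs, pred_crit_corrs):
    """Approximate f-squared-style effect sizes as each predictor's
    contribution to the criterion's R-squared: |beta_i * r_iy|."""
    path_coefs = np.asarray(path_coefs, dtype=float)
    pred_crit_corrs = np.asarray(pred_crit_corrs, dtype=float)
    return np.abs(path_coefs * pred_crit_corrs)

def classify(f2):
    """Cohen's (1988) guideline values: 0.02 small, 0.15 medium, 0.35 large."""
    if f2 < 0.02:
        return "negligible"
    if f2 < 0.15:
        return "small"
    if f2 < 0.35:
        return "medium"
    return "large"

# Hypothetical standardized path coefficients and predictor-criterion correlations
betas = [0.41, 0.12, -0.07]
corrs = [0.52, 0.25, -0.09]

for name, f2 in zip(["Effi", "Proc", "Proc*Effi"], effect_sizes(betas, corrs)):
    print(f"{name}: f2 = {f2:.3f} ({classify(f2)})")
```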
Figure 1. Standard errors and effect sizes for path coefficients window

Use in tests of mediating effects

One of the additional types of analyses that may be conducted with standard errors is the test of the significance of mediating effects using the approach discussed by Preacher & Hayes (2004) for linear relationships, and by Hayes & Preacher (2010) for nonlinear relationships. The latter approach assumes that nonlinear relationships are force-modeled as linear, which means that the equivalent test using WarpPLS would use warped coefficients with the earlier linear approach discussed by Preacher & Hayes (2004). The classic approach used for testing mediating effects is the one discussed by Baron & Kenny (1986), which does not rely on standard errors.

The mediating effect significance test that employs standard errors for path coefficients is as follows. As discussed below, it applies to both linear and nonlinear path coefficients generated by nonlinear multivariate analysis software tools such as WarpPLS.

Let us assume that we have two latent variables, X and Y, and that we want to test whether another latent variable M is a significant mediator of the relationship between X and Y. One would normally build the following two models to conduct this test, and calculate all of the path coefficients and P values. The first model would be a simple one, with only two variables and one link: X → Y. The second model would be a more complex one, with three variables and three links: X → Y and X → M → Y.

As a first step, the test entails calculating the path coefficients and corresponding P values in the first simple model and in the second more complex model, as well as the related standard errors in the second model. Let us assume that the path coefficient in the first simple model (X → Y) is r. Since this is a model with only two variables, the path coefficient r is also the bivariate correlation between the two variables. The path coefficients in the second model are referred to as follows: a for the X → M link, b for the M → Y link, and c for the X → Y link.
We then need to calculate a modified standard error for the product a·b, sometimes referred to as Sobel's standard error, using the equation below, where S_a and S_b are the standard errors of a and b:

\[ S_{ab} = \sqrt{b^2 S_a^2 + a^2 S_b^2 + S_a^2 S_b^2} \]

Next we calculate the critical ratio T = (a·b)/S_ab. This ratio is then used to calculate the P value associated with the product of coefficients a·b.

In practice one does not need to build the first simple model with WarpPLS, as the software automatically generates all of the bivariate correlations among latent variables. All one has to do is to get r from the latent variable correlations table generated by WarpPLS. Also, at the time of this writing, a spreadsheet was available to automate the calculation of Sobel's standard error and the product of path coefficients, as well as the critical ratios and P values discussed here.

For a mediating effect to be considered significant, the P value associated with the product of coefficients a·b must be significant at a specified level (usually lower than .05). Additionally, the P values associated with a, b, and r must also be significant. If these conditions are met, and the P value associated with c is not significant, we can say that full mediation is occurring. If the P value associated with c is significant, we can say that partial mediation is occurring.

It may be advisable, particularly in nonlinear analyses, to employ a variation of this test that uses the total effect coefficient between X and Y instead of r. The reason is that r may be low because two nonlinear effects cancel each other out in a cross-sectional analysis, while still being associated with a strong lagged effect (e.g., one associated with a time lag). This strong lagged effect will be better captured through the use of the total effect coefficient between X and Y than through r; the latter may in some cases artificially suggest no effect.

An alternative approach to the analysis of mediating effects, which is arguably much less time-consuming and prone to error than the approaches mentioned above, is to rely on the estimation of indirect effects. These indirect effects and related P values are automatically calculated by WarpPLS, and allow for the test of multiple mediating effects at once, including effects with more than one mediating variable.
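A short sketch of this test follows, assuming the analyst has already obtained the coefficients a, b, c, and r and the standard errors of a and b from the software output (the numeric values shown are hypothetical). It mirrors the calculation just described rather than any built-in WarpPLS routine, and uses the common two-tailed normal approximation for the P value.

```python
import math
from scipy.stats import norm

def sobel_test(a, b, se_a, se_b):
    """Sobel-type significance test for the indirect (mediated) effect a*b."""
    s_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2 + se_a**2 * se_b**2)
    t = (a * b) / s_ab
    p = 2 * (1 - norm.cdf(abs(t)))  # two-tailed P value, normal approximation
    return a * b, s_ab, t, p

# Hypothetical coefficients: a (X -> M), b (M -> Y); c and r are assessed separately
ab, s_ab, t, p = sobel_test(a=0.45, b=0.38, se_a=0.08, se_b=0.09)
print(f"indirect effect = {ab:.3f}, SE = {s_ab:.3f}, T = {t:.2f}, P = {p:.4f}")

# Interpretation per the procedure above: if this P value (and the P values of
# a, b, and r) are significant while the P value of c is not, full mediation is
# indicated; if c is also significant, partial mediation is indicated.
```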
Use in multi-group analyses

Another type of analysis that can employ standard errors for path coefficients is what is often referred to as a multi-group analysis. One of the main goals of this type of analysis is to compare pairs of path coefficients for identical models estimated on different samples. An example would be the analysis of the same model with data collected in two different countries. This is usually done by first calculating a pooled standard error (S_p) for each pair of path coefficients in the two models (see, e.g., Keil et al., 2000), using the equation below. In this equation, N_1 is the sample size for the first model, N_2 is the sample size for the second model, S_1 is the standard error for the path coefficient in the first model, and S_2 is the standard error for the path coefficient in the second model.

\[ S_p = \sqrt{\frac{(N_1 - 1)^2}{N_1 + N_2 - 2}\, S_1^2 + \frac{(N_2 - 1)^2}{N_1 + N_2 - 2}\, S_2^2} \times \sqrt{\frac{1}{N_1} + \frac{1}{N_2}} \]

The equation above assumes that the standard errors S_1 and S_2 are not significantly different from one another. That is, it assumes that the absolute difference between those standard errors is indistinguishable from zero; an assumption that is frequently met. If this assumption is violated, S_p should be calculated with a different equation. Using this different equation, shown below, is often referred to as employing the Satterthwaite method. The Satterthwaite method is arguably more generic, but apparently less widely used. It also relies on a simpler equation, as can be seen below. Perhaps the reason why it is less widely used is that it frequently yields slightly higher values for S_p than the pooled standard error method, although the differences are usually very small.

\[ S_p = \sqrt{S_1^2 + S_2^2} \]

Following the calculation of S_p, we then calculate the critical ratio T = (β_1 - β_2)/S_p, where (β_1 - β_2) is the difference between the path coefficients in the first and second models. This ratio is then used to calculate the P value associated with the difference between the path coefficients. At the time of this writing, a spreadsheet was available to automate the calculation of the pooled and Satterthwaite standard errors, the critical ratios, and the associated P values.

The procedure just discussed focuses on path coefficients, which are structural model coefficients. This procedure should also be employed with weights. While with path coefficients researchers may be interested in finding statistically significant differences, with weights the opposite is typically the case: they will want to ensure that differences are not statistically significant. The reason is that significant differences between path coefficients can be artificially caused by significant differences between weights in different models. Such differences may result from a form of common method bias due to questionnaire translation problems. This is a nontrivial issue in multi-group studies where data are collected in different countries with different languages and cultures. Common method bias can be tested based on full collinearity VIFs (Kock & Lynn, 2012), but only for a given sample. With two separate samples from different countries, for example, common method bias could go unnoticed and significantly bias the results of a multi-group analysis.

Let us assume that a study used the same questionnaire in two different countries, where the questionnaire was translated from one language to another. The translation may have led to a change in meaning for some question-statements, which would be reflected in different weights. These different weights could then artificially inflate differences between path coefficients, suggesting between-country differences. To rule out this possibility, one needs to ensure equivalence of measurement models, which would be indicated by equivalent weights, before structural model elements such as paths are compared. Here P values are expected to be greater, as opposed to lower, than a certain threshold. The recommended threshold is .10 for a conservative test. That is, P values should be greater than .10 for the conclusion that no significant differences exist. This threshold of .10 is twice the .05 threshold used to ascertain that a significant difference exists. Both are based on one-tailed tests. A more relaxed approach would be to employ the same threshold of .05 for the tests involving both path coefficients and weights.
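The sketch below implements both standard error variants and the associated critical ratio, using hypothetical path coefficients, standard errors, and sample sizes. The pooled-error formula follows the reconstruction above, and the degrees of freedom are taken as N_1 + N_2 - 2, a common choice that the article does not spell out.

```python
import math
from scipy.stats import t as t_dist

def pooled_se(se1, se2, n1, n2):
    """Pooled standard error for a path coefficient difference (Keil et al., 2000 style)."""
    core = ((n1 - 1)**2 * se1**2 + (n2 - 1)**2 * se2**2) / (n1 + n2 - 2)
    return math.sqrt(core) * math.sqrt(1.0 / n1 + 1.0 / n2)

def satterthwaite_se(se1, se2):
    """Satterthwaite-style standard error for the difference."""
    return math.sqrt(se1**2 + se2**2)

def path_difference_test(beta1, beta2, se, df, one_tailed=True):
    """Critical ratio and P value for the difference between two path coefficients."""
    t_ratio = (beta1 - beta2) / se
    p = t_dist.sf(abs(t_ratio), df)
    return t_ratio, (p if one_tailed else 2 * p)

# Hypothetical two-country comparison of one path coefficient
beta1, se1, n1 = 0.42, 0.09, 180
beta2, se2, n2 = 0.21, 0.11, 150
df = n1 + n2 - 2

for label, se in (("pooled", pooled_se(se1, se2, n1, n2)),
                  ("Satterthwaite", satterthwaite_se(se1, se2))):
    t_ratio, p = path_difference_test(beta1, beta2, se, df)
    print(f"{label}: SE = {se:.3f}, T = {t_ratio:.2f}, one-tailed P = {p:.4f}")
```

The same calculation can be reused for weights, in which case the analyst hopes the resulting P values exceed the .10 (or .05) threshold discussed above, indicating measurement model equivalence.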
Some would argue that loadings should be used as well in measurement model equivalence tests, in addition to weights, but in practice using loadings in addition to weights in these tests seems to lead to redundant results.

Using combined loadings and cross-loadings

Combined loadings and cross-loadings are provided by WarpPLS in a table in which each cell refers to an indicator-latent variable link (see Figure 2). Latent variable names are listed at the top of each column, and indicator names at the beginning of each row. In this table, the loadings are from a structure matrix (i.e., unrotated), and the cross-loadings are from a pattern matrix (i.e., rotated).

Figure 2. Combined loadings and cross-loadings window

As with other coefficients discussed earlier, indicator loadings and cross-loadings are provided for normal latent variables, as well as for latent variables associated with moderating effects. These are indicated in column and row names as products between latent variables and between indicators, respectively. For example, the column "Proc*Effi" refers to an interaction latent variable, which itself is associated with one or more moderating effects of the latent variable Proc on the link or links between Effi and one or more latent variables.

Since loadings are from a structure matrix, and thus unrotated, they are always within the -1 to 1 range. This obviates the need for a normalization procedure to avoid the presence of loadings whose absolute values are greater than 1. The expectation here is that loadings, which are shown within parentheses, will be high, and that cross-loadings will be low.

WarpPLS uses the PLS regression algorithm as its default (Kock, 2012). This algorithm does not allow the structural (a.k.a. inner) model to influence the measurement (a.k.a. outer) model, which tends to minimize collinearity and avoid other problems. A property of PLS regression is that, when only two indicators are used for one latent variable, their unrotated loadings are reported as being the same. This is not an indication of a problem, but the repetition in the table sometimes causes confusion among users of the software. Kaiser normalization can be employed to avoid these repeated values (this should be done automatically by the software starting in version 4.0 of WarpPLS). Through a Kaiser normalization, each row of a table of loadings and cross-loadings is divided by the square root of its communality. This has the effect of making the sum of squared values in each row add up to 1.
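A minimal sketch of this row-wise Kaiser normalization is shown below. The loading matrix is hypothetical, and the snippet only illustrates the arithmetic described above (rows rescaled so that their squared entries sum to 1), not WarpPLS's internal implementation.

```python
import numpy as np

def kaiser_normalize_rows(loadings):
    """Divide each row of a loadings/cross-loadings table by the square root
    of its communality (the row's sum of squared values)."""
    loadings = np.asarray(loadings, dtype=float)
    communalities = np.sum(loadings**2, axis=1, keepdims=True)
    return loadings / np.sqrt(communalities)

# Hypothetical rows: two indicators of the same latent variable plus cross-loadings
raw = np.array([
    [0.62, 0.10],   # indicator 1: loading on its own LV, cross-loading on another
    [0.62, -0.05],  # indicator 2 of the same LV (repeated unrotated loading)
])
normalized = kaiser_normalize_rows(raw)
print(normalized)
print(np.sum(normalized**2, axis=1))  # each row's squared values now sum to 1
```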
P values are also provided for indicator loadings associated with all latent variables. These P values are often referred to as validation parameters of a confirmatory factor analysis, since they result from a test of a model in which the relationships between indicators and latent variables are defined beforehand. Conversely, in an exploratory factor analysis, relationships between indicators and latent variables are not defined beforehand, but inferred based on the results of a factor extraction algorithm. The principal components analysis algorithm is one of the most popular of these algorithms, even though it is often classified as being outside the scope of classical factor analysis.

Contrary to popular belief, PLS-based SEM is not a component-based form of SEM, at least not in the sense of principal components analysis. Often one sees researchers refer to PLS-based SEM as an analysis method that has two main stages: a principal components analysis, whereby weights and loadings are calculated, followed by a path analysis. This is incorrect. PLS-based SEM has two main stages: a PLS regression analysis, whereby weights and loadings are calculated, followed by a path analysis. Instead of PLS regression, variations known as PLS modes may be employed, but not a principal components analysis.

For research reports, users will typically use the table of combined loadings and cross-loadings provided by WarpPLS when describing the convergent validity of their measurement instrument. A measurement instrument has good convergent validity if the question-statements (or other measures) associated with each latent variable are understood by the respondents in the same way as they were intended by the designers of the question-statements. In this respect, two criteria are recommended as the basis for concluding that a measurement model has acceptable convergent validity: that the P values associated with the loadings be lower than 0.05, and that the loadings themselves be equal to or greater than 0.5 (Hair et al., 1987; 2009). Indicators for which these criteria are not satisfied may be considered for removal or for reassignment to other latent variables. These criteria do not apply to formative latent variable indicators, which are assessed in part based on the P values associated with indicator weights.

If the offending indicators are part of a moderating effect, then one should consider removing the moderating effect if it does not meet the requirements for formative measurement (discussed later). As previously noted, moderating effect latent variable names are displayed in the table as product latent variables (e.g., "Effi*Proc"). Also as noted earlier, moderating effect indicator names are displayed in the table as product indicators (e.g., "Effi1*Proc1"). High P values for moderating effects, to the point of being nonsignificant at the 0.05 level, may suggest multicollinearity problems, which can be further checked based on the latent variable coefficients generated by WarpPLS; more specifically, the full collinearity variance inflation factors (VIFs).
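The convergent validity screening described above reduces to a simple filter over the loadings table. The sketch below flags indicators whose loading falls below 0.5 or whose P value is 0.05 or higher, using a small hypothetical table; the indicator names and values are illustrative only.

```python
import pandas as pd

# Hypothetical combined loadings (on the assigned latent variable) and their P values
table = pd.DataFrame(
    {"loading": [0.81, 0.47, 0.72, 0.58], "p_value": [0.001, 0.210, 0.004, 0.032]},
    index=["Effi1", "Effi2", "Proc1", "Proc2"],
)

# Convergent validity criteria: loading >= 0.5 and P < 0.05
offending = table[(table["loading"] < 0.5) | (table["p_value"] >= 0.05)]
print("Indicators to consider for removal or reassignment:")
print(offending)
```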
Some degree of collinearity is to be expected with moderating effects, since the corresponding product variables are likely to be correlated with at least their component latent variables. Moreover, moderating effects add nonlinearity to models, which can in some cases compound multicollinearity problems.
Because of these and other related issues, moderating links should be included in models with caution.

Standard errors are also provided for the loadings, in the column labeled "SE", for indicators associated with all latent variables. They can be used in specialized tests. Among other purposes, these standard errors can be used in multi-group analyses, with the same model but different subsamples, to ascertain measurement model equivalence. However, as mentioned earlier, if measurement model equivalence is tested based on weights, then testing it using loadings may be redundant.

Using pattern loadings and cross-loadings

Pattern loadings and cross-loadings are provided by WarpPLS in a table in which each cell refers to an indicator-latent variable link (see Figure 3). Latent variable names are listed at the top of each column, and indicator names at the beginning of each row. In this table, both the loadings and the cross-loadings are from a pattern matrix (i.e., rotated).

Figure 3. Pattern loadings and cross-loadings window

Since these loadings and cross-loadings are from a pattern matrix, they are obtained after the transformation of a structure matrix through a widely used oblique rotation frequently referred to as Promax. The structure matrix contains the Pearson correlations between indicators and latent variables, which are not particularly meaningful prior to rotation in the context of measurement instrument validation. Rotated loadings and cross-loadings are particularly useful in the visual identification of mismatches between indicators and latent variables. Such mismatches are usually associated with low loadings and high cross-loadings.

Because an oblique rotation is employed, in some cases loadings may be higher than 1 (Rencher, 1998). This could be a hint that two or more latent variables are collinear, although this is not necessarily the case; better measures of collinearity among latent variables are the full collinearity VIFs reported by WarpPLS together with other latent variable coefficients.
The main difference between oblique and orthogonal rotation methods is that the former assume that there are correlations, some of which may be strong, among latent variables. A case can easily be made in favor of oblique rotation methods in SEM analyses, because by definition latent variables are expected to be correlated; without correlations among latent variables, no path coefficient would be significant. Technically speaking, it is possible for a research study to hypothesize only neutral relationships between latent variables, but one does not usually find published examples of research studies where this happens.

Using structure loadings and cross-loadings

Structure loadings and cross-loadings are provided by WarpPLS in a table in which each cell refers to an indicator-latent variable link (see Figure 4). Latent variable names are listed at the top of each column, and indicator names at the beginning of each row. In this table, both the loadings and the cross-loadings are from a structure matrix (i.e., unrotated). Often these are the only loadings and cross-loadings provided by other PLS-based SEM software tools.

Figure 4. Structure loadings and cross-loadings window

As the structure matrix contains the Pearson correlations between indicators and latent variables, this matrix is not particularly meaningful or useful prior to rotation in the context of collinearity assessment or measurement instrument validation. Here the unrotated cross-loadings tend to be fairly high, even when the measurement instrument passes widely used validity and reliability test criteria. Still, some researchers recommend using this table as well to assess convergent validity, by following two criteria: that the cross-loadings be lower than 0.5, and that the loadings be equal to or greater than 0.5 (Hair et al., 1987; 2009). Note that the loadings here are the same as those provided in the combined loadings and cross-loadings table. The cross-loadings, however, are different.
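Since structure loadings and cross-loadings are simply Pearson correlations between indicators and latent variable scores, they can be reproduced directly from the data, as in the sketch below. The indicator data and latent variable scores are hypothetical stand-ins for software output; the point is only to show how the structure matrix is defined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical indicator data (n cases x 4 indicators) and latent variable scores
n = 200
indicators = rng.standard_normal((n, 4))
lv_scores = np.column_stack([
    indicators[:, :2].mean(axis=1),  # stand-in score for LV1
    indicators[:, 2:].mean(axis=1),  # stand-in score for LV2
])

def structure_matrix(indicators, lv_scores):
    """Pearson correlations between each indicator (rows) and each latent variable (columns)."""
    x = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    y = (lv_scores - lv_scores.mean(axis=0)) / lv_scores.std(axis=0)
    return x.T @ y / len(x)

print(np.round(structure_matrix(indicators, lv_scores), 3))
```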
Using indicator weights

Indicator weights are provided by WarpPLS in a table, much in the same way as indicator loadings are (see Figure 5). All cross-weights are zero because of the way they are calculated through PLS regression. Each latent variable score is calculated as an exact linear combination of its indicators, where the weights are multiple regression coefficients linking the indicators to the latent variable.

Figure 5. Indicator weights window

P values are provided for weights associated with all latent variables. These values can also be seen, together with the P values for loadings, as the result of a confirmatory factor analysis. In research reports, users may want to report these P values as an indication that formative latent variable measurement items were properly constructed. This also applies to moderating latent variables that pass criteria for formative measurement when they do not pass criteria for reflective measurement. As in multiple regression analysis (Miller & Wichern, 1977; Mueller, 1996), it is recommended that weights with P values lower than 0.05 be considered valid items in a formative latent variable measurement item subset. Formative latent variable indicators whose weights do not satisfy this criterion may be considered for removal.

The reference to multiple regression made here is due to the fact that the weights linking indicators to their latent variables, at the measurement model level, are equivalent to the standardized partial regression coefficients linking independent to dependent variables in a typical multiple regression model. The same can generally be said with respect to the path coefficients in individual latent variable blocks. There are indeed many parallels between PLS-based SEM and multiple regression analyses.

With the P values provided for weights, users can also check whether moderating latent variables satisfy validity and reliability criteria for formative measurement, if they do not satisfy criteria for reflective measurement. This can help users demonstrate validity and reliability in hierarchical analyses involving moderating effects, where double, triple, etc. moderating effects are tested. For instance, moderating latent variables can be created, added to the model as standardized indicators, and then their effects modeled as being moderated by other latent variables; an example of double moderation.
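To make the multiple regression parallel concrete, the sketch below recovers indicator weights by regressing a latent variable score on its own (centered) indicators with ordinary least squares. The data are hypothetical, and the snippet illustrates the definition of weights given above rather than the PLS regression algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical centered indicators of one latent variable
n = 200
x = rng.standard_normal((n, 3))
x = x - x.mean(axis=0)

# The latent variable score is an exact linear combination of its indicators
lv = x @ np.array([0.5, 0.3, 0.2])
lv = lv / lv.std()  # standardize the score

# Weights = multiple regression coefficients of the score on its indicators
weights, *_ = np.linalg.lstsq(x, lv, rcond=None)
print(np.round(weights, 3))
print(np.allclose(x @ weights, lv))  # the regression reproduces the score exactly
```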
In addition to P values, VIFs are also provided for the indicators of all latent variables, including moderating latent variables. These can be used for indicator redundancy assessment. In reflective latent variables, indicators are expected to be redundant. This is not the case with formative latent variables. In formative latent variables, indicators are expected to measure different facets of the same construct, which means that they should not be redundant.

A VIF threshold of 3.3 has been recommended in the context of PLS-based SEM in discussions of formative latent variable measurement (Cenfetelli & Bassellier, 2009; Petter et al., 2007). A rule of thumb rooted in the use of WarpPLS for many SEM analyses in the past suggests an even more conservative approach: capping VIFs at 2.5 for indicators used in formative measurement leads to improved stability of estimates. The multivariate analysis literature, however, tends to gravitate toward higher VIF thresholds, such as 5 and 10. One of the reasons for this is that this literature tends to refer to directly measured variables, not latent variables; particularly not latent variables resulting from collinearity minimization algorithms such as PLS regression. In models with latent variables measured through single indicators, which are not true latent variables, the use of higher VIF thresholds may well be justified.

Also, capping VIFs at 2.5 or 3.3 may in some cases severely limit the number of possible indicators available. Given this, it is recommended that VIFs be capped at 2.5 or 3.3 if this does not lead to a major reduction in the number of indicators available to measure formative latent variables. One example would be the removal of only 2 indicators out of 16 by the use of this rule of thumb. Otherwise, the criteria below should be employed. Two criteria, one more conservative and one more relaxed, are recommended by the multivariate analysis literature in connection with VIFs in this type of context. More conservatively, it is recommended that VIFs be lower than 5; a more relaxed criterion is that they be lower than 10 (Hair et al., 1987; 2009; Kline, 1998).

High VIFs usually occur for pairs of indicators in formative latent variables, and suggest that the indicators measure the same facet of a formative construct. This calls for the removal of one of the indicators from the set used for the formative latent variable's measurement. These criteria are generally consistent with formative latent variable theory (see, e.g., Diamantopoulos, 1999; Diamantopoulos & Winklhofer, 2001; Diamantopoulos & Siguaw, 2006). Among other characteristics, formative latent variables are expected, often by design, to have many indicators. Yet, given the nature of PLS regression and related algorithms, indicator weights will normally go down as the number of indicators goes up, as long as those indicators are somewhat correlated, and thus P values will normally go up as well. Moreover, as more indicators are used to measure a formative latent variable, the likelihood that one or more will be redundant increases. This will be reflected in high VIFs.

As with indicator loadings, standard errors are also provided for the weights, in the column labeled "SE", for indicators associated with all latent variables. These standard errors can be used in specialized tests. Among other purposes, they can be used in multi-group analyses, with the same model but different subsamples.
As discussed earlier, users may want to compare the measurement models to ascertain equivalence, using a multi-group comparison technique such as the one presented above, and thus ensure that any observed between-group differences in structural model coefficients, particularly in path coefficients, are not due to measurement model differences.
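The indicator-level VIFs used in the redundancy assessment above can be computed by regressing each indicator on the remaining indicators of the same latent variable, as sketched below with a hypothetical formative block. This illustrates the standard VIF definition, 1/(1 - R²), rather than WarpPLS's own output.

```python
import numpy as np

def indicator_vifs(block):
    """VIF for each indicator in a block: regress it on the other indicators
    of the same latent variable and compute 1 / (1 - R^2)."""
    block = np.asarray(block, dtype=float)
    n, p = block.shape
    vifs = []
    for j in range(p):
        y = block[:, j]
        x = np.column_stack([np.ones(n), np.delete(block, j, axis=1)])
        coefs, *_ = np.linalg.lstsq(x, y, rcond=None)
        resid = y - x @ coefs
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

# Hypothetical formative block: the third indicator largely duplicates the first
rng = np.random.default_rng(2)
base = rng.standard_normal((200, 2))
block = np.column_stack([base[:, 0], base[:, 1],
                         base[:, 0] + 0.3 * rng.standard_normal(200)])

for j, vif in enumerate(indicator_vifs(block), start=1):
    flag = "redundant?" if vif > 3.3 else "ok"
    print(f"indicator {j}: VIF = {vif:.2f} ({flag})")
```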
Discussion and conclusion

The PLS method has been increasingly used by e-collaboration researchers, as well as by researchers in other fields. This has been particularly true in the context of SEM, although it has also been the case in more basic tests such as variations of ANOVA, ANCOVA, MANOVA, MANCOVA, multiple regression, and path analysis. Conceptually, these more basic tests can all be seen as special cases of SEM.

This article discussed a variety of tests that are routinely used in the context of PLS-based SEM, and that can also be selectively used in more basic tests. The focus of the discussion was on tests that employ critical coefficients such as standard errors, effect sizes, loadings, cross-loadings, and weights. Several recommendations were made here. With the effect sizes, users can ascertain whether the effects indicated by path coefficients are small, medium, or large; the values usually recommended as thresholds are 0.02, 0.15, and 0.35, respectively. Standard errors can be used in advanced tests of mediating effects and in multi-group analyses. Loadings and cross-loadings can be used in various tests, particularly validity tests. This applies to rotated and unrotated loadings and cross-loadings, and to loadings and cross-loadings obtained with or without Kaiser normalization adjustments. As loadings and cross-loadings can be used in various tests, so can weights, particularly in the context of formative latent variable measurement.

With respect to multi-group analyses, one key issue must be highlighted. The procedure discussed here in connection with path coefficients should also be employed with weights. The reason is that differences between path coefficients can be caused by differences between weights in different models. Therefore, one needs to ensure that the models are equivalent with respect to their measurement, which would be indicated by non-significant differences in weights, before structural coefficients (e.g., path coefficients) are compared.

The approach taken in this article is very applied, with illustrations based on the software WarpPLS that were intended to be as clear and straightforward as possible. The article is aimed at practitioners who need to conduct multivariate analyses as part of their research investigations. These analyses can sometimes be very complex, difficult, and time-consuming. Hopefully this article will make the work of these applied researchers a little easier.

Acknowledgments

The author is the developer of the software WarpPLS, which had over 5,000 users in 33 different countries at the time of this writing, and moderator of the PLS-SEM distribution list. He is grateful to those users, and to the members of the PLS-SEM distribution list, for questions, comments, and discussions on topics related to the use of WarpPLS.

References

Cenfetelli, R., & Bassellier, G. (2009). Interpretation of formative measurement in information systems research. MIS Quarterly, 33(4).

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.
Diamantopoulos, A. (1999). Export performance measurement: Reflective versus formative indicators. International Marketing Review, 16(6).

Diamantopoulos, A., & Siguaw, J.A. (2006). Formative versus reflective indicators in organizational measure development: A comparison and empirical illustration. British Journal of Management, 17(4).

Diamantopoulos, A., & Winklhofer, H. (2001). Index construction with formative indicators: An alternative to scale development. Journal of Marketing Research, 37(1).

Hair, J.F., Black, W.C., Babin, B.J., & Anderson, R.E. (2009). Multivariate data analysis. Upper Saddle River, NJ: Prentice Hall.

Hayes, A.F., & Preacher, K.J. (2010). Quantifying and testing indirect effects in simple mediation models when the constituent paths are nonlinear. Multivariate Behavioral Research, 45(4).

Keil, M., Tan, B.C., Wei, K.-K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000). A cross-cultural study on escalation of commitment behavior in software projects. MIS Quarterly, 24(2).

Kline, R.B. (1998). Principles and practice of structural equation modeling. New York, NY: The Guilford Press.

Kock, N. (2010). Using WarpPLS in e-collaboration studies: An overview of five main analysis steps. International Journal of e-collaboration, 6(4).

Kock, N. (2011a). Using WarpPLS in e-collaboration studies: Descriptive statistics, settings, and key analysis results. International Journal of e-collaboration, 7(2).

Kock, N. (2011b). Using WarpPLS in e-collaboration studies: Mediating effects, control and second order variables, and algorithm choices. International Journal of e-collaboration, 7(3).

Kock, N. (2013). WarpPLS 4.0 user manual. Laredo, TX: ScriptWarp Systems.

Kock, N., & Lynn, G.S. (2012). Lateral collinearity and misleading results in variance-based SEM: An illustration and recommendations. Journal of the Association for Information Systems, 13(7).

Miller, R.B., & Wichern, D.W. (1977). Intermediate business statistics: Analysis of variance, regression and time series. New York, NY: Holt, Rinehart and Winston.

Mueller, R.O. (1996). Basic principles of structural equation modeling. New York, NY: Springer.

Petter, S., Straub, D., & Rai, A. (2007). Specifying formative constructs in information systems research. MIS Quarterly, 31(4).

Preacher, K.J., & Hayes, A.F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4).

Rencher, A.C. (1998). Multivariate statistical inference and applications. New York, NY: John Wiley & Sons.