Quality & Quantity 38: 381–389. Kluwer Academic Publishers. Printed in the Netherlands.

Longitudinal Meta-analysis

CORA J. M. MAAS, JOOP J. HOX and GERTY J. L. M. LENSVELT-MULDERS
Department of Methodology and Statistics, Faculty of Social Sciences, Utrecht University, POB 80140, NL-3805 TC Utrecht, the Netherlands

Abstract. The goal of meta-analysis is to integrate the research results of a number of studies on a specific topic. Characteristic of meta-analysis is that in general only the summary statistics of the studies are used, and not the original data. When the published research results to be integrated are longitudinal, multilevel analysis can be used for the meta-analysis. We demonstrate this with an example of longitudinal data on the mental development of infants. We distinguish four levels in the data. The highest level (4) is the publication, in which the results of one or more studies are reported. The third level consists of the separate studies; at this level we know the degree of prematurity of the group of infants in each study. The second level consists of the repeated measures: we have data on the test age, the mental development score, the corresponding standard deviations, and the sample sizes. The lowest level is needed for the technical specification of the meta-analysis model. Both the specification of the multilevel model (the MLn program is used) and the results are presented and interpreted.

Key words: longitudinal analysis, meta-analysis, multilevel analysis.

1. Introduction

In the social and behavioural sciences, research results are often inconsistent. Human behaviour is complex and difficult to explain. Depending on characteristics of the studies, such as samples, contextual variables and operationalizations, different conclusions may be drawn on the same research topic.
The aim of meta-analysis (Glass, 1976; Light and Pillemer, 1984; Lipsey and Wilson, 2001) is the integration of such different research results and the explanation of the prevailing inconsistencies. Characteristic of the meta-analysis procedure is that in general only summary statistics, such as p-values, correlations or standard deviations, are used, and not the original data. So, if the research question concerns the correlation between two variables, meta-analysis is used to combine the correlations found in the different studies into one overall correlation. The deviations of the individual studies from this overall correlation are also estimated.

Correspondence concerning this article should be sent to Cora Maas, Department of Methodology and Statistics, University of Utrecht, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands.

When the deviations are
substantial, characteristics of the studies can be used to explain these deviations (Cornell and Mulrow, 1999).

A simple approach to meta-analysis consists of combining the p-values of a number of studies into one global p-value. Methods for this are well known (cf. Hedges and Olkin, 1985; Schwarzer, 1989). A more sophisticated approach is to estimate in each study an effect size for the outcome, and to combine the effect sizes into one global effect size, plus a significance test for the combined effect. Implicit in combining effects from different studies is the assumption that all studies estimate the same effect in an identical way; it is assumed that the effects are homogeneous across the studies. Part of a meta-analysis is a test of this assumption. If the homogeneity test is significant, the assumption of homogeneity is rejected, and the effects must be assumed heterogeneous. In the presence of heterogeneous effects, there is no common study effect. There is, of course, an average study effect, and variance of the study outcomes around that average. The next step is explaining this variance using study characteristics. In classical meta-analysis (cf. Hedges and Olkin, 1985), the approach to explaining between-studies variance is to cluster the studies into homogeneous clusters, followed by a post hoc explanation of the cluster structure.

A very general meta-analysis model is the random effects model (cf. Hedges and Olkin, 1985, p. 189). The random effects model assumes that the study results differ because of sampling variation, plus additional variation that reflects real differences between the studies. Hedges and Olkin propose to use weighted least squares regression to analyze the differences between the studies. Raudenbush and Bryk (Raudenbush and Bryk, 1985; Raudenbush, 1994) point out that random effects meta-analysis is closely related to multilevel analysis.
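The p-value combination step mentioned above can be written down in a few lines. The following sketch (ours, not taken from the article or the cited manuals) implements Stouffer's Z, one of the well-known combination procedures; it assumes independent studies reporting one-sided p-values:

```python
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Combine one-sided p-values from independent studies into a single
    global p-value using Stouffer's Z method with equal weights."""
    nd = NormalDist()
    # Convert each p-value to a z-score, sum, and rescale by sqrt(k).
    z = sum(nd.inv_cdf(1.0 - p) for p in p_values) / len(p_values) ** 0.5
    return 1.0 - nd.cdf(z)

# Four studies that are individually borderline can be jointly convincing.
print(stouffer_combined_p([0.04, 0.06, 0.05, 0.07]))
```

Note that such a combined p-value says nothing about homogeneity; the test of that assumption is a separate step, as described above.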
In meta-analysis, we have two nested levels of observations: studies, and individuals within studies. If we had access to the original data, we could perform a standard multilevel analysis on these data, including the study characteristics as explanatory variables at the study level. However, generally we do not have access to the original data. In this case, we use an adapted multilevel procedure, which uses only the sufficient statistics available from the publications (cf. Goldstein, 1995). The flexibility of multilevel analysis allows us to include not only study characteristics, but also certain design characteristics, such as the occurrence of repeated measures. In the remainder of this article, we show how multilevel analysis can be used to carry out a meta-analysis of longitudinal data.

2. The Model

A classical meta-analysis model is the random effects model, as formulated by Hedges and Olkin (1985). In this model, the different research results are the result not only of random sampling fluctuations within the studies, but also of systematic differences between the studies. The specification of this model is:

O_j = µ_0 + r_j + e_j  (1)
where O_j is the research result of study j, µ_0 is the mean effect of all studies, r_j is the residual prediction error, and e_j is the sampling error. The random effects meta-analysis model specified in Equation (1) (cf. Hedges and Olkin, 1985, p. 189) is formally equivalent to the multilevel intercept-only model, presented below in the usual notation (cf. Bryk and Raudenbush, 1992):

d_j = γ_0 + u_j + e_j  (2)

The variance of the residual prediction error (σ_u²) reflects the heterogeneity of the research results. The null hypothesis that this variance is zero is identical to the null hypothesis tested in the usual homogeneity tests in meta-analysis. When the null hypothesis is rejected, attempts can be made to explain the observed variance with characteristics of the studies. Adding explanatory variables at the study level, the model becomes:

d_j = γ_0 + γ_1 Z_1j + γ_2 Z_2j + … + γ_p Z_pj + u_j + e_j  (3)

where d_j is the research result of study j, γ_0 is the regression intercept of all studies, γ_1 to γ_p are regression coefficients, Z_1 to Z_p are characteristics of the studies, u_j is the residual prediction error, and e_j is the sampling error.

When the published research results to be integrated are longitudinal, and the studies only publish the effects at the different time points (that is, no growth curves), multilevel analysis can be used for the meta-analysis. An advantage of multilevel meta-analysis is that characteristics at the study level can be entered as explanatory variables in the regression equation, whereas in the classical meta-analysis procedure only clusters of studies with comparable characteristics can be distinguished. An additional advantage is that multilevel meta-analysis does not assume that all studies report on the same time points; in fact, all time points may be different.
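To make the random effects model of Equations (1)–(3) concrete, the following sketch (ours; the article itself uses maximum-likelihood multilevel estimation, not this estimator) pools study outcomes with the common method-of-moments (DerSimonian–Laird) estimator of the between-study variance:

```python
def random_effects_summary(effects, variances):
    """Method-of-moments (DerSimonian-Laird) random-effects pooling.
    `effects` are study outcomes O_j; `variances` their sampling variances."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    mu_fixed = sum(wi * o for wi, o in zip(w, effects)) / sw
    # Homogeneity statistic Q: a significant Q rejects a common effect.
    q = sum(wi * (o - mu_fixed) ** 2 for wi, o in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    mu_re = sum(wi * o for wi, o in zip(w_re, effects)) / sum(w_re)
    return mu_re, tau2, q

print(random_effects_summary([3.0, 5.0, 7.0], [0.25, 0.25, 0.25]))  # -> (5.0, 3.75, 32.0)
```

Here tau2 plays the role of the study-level variance σ_u² of Equation (2): it is zero when the studies are homogeneous, and positive when real between-study differences remain.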
3. The Specification of a Longitudinal Meta-analysis Model

The starting point for the specification of the longitudinal meta-analysis model is the standard longitudinal model. If we, just for a moment, consider the outcome of a study as observed (and not as a series of means of a group with a specific standard deviation), we can specify a simple longitudinal model as follows. The observed score Y_ts of study s at time point t is a linear function of time plus random error. The lowest level is therefore the level of the repeated measures, and this level-1 model is specified as follows:

Y_ts = π_0s + π_1s time_ts + e_ts  (4)

where Y_ts is the observed score at time point t in study s, π_0s is the intercept of study s, time_ts is time point t in study s, π_1s is the effect of time in study s on the observed score, and e_ts is the random time effect, that is, the deviation of the observed scores from the study mean.
If we specify no explanatory variables at the second level (the level of the study), the second-level specification becomes:

π_0s = β_00 + r_0s  (5)
π_1s = β_10 + r_1s  (6)

where π_0s is the intercept of study s, π_1s is the effect of time in study s on the observed score, β_00 is the intercept, β_10 is the effect of time on the observed score, r_0s is the random study effect on the intercept, and r_1s is the random study effect of time on the observed score. These equations can be written as one single regression equation by substituting Equations (5) and (6) into (4):

Y_ts = β_00 + β_10 time_ts + r_0s + r_1s time_ts + e_ts  (7)

The difference between the longitudinal multilevel model specified in Equation (7) and the longitudinal meta-analysis model is that in the latter we do not have access to the original observed scores, but only to the means, standard deviations and sample sizes of the observed scores from each study. However, assuming a normal distribution, these are sufficient statistics that contain all the information in the sample. With respect to the specification of the model, the consequence is that the response variable Y_ts represents the mean of the observed scores in study s, and that the error variance of the within-study sampling errors e_ts in Equation (7) is assumed known. Assuming normality, the standard error of a mean is estimated by the well-known formula:

SE_ts = SD_ts / √n_ts  (8)

In this equation, SD and n are known quantities, so the standard error SE and the corresponding sampling variance V = SE² are known. Thus, the sampling variance of the means in each study is given by:

V_ts = SD_ts² / n_ts  (9)

The unknown parameters in Equation (7) are the fixed regression coefficients β_00 and β_10, and the study-level variances V_0s of the r_0s terms and V_1s of the r_1s terms. The null hypothesis for the fixed regression coefficients, which corresponds to tests of specific study characteristics, is that they are equal to zero.
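Equations (8) and (9) amount to one line of arithmetic per reported mean; a minimal sketch (function name ours):

```python
from math import sqrt

def known_sampling_variance(sd, n):
    """V_ts = SD_ts**2 / n_ts: the known level-1 sampling variance of a
    reported study mean (Equation 9); its square root is SE_ts (Equation 8)."""
    v = sd ** 2 / n
    return v, sqrt(v)

# A study reporting SD = 12 for a group of n = 36 infants:
v, se = known_sampling_variance(12.0, 36)   # v = 4.0, se = 2.0
```

These known variances are exactly what the meta-analysis model fixes at the lowest level, instead of estimating a residual variance from raw data.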
This is tested by a t-ratio formed by the ratio of the estimated coefficient to its standard error. When the number of level-two units is large, this ratio follows a standard normal distribution; otherwise a Student's t distribution can be used (cf. Bryk and Raudenbush, 1992). The study-level variances V_0s and V_1s can also be tested for significance. The usual
Maximum Likelihood procedure includes an asymptotic test based on the standard error of V, computed from the inverse of the information matrix (Goldstein, 1995). Bryk and Raudenbush (1992, p. 55) prefer a χ²-test based on the standardised OLS residuals:

χ² = Σ_{s=1}^{S} ((Y_ts − Ŷ_ts) / SE_ts)²  (10)

with df = S − q − 1 (q = the number of independent variables). The Bryk and Raudenbush test has the advantage that in the intercept-only model it is equivalent to the usual homogeneity test employed in meta-analysis (Hedges and Olkin, 1985). The problem with this test is that it is only available in the program HLM developed by Bryk and Raudenbush (Bryk, Raudenbush and Congdon, 1994), and not in the program MLn developed by Goldstein (Rasbash and Woodhouse, 1995). However, it is possible to carry out multilevel meta-analysis using standard multilevel software. Since this approach is more general, we restrict ourselves to the results of the general multilevel analysis using the program MLn (Rasbash and Woodhouse, 1995).

The model parameters β_00, β_10, V_0s and V_1s can be estimated using multilevel software, provided that it is possible to put constraints on the variances in the random part of the model. Specifically, the variance of the e_ts, which is assumed known, is specified by including the standard error as given in Equation (8) at the lowest level as a predictor variable. The standard error is used as a predictor in the random part only:

Y_ts = β_00 + β_10 time_ts + r_0s + r_1s time_ts + SE_ts × e_ts  (11)

Using the explanatory variable SE and constraining the level-one variance to one, we obtain the required sampling variance (Goldstein, 1995, p. 98). The estimation of the model parameters β_00, β_10, V_0s and V_1s, which are specified at the study level, then proceeds in the usual way.
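The homogeneity statistic of Equation (10) is easy to compute outside any particular package once the fitted values and the known standard errors are available; a minimal sketch (function name and data ours):

```python
def homogeneity_chi2(observed, fitted, se, n_predictors):
    """Chi-square homogeneity statistic of Equation (10): the sum of
    squared standardized residuals, with df = S - q - 1."""
    chi2 = sum(((y - f) / s) ** 2 for y, f, s in zip(observed, fitted, se))
    df = len(observed) - n_predictors - 1
    return chi2, df

# Two points fitted exactly, one off by twice its standard error:
chi2, df = homogeneity_chi2([10.0, 12.0, 18.0], [10.0, 12.0, 14.0],
                            [1.0, 1.0, 2.0], 0)   # chi2 = 4.0, df = 2
```

The resulting statistic is referred to a χ² distribution with the stated degrees of freedom; in the intercept-only model it reduces to the classical homogeneity test.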
4. Example

We demonstrate the specification and interpretation of a longitudinal meta-analysis multilevel model, using an example from a longitudinal meta-analysis of the development of infants. The data used in the example are so-called Bayley scores, which are standardized scores on the mental development of infants. We have access to 28 articles, in which 40 studies are reported, with a total of 67 time points. The articles were published in the period between 1973 and 1990. The difference between the multilevel model specified above and the model needed for these data is that we now have to specify a four-level model. We need three levels for substantive reasons, because we have not only longitudinal data nested within studies, but also
an additional level of studies nested within articles. The longitudinal model, without other explanatory variables, therefore becomes:

Y_tsa = γ_000 + γ_100 Age_tsa + r_0sa + r_1sa Age_tsa + u_00a + SE_tsa × e_tsa  (12)

For the specification of the meta-analysis model we need one more level. This level exists for technical reasons: at this level we constrain the estimate of the variance of the random sampling errors to equal 1. Multiplying this estimate by the predictor that contains the standard errors gives us the desired known variances at the first level. The specification of the model in MLn is given in Appendix 1.

The results of the intercept-only model show that there is no variance at the level of the studies. This means that studies in one publication are exchangeable; they are more alike than studies reported in different articles. As a result, we may collapse our data over this level. Leaving out the study level, which means that the variables observed at the study level are now treated as if they were observed at the time-points level, gives the estimate for the mean Bayley score. The total variance is decomposed into two parts: 74.65% is at the study level, 25.35% at the time-points level. The study-level variance is clearly significant (p < 0.00). In addition to the significance test, Hunter and Schmidt (1990) suggest that the between-study variance should be considered important when it is at least 25% of the total variance. The conclusion is therefore that there are substantial differences between the studies.

Adding the age (in months) of the infants (centred around the grand mean) gives the following equation:

Bayley = … + … · Age

We should expect no effect of the variable Age, because the Bayley scores are standardized for age. Nevertheless, the results show a positive effect of Age. This means that premature infants perform less well.
Adding the predictor variable Prematurity (also centred around the grand mean) to the model gives the following:

Bayley = … + … · Age − 9.509 · Prematurity

The effect of Age is no longer significant. The more premature an infant is, the lower its Bayley score. Adding the interaction of Age and Prematurity to the model gives the following result:

Bayley = … + … · Age − 9.593 · Prematurity + … · Age × Prematurity

The effect of Age is again not significant. The interpretation of the regression equation is that the less premature the infant, the higher the Bayley score, but as the premature infants become older they catch up. The variance explained by the variables in the last model is 83.49% at the study level, and 6.02% at the level of the time points.
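Moderator analyses of the kind reported above follow Equation (3). For a single study-level moderator the precision-weighted least squares fit has a closed form; the following sketch (ours, with made-up numbers; the article's own estimates come from the four-level maximum-likelihood fit) illustrates it:

```python
def wls_meta_regression(effects, moderator, variances):
    """Precision-weighted least squares regression of study effects on one
    study-level moderator Z (cf. Equation 3), with weights w_j = 1 / V_j."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    zbar = sum(wi * z for wi, z in zip(w, moderator)) / sw
    dbar = sum(wi * d for wi, d in zip(w, effects)) / sw
    szz = sum(wi * (z - zbar) ** 2 for wi, z in zip(w, moderator))
    szd = sum(wi * (z - zbar) * (d - dbar)
              for wi, z, d in zip(w, moderator, effects))
    slope = szd / szz
    return dbar - slope * zbar, slope      # (intercept, slope)

# Study means that drop 9.5 points per unit of the moderator:
intercept, slope = wls_meta_regression([100.0, 90.5, 81.0],
                                       [0.0, 1.0, 2.0],
                                       [1.0, 1.0, 1.0])
```

A negative slope here corresponds to the interpretation given in the text: higher values of a prematurity-type moderator go with lower outcome scores.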
There is one variable at the article level: the year of publication. This variable is treated as a dummy variable. Articles published between 1973 and 1980 score 0, and articles published between 1981 and 1990 score 1. When the year of publication is missing, the mean of the dummy variable is imputed. We have checked the possibility that the studies with a missing publication year differ from the others by adding a second dummy variable to the regression equation, which scores 1 when the year code is missing and 0 for the studies with a known publication year. Neither the year-of-publication variable nor the missingness dummy variable is significant.

5. Discussion

The advantages of multilevel analysis for longitudinal meta-analysis are twofold. The first advantage is that the multilevel model and the accompanying software are very flexible, which means that many potential explanatory variables may be added to the model at various levels. These explanatory variables may have a substantive meaning, such as the Age and Prematurity variables used in our example, or serve as methodological control variables, like the publication year and missingness variables. The second advantage is that the multilevel model does not assume that the time points are the same in all studies, which means that there are very few restrictions on the studies to be added to the analysis. For multilevel longitudinal meta-analysis it is required that each study publishes at least the actual time points, the sample size, and the mean and standard deviation of the main variable at each time point. Most studies in our example either published these, or gave information from which these statistics could be deduced. Meta-analysis methods have so far mostly been used to combine the results of a set of studies that are already published.
However, they can also be fruitfully applied in co-operative studies, where a number of research groups plan a set of longitudinal studies with the aim of combining the results at a later stage. In such cases, the choice of variables and study design can be made before the meta-analysis, and the problems of integrating the results can be anticipated and solved in the planning stage.

Appendix 1

MLn commands to carry out a multilevel longitudinal meta-analysis on Bayley scores:

(1) name c1 article c2 study c3 testage c4 Bayley c5 SE c6 prem c7 year c8 dumyear c9 const
(2) iden 4 article 3 study 2 testage 1 const
(3) resp Bayley
(4) expl const SE
(5) fpar SE
(6) setv 1 SE
(7) setv 2 const
(8) setv 3 const
(9) setv 4 const
(10) input c11
(11) 0 0 0 1 1
(12) finish
(13) rcon c11

Explanation

(1): There are 9 variables in the data file, named article, study, testage, Bayley, SE, prematurity, year, dumyear and the constant.
(2): Four levels are distinguished. The highest level is the article; an article publishes the results of one or more studies, so the third level consists of the separate studies. The second level contains the longitudinal data. The lowest level is needed for the technical specification of the meta-analysis model.
(3): The dependent variable is the Bayley score.
(4): The explanatory variables are the constant and the standard error.
(5): The standard error is a predictor in the random part only, so we remove this variable from the fixed part.
(6)–(9): The total variance of the Bayley scores is distributed over the levels. The variance at the first level will be constrained to one, so that this level carries the known sampling variance implied by the standard error.
(10)–(13): To constrain the estimate of the random parameter at the first level to equal 1, we specify a constraints vector. The first three zeros are for the random parameters of the second, third and fourth levels; these parameters are not constrained. The ones in the fourth and fifth columns specify the constraint that fixes the estimate of the first-level random parameter at 1.

References

Bryk, A. S. & Raudenbush, S. W. (1992). Hierarchical linear models. Newbury Park, CA: Sage.
Bryk, A. S., Raudenbush, S. W. & Congdon, R. T. (1994). HLM 2/3. Hierarchical linear modeling with the HLM/2L and HLM/3L programs. Chicago: Scientific Software International.
Cornell, J. & Mulrow, C. (1999). Meta-analysis. In: H. J. Ader and G. J. Mellenbergh (eds.), Research methodology in the social, behavioral, and life sciences. London: Sage.
Glass, G. V. (1976). Primary, secondary and meta analysis of research. Educational Researcher 10: 3–8.
Goldstein, H. (1995). Multilevel statistical models. London: Edward Arnold.
Hedges, L. V. & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego, CA: Academic Press.
Hunter, J. E. & Schmidt, F. L. (1990). Methods of meta-analysis. Newbury Park, CA: Sage.
Light, R. J. & Pillemer, D. B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
Lipsey, M. W. & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Rasbash, J. & Woodhouse, G. (1995). MLn command guide. London: Multilevel Models Project, University of London.
Raudenbush, S. W. (1994). Random effects models. In: H. Cooper & L. V. Hedges (eds.), The handbook of research synthesis. New York: Russell Sage Foundation.
Raudenbush, S. W. & Bryk, A. S. (1985). Empirical Bayes meta-analysis. Journal of Educational Statistics 10.
Schwarzer, R. (1989). Meta-analysis programs. Program manual. Berlin: Institut für Psychologie, Freie Universität Berlin.
Polynomial Regression POLYNOMIAL AND MULTIPLE REGRESSION Polynomial regression used to fit nonlinear (e.g. curvilinear) data into a least squares linear regression model. It is a form of linear regression
STAT E-150 Statistical Methods Multiple Regression Three percent of a man's body is essential fat, which is necessary for a healthy body. However, too much body fat can be dangerous. For men between the
Elements of statistics (MATH0487-1) Prof. Dr. Dr. K. Van Steen University of Liège, Belgium December 10, 2012 Introduction to Statistics Basic Probability Revisited Sampling Exploratory Data Analysis -
Slide 1 Section 14 Simple Linear Regression: Introduction to Least Squares Regression There are several different measures of statistical association used for understanding the quantitative relationship
Opening Example CHAPTER 13 SIMPLE LINEAR REGREION SIMPLE LINEAR REGREION! Simple Regression! Linear Regression Simple Regression Definition A regression model is a mathematical equation that descries the
Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written
Multiple Regression in SPSS STAT 314 I. The accompanying data is on y = profit margin of savings and loan companies in a given year, x 1 = net revenues in that year, and x 2 = number of savings and loan
17. SIMPLE LINEAR REGRESSION II The Model In linear regression analysis, we assume that the relationship between X and Y is linear. This does not mean, however, that Y can be perfectly predicted from X.
Variance of OLS Estimators and Hypothesis Testing Charlie Gibbons ARE 212 Spring 2011 Randomness in the model Considering the model what is random? Y = X β + ɛ, β is a parameter and not random, X may be
Inference for Regression Simple Linear Regression IPS Chapter 10.1 2009 W.H. Freeman and Company Objectives (IPS Chapter 10.1) Simple linear regression Statistical model for linear regression Estimating
Paper P-702 Individual Growth Analysis Using PROC MIXED Maribeth Johnson, Medical College of Georgia, Augusta, GA ABSTRACT Individual growth models are designed for exploring longitudinal data on individuals
CHAPTER 20 Meta-Regression Introduction Fixed-effect model Fixed or random effects for unexplained heterogeneity Random-effects model INTRODUCTION In primary studies we use regression, or multiple regression,
5. Linear Regression Outline.................................................................... 2 Simple linear regression 3 Linear model............................................................. 4
Econ 371 Problem Set #3 Answer Sheet 4.1 In this question, you are told that a OLS regression analysis of third grade test scores as a function of class size yields the following estimated model. T estscore
Regression Analysis Using JMP Yiming Peng, Department of Statistics February 12, 2013 2 Presentation and Data http://www.lisa.stat.vt.edu Short Courses Regression Analysis Using JMP Download Data to Desktop
MULTIPLE REGRESSION AND ISSUES IN REGRESSION ANALYSIS MSR = Mean Regression Sum of Squares MSE = Mean Squared Error RSS = Regression Sum of Squares SSE = Sum of Squared Errors/Residuals α = Level of Significance
AP * Statistics Review Linear Regression Teacher Packet Advanced Placement and AP are registered trademark of the College Entrance Examination Board. The College Board was not involved in the production
[Leeuw, Edith D. de, and Joop Hox. (2008). Missing Data. Encyclopedia of Survey Research Methods. Retrieved from http://sage-ereference.com/survey/article_n298.html] Missing Data An important indicator
1) The S & P/TSX Composite Index is based on common stock prices of a group of Canadian stocks. The weekly close level of the TSX for 6 weeks are shown: Week TSX Index 1 8480 2 8470 3 8475 4 8510 5 8500
Mgt 540 Research Methods Data Analysis 1 Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm http://web.utk.edu/~dap/random/order/start.htm
This article was downloaded by: [University Library Utrecht] On: 15 May 2012, At: 01:20 Publisher: Psychology Press Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: