Mixed-effects Regression Models (MRM) - Chapters 4 and 5

- include random effects to account for each subject's effect on their data, and to account for the variance-covariance structure of the longitudinal data
- (conditional) variance-covariance matrix: V(y_i | X_i) = Z_i Σ_υ Z_i' + σ²_ε I_{n_i}
- conditional on the random effects, the responses from a subject are independent (conditional independence assumption)
- subjects can be measured at potentially very different timepoints (i.e., the Z_i matrix can easily vary across subjects)
- in SAS, use the RANDOM statement in PROC MIXED
Covariance Pattern Models (CPM) - Chapter 6

- DO NOT include random effects (and thus these are not mixed models, despite the fact that we use PROC MIXED to estimate them)
- explicitly model the (conditional) variance-covariance matrix in terms of particular forms, e.g., V(y_i | X_i) = Σ_i = CS, AR(1), Toeplitz, UN
- timing of the repeated measures should be more or less the same for all subjects (though subjects DO NOT have to be measured at all timepoints)
- in SAS, use the REPEATED statement in PROC MIXED
Mixed-effects Regression Models with Autocorrelated Errors - Chapter 7

- combine aspects of both MRM and CPM
- include random effects
- also allow the errors to be correlated (according to some particular forms); relax the conditional independence assumption
- V(y_i | X_i) = Z_i Σ_υ Z_i' + σ²_ε Ω_i
- timing of the repeated measures should be more or less the same for all subjects (though subjects DO NOT have to be measured at all timepoints)
- in SAS, use both the RANDOM and REPEATED statements in PROC MIXED
Figure 7.1. MRM for two individuals with uncorrelated errors.

Figure 7.2. MRM for two individuals with AR(1) errors.
Mixed-effects Model with Autocorrelated Errors

y_i (n_i × 1) = X_i (n_i × p) β (p × 1) + Z_i (n_i × r) υ_i (r × 1) + e_i (n_i × 1)

i = 1 ... N subjects; j = 1 ... n_i observations within subject i

- y_i = the n_i × 1 vector of responses for subject i
- X_i = a known n_i × p design matrix
- β = a p × 1 vector of population parameters
- Z_i = a known n_i × r design matrix
- υ_i = an r × 1 vector of individual effects ~ N(0, Σ_υ)
- e_i = an n_i × 1 vector of random residuals ~ N(0, σ²_ε Ω_i)

relative to MRM: σ²_ε Ω_i instead of σ²_ε I_{n_i}
relative to CPM: inclusion of υ_i
As a result, the observations y_i and random coefficients υ have the joint multivariate normal distribution:

  [ y_i ]       ( [ X_i β ]   [ Z_i Σ_υ Z_i' + σ²_ε Ω_i    Z_i Σ_υ ] )
  [  υ  ]  ~  N( [   0   ] ,  [ Σ_υ Z_i'                   Σ_υ     ] )

The mean of the posterior distribution of υ, given y_i, yields the EB estimator (or EAP, "Expected A Posteriori") of the individual trend parameters:

  ῡ_i = [ Z_i' (σ²_ε Ω_i)⁻¹ Z_i + Σ_υ⁻¹ ]⁻¹ Z_i' (σ²_ε Ω_i)⁻¹ (y_i − X_i β)

with covariance matrix

  Σ_{υ|y_i} = [ Z_i' (σ²_ε Ω_i)⁻¹ Z_i + Σ_υ⁻¹ ]⁻¹
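In the random-intercept special case with Ω_i = I, the EB formula above collapses to a scalar shrinkage estimate. The sketch below checks that equivalence numerically; the numeric values are illustrative, not from the text:

```python
# EB estimate of a random intercept: scalar special case of
#   uhat_i = (Z'(s2*Omega)^-1 Z + Sigma^-1)^-1 Z'(s2*Omega)^-1 (y - X*beta)
# with Z_i a vector of ones and Omega_i = I.  Illustrative values.
s2_eps = 4.0                      # error variance sigma^2_eps
s2_ups = 2.0                      # random-intercept variance
resid = [1.2, 0.8, 1.6, 1.0]      # y_i - X_i*beta for one subject
n_i = len(resid)

post_var = 1.0 / (n_i / s2_eps + 1.0 / s2_ups)   # Sigma_{u|y_i}
uhat = post_var * sum(resid) / s2_eps            # EB (posterior mean) estimate

# equivalent "shrinkage" form: reliability times the mean residual
shrink = s2_ups / (s2_ups + s2_eps / n_i)
uhat2 = shrink * (sum(resid) / n_i)
```

The shrinkage form makes the EB behavior visible: with few observations or a large error variance, the estimate is pulled toward the population mean of zero.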
AR(1) Errors

The errors follow a first-order autoregressive (AR1) process:

  e_k = ρ e_{k−1} + ε_k

where ε_k is assumed to be iid N(0, σ²_ε) and ρ is the autocorrelation coefficient. Further assume |ρ| < 1.

Stationarity: V(e_k) and C(e_k, e_{k+h}) are independent of k. The error variance-covariance matrix is then of the form:

  σ²_ε Ω = [σ²_ε / (1 − ρ²)] ×
  [ 1         ρ         ρ²        ...  ρ^{n−1} ]
  [ ρ         1         ρ         ...  ρ^{n−2} ]
  [ ρ²        ρ         1         ...  ρ^{n−3} ]
  [ ...                                        ]
  [ ρ^{n−1}   ρ^{n−2}   ρ^{n−3}   ...  1       ]
- "autoregressive" since the process is essentially a regression equation in which e_k is related to its own past values, instead of other independent variables
- Ω depends on one parameter, ρ
- AR1 is sometimes referred to as a Markov process, since e_k only depends on the previous value
- correlation between residuals declines exponentially with the number of periods separating them
For AR(1), derivation of the variance-covariance structure:

  V(e_k) = V(ρ e_{k−1} + ε_k) = ρ² V(e_{k−1}) + σ²_ε

With stationarity, V(e_k) = V(e_{k−1}), and so

  V(e_k)(1 − ρ²) = σ²_ε   ⟹   V(e_k) = σ²_ε / (1 − ρ²)

  C(e_k, e_{k−1}) = E(e_k e_{k−1}) = E[(ρ e_{k−1} + ε_k) e_{k−1}] = ρ V(e_{k−1}) = ρ σ²_ε / (1 − ρ²)

More generally, C(e_k, e_{k−s}) = ρ^s σ²_ε / (1 − ρ²)
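The derivation above can be checked numerically: the stationary variance must satisfy the recursion V = ρ²V + σ²_ε, and every lag-s element of the matrix form must equal ρ^s times the variance. Illustrative values only:

```python
# Numeric check of the AR(1) covariance derivation:
#   V(e_k) = s2/(1-rho^2)  and  C(e_k, e_{k-s}) = rho^s * s2/(1-rho^2).
rho, s2 = 0.6, 2.0
v = s2 / (1 - rho**2)                 # stationary variance

# stationarity identity: V(e_k) = rho^2 * V(e_{k-1}) + s2
lhs, rhs = v, rho**2 * v + s2

# full matrix form: (j,k) element is s2/(1-rho^2) * rho^|j-k|
cov = [[v * rho ** abs(j - k) for k in range(5)] for j in range(5)]
```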
In SAS PROC MIXED, AR1 is written as:

  σ̃²_ε Ω̃ = σ̃²_ε ×
  [ 1         ρ         ρ²        ...  ρ^{n−1} ]
  [ ρ         1         ρ         ...  ρ^{n−2} ]
  [ ρ²        ρ         1         ...  ρ^{n−3} ]
  [ ...                                        ]
  [ ρ^{n−1}   ρ^{n−2}   ρ^{n−3}   ...  1       ]

- PROC MIXED estimates the AR(1) term ρ σ̃²_ε = [ρ / (1 − ρ²)] σ²_ε
- the residual variance σ̃²_ε = σ²_ε / (1 − ρ²)
- and lists the ratio of the AR(1) term to the residual variance, which equals ρ̂
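The algebra of this reparameterization can be sketched in a few lines (illustrative values, not SAS output): the factor 1/(1 − ρ²) is absorbed into the reported residual variance, and the ratio of the AR(1) term to the residual variance recovers ρ.

```python
# Relating the "textbook" AR(1) parameters to the slide's PROC MIXED
# parameterization.  Values are illustrative.
rho, s2_eps = 0.4, 3.0                 # textbook rho and sigma^2_eps

s2_tilde = s2_eps / (1 - rho**2)       # reported residual variance
ar1_term = rho * s2_tilde              # reported AR(1) term
ratio = ar1_term / s2_tilde            # ratio listed: equals rho

s2_back = s2_tilde * (1 - rho**2)      # recover textbook sigma^2_eps
```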
AR1 with Non-Stationarity: V(e_k) and C(e_k, e_{k+h}) are allowed to vary over time, but assume e_0 = 0 (Mansour et al., 1985). Then the error variance-covariance matrix σ²_ε Ω has:

  Ω =
  [ 1          ρ                 ρ²                     ...  ρ^{n−1}               ]
  [ ρ          1 + ρ²            ρ(1 + ρ²)              ...  ρ^{n−2}(1 + ρ²)       ]
  [ ρ²         ρ(1 + ρ²)         1 + ρ² + ρ⁴            ...  ρ^{n−3}(1 + ρ² + ρ⁴)  ]
  [ ...                                                                            ]
  [ ρ^{n−1}    ρ^{n−2}(1 + ρ²)   ρ^{n−3}(1 + ρ² + ρ⁴)   ...  (1 − ρ^{2n})/(1 − ρ²) ]

with k-th diagonal element (1 − ρ^{2k})/(1 − ρ²), or in terms of the Cholesky factorization Ω = Υ Υ':

  Υ =
  [ 1                              ]
  [ ρ          1                   ]
  [ ρ²         ρ          1        ]
  [ ...                            ]
  [ ρ^{n−1}    ρ^{n−2}    ...    1 ]
- Ω depends on one parameter, ρ
- variances and covariances increase across time
- not available in PROC MIXED
- available in MIXREG!
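The Cholesky form gives an easy way to verify the non-stationary structure: build Υ, form Ω = Υ Υ', and check that the diagonal matches the closed form (1 − ρ^{2k})/(1 − ρ²) and grows over time. A minimal pure-Python sketch with an illustrative ρ:

```python
# Non-stationary AR(1) with e_0 = 0: Omega = Upsilon * Upsilon', where the
# Cholesky factor has entry rho^(j-k) in row j, column k (j >= k, 0-based).
rho, n = 0.5, 5

U = [[rho ** (j - k) if j >= k else 0.0 for k in range(n)] for j in range(n)]
Omega = [[sum(U[j][m] * U[k][m] for m in range(n)) for k in range(n)]
         for j in range(n)]

diag = [Omega[k][k] for k in range(n)]
# closed form for the k-th diagonal element, k = 1..n (here k = index + 1)
closed = [(1 - rho ** (2 * (k + 1))) / (1 - rho**2) for k in range(n)]
```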
MA(1) Errors

The errors follow a first-order moving average (MA1) process:

  e_k = ε_k − θ ε_{k−1}

where ε_k is assumed to be iid N(0, σ²_ε) and θ is the moving-average coefficient. The error variance-covariance matrix is then of the form:

  σ²_ε Ω = σ²_ε ×
  [ 1 + θ²   −θ        0        ...  0      ]
  [ −θ       1 + θ²    −θ       ...  0      ]
  [ 0        −θ        1 + θ²   ...  0      ]
  [ ...                                     ]
  [ 0        0         ...      −θ   1 + θ² ]
- Ω depends on one parameter, θ
- only the first-order lags are correlated
- available in PROC MIXED as TYPE=TOEP(2), though the parameterization is different
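The TOEP(2) connection can be made explicit: a two-band Toeplitz matrix estimates the diagonal and first off-diagonal freely, while MA(1) ties them to a single θ. The mapping below is my algebra for the relation, not output from either procedure:

```python
# MA(1) errors e_k = eps_k - theta*eps_{k-1} give variance s2*(1 + theta^2)
# and lag-1 covariance -theta*s2; a TOEP(2) structure estimates those two
# band values directly.  Illustrative values.
theta, s2 = 0.3, 2.0

ma1_var = s2 * (1 + theta**2)      # diagonal of the MA(1) covariance
ma1_cov1 = -theta * s2             # first off-diagonal

# TOEP(2) band parameters reproducing the same matrix
toep_var, toep_cov1 = ma1_var, ma1_cov1

# implied lag-1 correlation: -theta/(1 + theta^2), bounded by 1/2 in magnitude
lag1_corr = ma1_cov1 / ma1_var
```

The bounded lag-1 correlation shows why a fitted TOEP(2) need not correspond to any MA(1): only band correlations with magnitude below 1/2 are reachable.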
ARMA(1,1) Errors

A more general form for autocorrelated errors is the first-order mixed autoregressive-moving average process, which depends on both parameters ρ and θ:

  e_k = ρ e_{k−1} + ε_k − θ ε_{k−1}

The error variance-covariance matrix is now of the form:

  σ²_ε Ω = [σ²_ε / (1 − ρ²)] ×
  [ γ₀           γ₁            ρ γ₁         ...  ρ^{n−2} γ₁ ]
  [ γ₁           γ₀            γ₁           ...  ρ^{n−3} γ₁ ]
  [ ρ γ₁         γ₁            γ₀           ...  ρ^{n−4} γ₁ ]
  [ ρ² γ₁        ρ γ₁          γ₁           ...  ρ^{n−5} γ₁ ]
  [ ...                                                     ]
  [ ρ^{n−2} γ₁   ρ^{n−3} γ₁    ρ^{n−4} γ₁   ...  γ₀         ]

where γ₀ = 1 + θ² − 2ρθ and γ₁ = (1 − ρθ)(ρ − θ)
- Ω depends on two parameters, ρ and θ
- the MA term alters the lag-1 autocorrelation, after which the autocorrelations decay as in AR1
- available in PROC MIXED, though the parameterization is different
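The γ₀ and γ₁ formulas can be verified against the process itself via its MA(∞) representation, e_k = Σ_j ψ_j ε_{k−j} with ψ₀ = 1 and ψ_j = ρ^{j−1}(ρ − θ) for j ≥ 1, so that the lag-h autocovariance is σ²_ε Σ_j ψ_j ψ_{j+h}. A truncated-sum check with illustrative values:

```python
# Check the ARMA(1,1) entries gamma_0 = 1 + theta^2 - 2*rho*theta and
# gamma_1 = (1 - rho*theta)*(rho - theta), both scaled by s2/(1 - rho^2),
# against the MA(infinity) weights psi_0 = 1, psi_j = rho^(j-1)*(rho - theta).
rho, theta, s2 = 0.6, 0.25, 1.5
J = 200  # truncation point; rho**J is negligible here

psi = [1.0] + [rho ** (j - 1) * (rho - theta) for j in range(1, J)]

def acov(h):
    """Lag-h autocovariance from the truncated MA(infinity) expansion."""
    return s2 * sum(psi[j] * psi[j + h] for j in range(J - h))

gamma0 = s2 * (1 + theta**2 - 2 * rho * theta) / (1 - rho**2)
gamma1 = s2 * (1 - rho * theta) * (rho - theta) / (1 - rho**2)
```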
Toeplitz Errors

The autocorrelations of each lag are not functionally related. The error variance-covariance matrix is then:

  σ²_ε Ω = σ²_ε ×
  [ 1          ρ₁         ρ₂        ...  ρ_{n−1} ]
  [ ρ₁         1          ρ₁        ...  ρ_{n−2} ]
  [ ρ₂         ρ₁         1         ...  ρ_{n−3} ]
  [ ...                                          ]
  [ ρ_{n−1}    ρ_{n−2}    ρ_{n−3}   ...  1       ]

- higher-order lags are often assumed equal to 0
- PROC MIXED denotes σ²_ε I_{n_i} as TOEP(1)
- order of Toeplitz = number of lags + 1
- also, the products ρ₁ σ̃²_ε, ρ₂ σ̃²_ε, etc., are estimated, and the ratio of each to the residual variance (i.e., ρ̂₁, ρ̂₂, ...) is listed
PROC MIXED Models

  DATA one;
    INFILE 'c:\data\temp.dat';
    INPUT id y time;
    timec = time;
  RUN;

MRM: y_i = X_i β + Z_i υ_i + e_i with υ_i ~ N(0, Σ_υ) and e_i ~ N(0, σ²_ε I_{n_i}), e.g.,

  PROC MIXED;
    CLASS id;
    MODEL y = time / S;
    RANDOM INT time / SUB=id TYPE=UN G GCORR;
  RUN;

For MRM, always use TYPE=UN, unless there is some reason why the random effects would be uncorrelated (the default type) or follow some kind of structure (a very unusual circumstance).
CPM: y_i = X_i β + e_i with e_i ~ N(0, σ²_ε Ω_i), e.g.,

  PROC MIXED;
    CLASS id timec;
    MODEL y = time / S;
    REPEATED timec / SUB=id TYPE=UN R RCORR;
  RUN;

or TYPE=CS, TYPE=AR(1), TYPE=TOEP, etc.
MRM with AC errors: y_i = X_i β + Z_i υ_i + e_i with υ_i ~ N(0, Σ_υ) and e_i ~ N(0, σ²_ε Ω_i), e.g.,

  PROC MIXED;
    CLASS id timec;
    MODEL y = time / S;
    RANDOM INT time / SUB=id TYPE=UN G GCORR;
    REPEATED timec / SUB=id TYPE=AR(1) R RCORR;
  RUN;

or TYPE=TOEP(2), TYPE=ARMA(1,1), etc., on the REPEATED statement.

Important note: if V(y) is n × n, DO NOT fit a structure with more than n(n + 1)/2 parameters (where n is the total number of timepoints).
Missing Data and Autocorrelated Errors

Suppose a study has 5 equally-spaced timepoints, and you want an AR(1) form for the errors:

  Ω =
  [ 1    ρ    ρ²   ρ³   ρ⁴ ]
  [ ρ    1    ρ    ρ²   ρ³ ]
  [ ρ²   ρ    1    ρ    ρ² ]
  [ ρ³   ρ²   ρ    1    ρ  ]
  [ ρ⁴   ρ³   ρ²   ρ    1  ]

Suppose a given subject is measured at T1, T3, and T4. We want

  Ω_i =
  [ 1    ρ²   ρ³ ]
  [ ρ²   1    ρ  ]
  [ ρ³   ρ    1  ]

NOT

  Ω_i =
  [ 1    ρ    ρ² ]
  [ ρ    1    ρ  ]
  [ ρ²   ρ    1  ]

Must keep track of the time-relatedness of the repeated measures.
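The correct subject-specific Ω_i is built from actual timepoints, ρ^|t_j − t_k|, rather than from observation order. A minimal sketch for the T1/T3/T4 subject above:

```python
# Subject-specific AR(1) correlation matrix: exponent is the gap in actual
# time units between the subject's observed timepoints, not their order.
rho = 0.5
times = [1, 3, 4]                 # this subject skipped T2 and T5

Omega_i = [[rho ** abs(tj - tk) for tk in times] for tj in times]

# the wrong version, indexed by observation order, understates the T1-T3 gap
Omega_wrong = [[rho ** abs(j - k) for k in range(len(times))]
               for j in range(len(times))]
```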
PROC MIXED Example

- must have a timing variable on the CLASS and REPEATED statements
- usually, it should not be the same variable name as on the MODEL statement (unless you want time treated as a categorical factor in modeling the mean response over time)

e.g.,

  DATA one;
    INFILE 'c:\data\riesby.dat';
    INPUT id hamd intcpt week endog endweek;
    time = week;
  RUN;

  PROC MIXED METHOD=ML COVTEST;
    CLASS id time;
    MODEL hamd = week endog endweek / S;
    RANDOM INT week / SUB=id TYPE=UN G GCORR;
    REPEATED time / SUB=id TYPE=AR(1) R RCORR;
  RUN;
Model Selection

Which (co)variance structure to use for a given dataset?

- MRM (G)
- CPM (R)
- MRM with AC errors (G and R)

Can compare any restricted structure (q < n(n + 1)/2, where q = number of variance-covariance parameters and n = total number of timepoints) to the CPM unstructured form (n(n + 1)/2 parameters) via a likelihood-ratio (LR) test.

If a given structure, which represents some kind of restriction of the general form, does not fit the data statistically worse than the unstructured form, then this structure is reasonable.

Degrees of freedom for this test equal n(n + 1)/2 − q, where n(n + 1)/2 and q are the numbers of (co)variance parameters estimated by the full and reduced models.
- p-values from LR tests of variance-covariance parameters need to be adjusted; the divide-by-two adjustment, as described in Snijders and Bosker (1999), does reasonably well
- can use either REML or ML for model selection of the variance-covariance structure; of course, the covariates MUST be the same
- for non-nested structures, comparison of likelihood-penalized statistics:
  - Akaike's Information Criterion (AIC) = −2 log L + 2p, where p is the total number of model parameters
  - Bayesian Information Criterion (BIC) = −2 log L + p log N, where N is the total number of subjects
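The LR statistic, the divide-by-two boundary adjustment, and the AIC/BIC penalties can be computed in a few lines. The log-likelihood and parameter-count values below are illustrative, not from the text (N = 75 echoes the Bock example's sample size):

```python
import math

# LR comparison of two nested covariance structures differing by one
# variance parameter (df = 1), with the halved p-value adjustment,
# plus AIC and BIC.  Illustrative numbers.
logL_full, logL_reduced = -350.0, -352.9
p_full, p_reduced = 10, 9        # total parameter counts
N = 75                           # number of subjects

lr = 2 * (logL_full - logL_reduced)        # chi-square statistic, df = 1
p_naive = math.erfc(math.sqrt(lr / 2))     # chi-square(1) upper-tail prob.
p_adj = p_naive / 2                        # divide-by-two adjustment

aic = [-2 * l + 2 * p
       for l, p in ((logL_full, p_full), (logL_reduced, p_reduced))]
bic = [-2 * l + p * math.log(N)
       for l, p in ((logL_full, p_full), (logL_reduced, p_reduced))]
```

For df = 1 the chi-square survival function reduces to erfc(√(x/2)); larger df would need an incomplete-gamma routine.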
Two-Step Model Selection Procedure

(1) Including all covariates of potential interest, select an appropriate (co)variance structure.
(2) Once a (co)variance structure is selected as appropriate, model trimming of the covariates is performed as usual.

Sometimes there is no best model, just some good ones; this is a similar issue as in model selection of explanatory variables. Choose a model that is in the class of good models.
Crossover Study Example

Bock (1983) examined the effect of tricyclic antidepressant (TCA) drugs on clinical status, as measured by the Weekly Psychiatric Status Scale for Episodic Affective Disorders (WPSS), in 75 depressed patients in a six-week crossover study. At each week, patients received a rating on this scale, with scores of: 1, usual self; 2, residual symptomatology; 3, partial remission; 4, marked symptomatology; 5, definite criteria for major depressive disorder; or 6, definite criteria for major depressive disorder with extreme impairment. A quasi-continuous measure of severity.
[Table: observed WPSS means, standard deviations, and correlations across the six weeks, by treatment group (TCA-None and None-TCA); numeric entries not recoverable]
A mixed-effects regression model for a subject i is:

  [ WPSS_i1 ]   [ 1  −5/2  −1/2 ]          [ 1  −5/2  −1/2 ]              [ e_i1 ]
  [ WPSS_i2 ]   [ 1  −3/2    0  ] [ β₀ ]   [ 1  −3/2    0  ] [ β₃ ]       [ e_i2 ]
  [ WPSS_i3 ] = [ 1  −1/2   1/2 ] [ β₁ ] + [ 1  −1/2   1/2 ] [ β₄ ] grp + [ e_i3 ]
  [ WPSS_i4 ]   [ 1   1/2   1/2 ] [ β₂ ]   [ 1   1/2   1/2 ] [ β₅ ]       [ e_i4 ]
  [ WPSS_i5 ]   [ 1   3/2    0  ]          [ 1   3/2    0  ]              [ e_i5 ]
  [ WPSS_i6 ]   [ 1   5/2  −1/2 ]          [ 1   5/2  −1/2 ]              [ e_i6 ]

with random effects:
- general level υ_0i
- linear trend υ_1i
- change of slope υ_2i

and grp = 0 for TCA-None, 1 for None-TCA.
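Assuming the contrast coding shown (centered linear trend −5/2 ... 5/2; slope-change codes ±1/2 within each period), the two time columns can be built and checked: they are orthogonal, so the linear and slope-change effects partition the trend cleanly.

```python
# Crossover design time codes for the six weekly measurements:
linear = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]   # -5/2 ... 5/2
slope = [-0.5, 0.0, 0.5, 0.5, 0.0, -0.5]     # change of slope across periods

# the two codes are orthogonal contrasts
dot = sum(a * b for a, b in zip(linear, slope))

# within-subjects design matrix X_i = [intercept, linear, slope change]
X = [[1.0, l, s] for l, s in zip(linear, slope)]
```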
Analysis of WPSS across time - 3 random effects, ρ = 0

[Table: estimates, standard errors, and p-values for (A) the overall-trend terms (general level β₀, linear trend β₁, slope effect β₂) and (B) the between-orders terms (general level β₃, linear trend β₄ (ns), slope effect β₅), plus the random-effect (co)variance terms, error variance σ²_ε, and log L; numeric entries not recoverable]
TCA-None group - interpretation of slope effect

slope estimate = −.25

Period A (first three timepoints):
- slope contrast coefficients for Period A = [−.5, 0, .5]
- as time increases, severity scores decrease in Period A (adjusting for overall linear decline)

Period B (latter three timepoints):
- slope contrast coefficients for Period B = [.5, 0, −.5]
- as time increases, severity scores increase in Period B (adjusting for overall linear decline)

improvement is more pronounced in Period A
None-TCA group - interpretation of slope effect

slope estimate = −.25 + .44 = .19

Period A (first three timepoints):
- slope contrast coefficients for Period A = [−.5, 0, .5]
- as time increases, severity scores increase in Period A (adjusting for overall linear decline)

Period B (latter three timepoints):
- slope contrast coefficients for Period B = [.5, 0, −.5]
- as time increases, severity scores decrease in Period B (adjusting for overall linear decline)

improvement is more pronounced in Period B
TCA-None group - estimated mean differences (intercept β̂₀; linear = −.208; slope = −.250)

  week 1: ŷ = β̂₀ − 5/2 (−.208) − 1/2 (−.250)
  week 3: ŷ = β̂₀ − 1/2 (−.208) + 1/2 (−.250)
  week 3 − week 1 = 2 (−.208) + 1 (−.250) = −.666
  per-week change = −.208 + 1/2 (−.250) = −.333

  week 4: ŷ = β̂₀ + 1/2 (−.208) + 1/2 (−.250)
  week 6: ŷ = β̂₀ + 5/2 (−.208) − 1/2 (−.250)
  week 6 − week 4 = 2 (−.208) − 1 (−.250) = −.166
  per-week change = −.208 − 1/2 (−.250) = −.083

overall change is more pronounced in Period A
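The per-week arithmetic above is just the linear estimate plus half the slope-change estimate (the slope-change code moves 1/2 per week within Period A and −1/2 per week within Period B):

```python
# Per-week change for the TCA-None group, from the reported estimates
# linear = -.208 and slope change = -.250.
b_lin, b_slope = -0.208, -0.250

per_week_A = b_lin + 0.5 * b_slope   # slope-change code rises 1/2 per week
per_week_B = b_lin - 0.5 * b_slope   # slope-change code falls 1/2 per week
```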
None-TCA group - estimated mean differences (group-specific estimates: intercept β̂₀ + β̂₃, linear β̂₁ + β̂₄, slope −.25 + .44 = .19)

  week 1: ŷ = (β̂₀ + β̂₃) − 5/2 (β̂₁ + β̂₄) − 1/2 (.19)
  week 3: ŷ = (β̂₀ + β̂₃) − 1/2 (β̂₁ + β̂₄) + 1/2 (.19)
  week 3 − week 1 = 2 (β̂₁ + β̂₄) + 1 (.19)
  per-week change = (β̂₁ + β̂₄) + 1/2 (.19)

  week 4: ŷ = (β̂₀ + β̂₃) + 1/2 (β̂₁ + β̂₄) + 1/2 (.19)
  week 6: ŷ = (β̂₀ + β̂₃) + 5/2 (β̂₁ + β̂₄) − 1/2 (.19)
  week 6 − week 4 = 2 (β̂₁ + β̂₄) − 1 (.19)
  per-week change = (β̂₁ + β̂₄) − 1/2 (.19)

overall change is more pronounced in Period B
[Figure: observed and estimated WPSS means across time by group]
Analysis of WPSS across time - 3 random effects, non-stationary AR(1) errors

[Table: estimates, standard errors, and p-values for the overall-trend and between-orders fixed effects (linear trend β₄ ns), random-effect (co)variances, error variance, the non-stationary AR(1) parameter ρ, and log L; numeric entries not recoverable. LR test versus the ρ = 0 model: χ²₁ = 5.8, p < .05]
Analysis of WPSS across time - 3 random effects, Toeplitz errors (2 lags)

[Table: estimates, standard errors, and p-values for the fixed effects and random-effect (co)variances, error variance, lag-1 parameter ρ₁, lag-2 parameter ρ₂, and log L; numeric entries not recoverable. LR test: χ²₂ = 6.04, p < .05]
Comparison of the MRM3 model and MRM1 with TOEP(4) errors

[Table: estimates and standard errors for both models' overall-trend and between-orders fixed effects, random-effect (co)variances, error variance, lag-1 to lag-3 parameters, and log L; numeric entries not recoverable]
[Table: observed sds and correlations for y across time versus estimated sds and correlations under MRM3; numeric entries not recoverable]

[Table: observed sds and correlations for y across time versus estimated sds and correlations under MRM1 with Toep(4) errors; numeric entries not recoverable]
Table 7.2 Variance-covariance structures for Bock (1983) data

  Model  Σ_υ            r   σ²Ω_i         q   r+q   Note
   1     Int            1   σ²I           1    2
   2     Int            1   AR(1)         2    3    intercept variance set to zero
   3     Int            1   NS AR(1)      2    3
   4     Int            1   MA(1)         2    3
   5     Int            1   ARMA(1,1)     3    4    intercept variance set to zero
   6     Int            1   Toeplitz(3)   3    4
   7     Int            1   Toeplitz(4)   4    5
   8     Int            1   Toeplitz(5)   5    6
   9     Int, Lin       3   σ²I           1    4
  10     Int, Lin       3   AR(1)         2    5    intercept variance set to zero
  11     Int, Lin       3   NS AR(1)      2    5    intercept variance set to zero
  12     Int, Lin       3   MA(1)         2    5
  13     Int, Lin       3   ARMA(1,1)     3    6    linear variance set to zero
  14     Int, Lin       3   Toeplitz(3)   3    6
  15     Int, Lin       3   Toeplitz(4)   4    7    linear variance set to zero
  16     Int, Lin       3   Toeplitz(5)   5    8
  17     Int, Lin, SC   6   σ²I           1    7
  18     Int, Lin, SC   6   AR(1)         2    8    intercept variance set to zero
  19     Int, Lin, SC   6   NS AR(1)      2    8
  20     Int, Lin, SC   6   MA(1)         2    8
  21     Int, Lin, SC   6   ARMA(1,1)     3    9    intercept variance set to zero
  22     Int, Lin, SC   6   Toeplitz(3)   3    9
  23     Int, Lin, SC   6   Toeplitz(4)   4   10    unity correlation in Σ_υ
  24     Int, Lin, SC   6   Toeplitz(5)   5   11    linear variance set to zero
  25     (none)         0   UN           21   21

Int = intercept, Lin = linear, SC = slope change. [−2 log L, AIC, and BIC columns not recoverable]
Table 7.3 Fixed-Effects Estimates for Models 3 and 25

[Table: estimates, standard errors, and p-values for Constant β₀, Linear (L) β₁, Slope Change (SC) β₂, Group (G) β₃, G × L β₄, and G × SC β₅ under Models 3 and 25, with log L for each model; numeric entries not recoverable]

Note. SE = standard error.
[Model 25 estimated (conditional) variance-covariance matrix Σ̂ and Model 3 estimated (conditional) variance-covariance matrix Σ̂; numeric entries not recoverable]
[Model 25 and Model 3 estimated (conditional) correlation matrices; numeric entries not recoverable]

Model 3 is not that bad considering it only has q = 3, but Model 25 is the best choice based on the LR test and also AIC; remember that BIC tends to choose models that are too simple.
Example 7.1: SAS code for analysis of the Bock dataset. This handout lists syntax for several PROC MIXED analyses, including (a) mixed-effects models, (b) covariance pattern models, and (c) mixed-effects models with autocorrelated errors.

Example 7.2: SAS IML code and output from analysis of the Bock dataset. This handout uses IML to provide estimated means, variances, and correlations across time based on mixed-effects models with autocorrelated errors.
Maximum Likelihood Solution

The ML solution uses maximum likelihood estimation in the distribution obtained by integrating over the distribution of the random effects, that is, the marginal distribution:

  h(y_i) = ∫_υ f(y_i | υ; β, ω, σ²_ε) g(υ) dυ

where

  g(υ) = (2π)^{−r/2} |Σ_υ|^{−1/2} exp[ −½ υ' Σ_υ⁻¹ υ ]

  f(y_i | υ; β, ω, σ²_ε) = (2π)^{−n_i/2} |σ²_ε Ω_i|^{−1/2} exp[ −½ (y_i − X_i β − Z_i υ_i)' (σ²_ε Ω_i)⁻¹ (y_i − X_i β − Z_i υ_i) ]
Differentiating the log-likelihood, log L = Σ_{i=1}^N log h(y_i), yields:

  ∂ log L / ∂ vech Σ_υ = ½ G_r' (Σ_υ⁻¹ ⊗ Σ_υ⁻¹) [ Σ_{i=1}^N vec( Σ_{υ|y_i} + ῡ_i ῡ_i' ) − N vec Σ_υ ]

  ∂ log L / ∂β = σ_ε⁻² Σ_{i=1}^N X_i' Ω_i⁻¹ ẽ_i

  ∂ log L / ∂σ²_ε = ½ σ_ε⁻⁴ Σ_{i=1}^N vec'(Ω_i⁻¹) vec[ ẽ_i ẽ_i' − E_i ]

with ẽ_i = y_i − X_i β − Z_i ῡ_i and E_i = σ²_ε Ω_i − Z_i Σ_{υ|y_i} Z_i'
For the vector ω of autocorrelation parameters:

  ∂ log L / ∂ω = ½ σ_ε⁻² Σ_{i=1}^N (∂ vec Ω_i / ∂ω)' (Ω_i⁻¹ ⊗ Ω_i⁻¹) vec[ ẽ_i ẽ_i' − E_i ]

where, for AR1, the elements of the derivative are

  ∂ vec Ω_i / ∂ρ = [ k ρ^{k−1} − (k − 2) ρ^{k+1} ] / (1 − ρ²)²

where k = 0, 1, ..., n_i − 1 indexes the order of the diagonal of the matrix Ω_i (i.e., 0 = main diagonal, 1 = first off-diagonal). For MA1:

  ∂ vec Ω_i / ∂θ = { 2θ if k = 0;  −1 if k = 1;  0 otherwise }
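The AR(1) derivative formula can be sanity-checked against a central finite difference of the matrix element ω_k(ρ) = ρ^k / (1 − ρ²) at an illustrative ρ:

```python
# Finite-difference check of the AR(1) derivative of an Omega element at
# lag k, omega_k(rho) = rho^k / (1 - rho^2):
#   d omega_k / d rho = (k*rho^(k-1) - (k-2)*rho^(k+1)) / (1 - rho^2)^2
rho, h = 0.4, 1e-6

def omega(r, k):
    return r**k / (1 - r**2)

def d_omega(r, k):
    return (k * r ** (k - 1) - (k - 2) * r ** (k + 1)) / (1 - r**2) ** 2

# compare analytic and central-difference derivatives for lags 0..4
errs = [abs((omega(rho + h, k) - omega(rho - h, k)) / (2 * h) - d_omega(rho, k))
        for k in range(5)]
```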
For Toeplitz errors, ω is the s × 1 vector of all non-zero autocorrelations of the symmetric Toeplitz matrix Ω. Magnus (1988) defines the n² × s matrix K_{n,s}, where

  K_{n,s} ω = vec Ω

Notice also that the matrix K_{n,s} is a derivative matrix, since

  ∂ vec Ω / ∂ω = K_{n,s}

For subject i with n_i (< n) observations, define K_{n_i,s} as:

  K_{n_i,s} ω = vec Ω_i
EM Solution - M-step

  Σ̂_υ = (1/N) Σ_{i=1}^N ( ῡ_i ῡ_i' + Σ_{υ|y_i} )

  β̂ = [ Σ_{i=1}^N X_i' Ω_i⁻¹ X_i ]⁻¹ Σ_{i=1}^N X_i' Ω_i⁻¹ (y_i − Z_i ῡ_i)

  σ̂²_ε = [ Σ_{i=1}^N n_i ]⁻¹ Σ_{i=1}^N vec'(Ω_i⁻¹) vec[ ẽ_i ẽ_i' + Z_i Σ_{υ|y_i} Z_i' ]

  ω̂ = σ̂_ε⁻² [ Σ_{i=1}^N (∂ vec Ω_i / ∂ω)' (Ω_i⁻¹ ⊗ Ω_i⁻¹) (∂ vec Ω_i / ∂ω) ]⁻¹ Σ_{i=1}^N (∂ vec Ω_i / ∂ω)' (Ω_i⁻¹ ⊗ Ω_i⁻¹) vec[ ẽ_i ẽ_i' + Z_i Σ_{υ|y_i} Z_i' ]
Information Matrix - Fisher Scoring Solution

The information matrix is block-diagonal between β and the variance-covariance parameters:

              β      vech Σ_υ             σ²_ε              ω
  β           I(β)
  vech Σ_υ    0      I(vech Σ_υ)
  σ²_ε        0      I(σ²_ε, vech Σ_υ)    I(σ²_ε)
  ω           0      I(ω, vech Σ_υ)       I(ω, σ²_ε)        I(ω)
Reparameterization of the Variance Terms

  Σ_υ = L D L' = L (exp Π) L' =
  [ 1            ] [ e^{π₁}               ] [ 1  l₂₁  l₃₁ ]
  [ l₂₁  1       ] [        e^{π₂}        ] [    1    l₃₂ ]
  [ l₃₁  l₃₂  1  ] [               e^{π₃} ] [         1   ]

  σ²_ε = e^τ

  AR1:  ρ = ψ / √(1 + ψ²)   or   ψ = ρ / √(1 − ρ²)
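The ρ ↔ ψ map sends the constrained range |ρ| < 1 to an unconstrained ψ for estimation. A quick round-trip check, plus a finite-difference check of the chain-rule factor dρ/dψ = (1 + ψ²)^{−3/2} (illustrative ρ):

```python
import math

# Round-trip the AR(1) reparameterization rho = psi/sqrt(1 + psi^2),
# psi = rho/sqrt(1 - rho^2).
rho = 0.7
psi = rho / math.sqrt(1 - rho**2)
rho_back = psi / math.sqrt(1 + psi**2)

# chain-rule factor used in the gradient: d rho / d psi = (1 + psi^2)^(-3/2)
drho_dpsi = (1 + psi**2) ** -1.5

# central finite-difference confirmation
h = 1e-6
fd = ((psi + h) / math.sqrt(1 + (psi + h) ** 2)
      - (psi - h) / math.sqrt(1 + (psi - h) ** 2)) / (2 * h)
```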
And so,

  ∂ log L / ∂(w(Π)) = ½ D⁻¹ H_r' (L⁻¹ ⊗ L⁻¹) [ Σ_{i=1}^N vec( Σ_{υ|y_i} + ῡ_i ῡ_i' ) − N vec Σ_υ ]

  ∂ log L / ∂(ṽ(L)) = J_r' (L⁻¹ ⊗ Σ_υ⁻¹) Σ_{i=1}^N vec[ Σ_{υ|y_i} + ῡ_i ῡ_i' ]

  ∂ log L / ∂τ = σ²_ε · ∂ log L / ∂σ²_ε

  ∂ log L / ∂ψ = (1 + ψ²)^{−3/2} · ∂ log L / ∂ρ
Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple
More informationUnivariate Time Series Analysis; ARIMA Models
Econometrics 2 Fall 25 Univariate Time Series Analysis; ARIMA Models Heino Bohn Nielsen of4 Univariate Time Series Analysis We consider a single time series, y,y 2,..., y T. We want to construct simple
More informationHLM software has been one of the leading statistical packages for hierarchical
Introductory Guide to HLM With HLM 7 Software 3 G. David Garson HLM software has been one of the leading statistical packages for hierarchical linear modeling due to the pioneering work of Stephen Raudenbush
More informationApplied Statistics. J. Blanchet and J. Wadsworth. Institute of Mathematics, Analysis, and Applications EPF Lausanne
Applied Statistics J. Blanchet and J. Wadsworth Institute of Mathematics, Analysis, and Applications EPF Lausanne An MSc Course for Applied Mathematicians, Fall 2012 Outline 1 Model Comparison 2 Model
More informationStephen du Toit Mathilda du Toit Gerhard Mels Yan Cheng. LISREL for Windows: PRELIS User s Guide
Stephen du Toit Mathilda du Toit Gerhard Mels Yan Cheng LISREL for Windows: PRELIS User s Guide Table of contents INTRODUCTION... 1 GRAPHICAL USER INTERFACE... 2 The Data menu... 2 The Define Variables
More informationThe Latent Variable Growth Model In Practice. Individual Development Over Time
The Latent Variable Growth Model In Practice 37 Individual Development Over Time y i = 1 i = 2 i = 3 t = 1 t = 2 t = 3 t = 4 ε 1 ε 2 ε 3 ε 4 y 1 y 2 y 3 y 4 x η 0 η 1 (1) y ti = η 0i + η 1i x t + ε ti
More informationAUTOMATION OF ENERGY DEMAND FORECASTING. Sanzad Siddique, B.S.
AUTOMATION OF ENERGY DEMAND FORECASTING by Sanzad Siddique, B.S. A Thesis submitted to the Faculty of the Graduate School, Marquette University, in Partial Fulfillment of the Requirements for the Degree
More informationSYSTEMS OF REGRESSION EQUATIONS
SYSTEMS OF REGRESSION EQUATIONS 1. MULTIPLE EQUATIONS y nt = x nt n + u nt, n = 1,...,N, t = 1,...,T, x nt is 1 k, and n is k 1. This is a version of the standard regression model where the observations
More informationOverview of Methods for Analyzing Cluster-Correlated Data. Garrett M. Fitzmaurice
Overview of Methods for Analyzing Cluster-Correlated Data Garrett M. Fitzmaurice Laboratory for Psychiatric Biostatistics, McLean Hospital Department of Biostatistics, Harvard School of Public Health Outline
More informationAn Introduction to Modeling Longitudinal Data
An Introduction to Modeling Longitudinal Data Session I: Basic Concepts and Looking at Data Robert Weiss Department of Biostatistics UCLA School of Public Health robweiss@ucla.edu August 2010 Robert Weiss
More informationANOVA. February 12, 2015
ANOVA February 12, 2015 1 ANOVA models Last time, we discussed the use of categorical variables in multivariate regression. Often, these are encoded as indicator columns in the design matrix. In [1]: %%R
More information1. What is the critical value for this 95% confidence interval? CV = z.025 = invnorm(0.025) = 1.96
1 Final Review 2 Review 2.1 CI 1-propZint Scenario 1 A TV manufacturer claims in its warranty brochure that in the past not more than 10 percent of its TV sets needed any repair during the first two years
More informationSPSS TRAINING SESSION 3 ADVANCED TOPICS (PASW STATISTICS 17.0) Sun Li Centre for Academic Computing lsun@smu.edu.sg
SPSS TRAINING SESSION 3 ADVANCED TOPICS (PASW STATISTICS 17.0) Sun Li Centre for Academic Computing lsun@smu.edu.sg IN SPSS SESSION 2, WE HAVE LEARNT: Elementary Data Analysis Group Comparison & One-way
More informationRandom effects and nested models with SAS
Random effects and nested models with SAS /************* classical2.sas ********************* Three levels of factor A, four levels of B Both fixed Both random A fixed, B random B nested within A ***************************************************/
More informationMilk Data Analysis. 1. Objective Introduction to SAS PROC MIXED Analyzing protein milk data using STATA Refit protein milk data using PROC MIXED
1. Objective Introduction to SAS PROC MIXED Analyzing protein milk data using STATA Refit protein milk data using PROC MIXED 2. Introduction to SAS PROC MIXED The MIXED procedure provides you with flexibility
More informationProblem of Missing Data
VASA Mission of VA Statisticians Association (VASA) Promote & disseminate statistical methodological research relevant to VA studies; Facilitate communication & collaboration among VA-affiliated statisticians;
More informationTime Series Analysis
Time Series Analysis Identifying possible ARIMA models Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos
More informationAnalyzing Intervention Effects: Multilevel & Other Approaches. Simplest Intervention Design. Better Design: Have Pretest
Analyzing Intervention Effects: Multilevel & Other Approaches Joop Hox Methodology & Statistics, Utrecht Simplest Intervention Design R X Y E Random assignment Experimental + Control group Analysis: t
More informationLecture 14: GLM Estimation and Logistic Regression
Lecture 14: GLM Estimation and Logistic Regression Dipankar Bandyopadhyay, Ph.D. BMTRY 711: Analysis of Categorical Data Spring 2011 Division of Biostatistics and Epidemiology Medical University of South
More informationReview of the Methods for Handling Missing Data in. Longitudinal Data Analysis
Int. Journal of Math. Analysis, Vol. 5, 2011, no. 1, 1-13 Review of the Methods for Handling Missing Data in Longitudinal Data Analysis Michikazu Nakai and Weiming Ke Department of Mathematics and Statistics
More informationModel based clustering of longitudinal data: application to modeling disease course and gene expression trajectories
1 2 3 Model based clustering of longitudinal data: application to modeling disease course and gene expression trajectories 4 5 A. Ciampi 1, H. Campbell, A. Dyacheno, B. Rich, J. McCuser, M. G. Cole 6 7
More informationMultivariate Normal Distribution
Multivariate Normal Distribution Lecture 4 July 21, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Lecture #4-7/21/2011 Slide 1 of 41 Last Time Matrices and vectors Eigenvalues
More informationJoint models for classification and comparison of mortality in different countries.
Joint models for classification and comparison of mortality in different countries. Viani D. Biatat 1 and Iain D. Currie 1 1 Department of Actuarial Mathematics and Statistics, and the Maxwell Institute
More informationCentre for Central Banking Studies
Centre for Central Banking Studies Technical Handbook No. 4 Applied Bayesian econometrics for central bankers Andrew Blake and Haroon Mumtaz CCBS Technical Handbook No. 4 Applied Bayesian econometrics
More informationForecasting the US Dollar / Euro Exchange rate Using ARMA Models
Forecasting the US Dollar / Euro Exchange rate Using ARMA Models LIUWEI (9906360) - 1 - ABSTRACT...3 1. INTRODUCTION...4 2. DATA ANALYSIS...5 2.1 Stationary estimation...5 2.2 Dickey-Fuller Test...6 3.
More information4. Simple regression. QBUS6840 Predictive Analytics. https://www.otexts.org/fpp/4
4. Simple regression QBUS6840 Predictive Analytics https://www.otexts.org/fpp/4 Outline The simple linear model Least squares estimation Forecasting with regression Non-linear functional forms Regression
More informationLecture 2: ARMA(p,q) models (part 3)
Lecture 2: ARMA(p,q) models (part 3) Florian Pelgrin University of Lausanne, École des HEC Department of mathematics (IMEA-Nice) Sept. 2011 - Jan. 2012 Florian Pelgrin (HEC) Univariate time series Sept.
More informationi=1 In practice, the natural logarithm of the likelihood function, called the log-likelihood function and denoted by
Statistics 580 Maximum Likelihood Estimation Introduction Let y (y 1, y 2,..., y n be a vector of iid, random variables from one of a family of distributions on R n and indexed by a p-dimensional parameter
More informationExploratory Factor Analysis
Introduction Principal components: explain many variables using few new variables. Not many assumptions attached. Exploratory Factor Analysis Exploratory factor analysis: similar idea, but based on model.
More informationModule 5: Introduction to Multilevel Modelling SPSS Practicals Chris Charlton 1 Centre for Multilevel Modelling
Module 5: Introduction to Multilevel Modelling SPSS Practicals Chris Charlton 1 Centre for Multilevel Modelling Pre-requisites Modules 1-4 Contents P5.1 Comparing Groups using Multilevel Modelling... 4
More information9th Russian Summer School in Information Retrieval Big Data Analytics with R
9th Russian Summer School in Information Retrieval Big Data Analytics with R Introduction to Time Series with R A. Karakitsiou A. Migdalas Industrial Logistics, ETS Institute Luleå University of Technology
More informationChapter 7: Simple linear regression Learning Objectives
Chapter 7: Simple linear regression Learning Objectives Reading: Section 7.1 of OpenIntro Statistics Video: Correlation vs. causation, YouTube (2:19) Video: Intro to Linear Regression, YouTube (5:18) -
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 6 Three Approaches to Classification Construct
More informationIntroduction to Hierarchical Linear Modeling with R
Introduction to Hierarchical Linear Modeling with R 5 10 15 20 25 5 10 15 20 25 13 14 15 16 40 30 20 10 0 40 30 20 10 9 10 11 12-10 SCIENCE 0-10 5 6 7 8 40 30 20 10 0-10 40 1 2 3 4 30 20 10 0-10 5 10 15
More informationForecasting of Paddy Production in Sri Lanka: A Time Series Analysis using ARIMA Model
Tropical Agricultural Research Vol. 24 (): 2-3 (22) Forecasting of Paddy Production in Sri Lanka: A Time Series Analysis using ARIMA Model V. Sivapathasundaram * and C. Bogahawatte Postgraduate Institute
More informationCS229 Lecture notes. Andrew Ng
CS229 Lecture notes Andrew Ng Part X Factor analysis Whenwehavedatax (i) R n thatcomesfromamixtureofseveral Gaussians, the EM algorithm can be applied to fit a mixture model. In this setting, we usually
More informationIntroduction to Data Analysis in Hierarchical Linear Models
Introduction to Data Analysis in Hierarchical Linear Models April 20, 2007 Noah Shamosh & Frank Farach Social Sciences StatLab Yale University Scope & Prerequisites Strong applied emphasis Focus on HLM
More informationIntroduction to Fixed Effects Methods
Introduction to Fixed Effects Methods 1 1.1 The Promise of Fixed Effects for Nonexperimental Research... 1 1.2 The Paired-Comparisons t-test as a Fixed Effects Method... 2 1.3 Costs and Benefits of Fixed
More informationITSM-R Reference Manual
ITSM-R Reference Manual George Weigt June 5, 2015 1 Contents 1 Introduction 3 1.1 Time series analysis in a nutshell............................... 3 1.2 White Noise Variance.....................................
More informationElectronic Thesis and Dissertations UCLA
Electronic Thesis and Dissertations UCLA Peer Reviewed Title: A Multilevel Longitudinal Analysis of Teaching Effectiveness Across Five Years Author: Wang, Kairong Acceptance Date: 2013 Series: UCLA Electronic
More information**BEGINNING OF EXAMINATION** The annual number of claims for an insured has probability function: , 0 < q < 1.
**BEGINNING OF EXAMINATION** 1. You are given: (i) The annual number of claims for an insured has probability function: 3 p x q q x x ( ) = ( 1 ) 3 x, x = 0,1,, 3 (ii) The prior density is π ( q) = q,
More informationLecture 6: Poisson regression
Lecture 6: Poisson regression Claudia Czado TU München c (Claudia Czado, TU Munich) ZFS/IMS Göttingen 2004 0 Overview Introduction EDA for Poisson regression Estimation and testing in Poisson regression
More informationNew Work Item for ISO 3534-5 Predictive Analytics (Initial Notes and Thoughts) Introduction
Introduction New Work Item for ISO 3534-5 Predictive Analytics (Initial Notes and Thoughts) Predictive analytics encompasses the body of statistical knowledge supporting the analysis of massive data sets.
More informationAPPLIED MISSING DATA ANALYSIS
APPLIED MISSING DATA ANALYSIS Craig K. Enders Series Editor's Note by Todd D. little THE GUILFORD PRESS New York London Contents 1 An Introduction to Missing Data 1 1.1 Introduction 1 1.2 Chapter Overview
More information