Improving the Composite Indices of Business Cycles by Minimum Distance Factor Analysis


Yasutomo Murasawa
Institute of Economic Research, Kyoto University, Kyoto, Japan

First draft: April 1999
This version: February 2000

Abstract

Minimum distance factor analysis (MD-FA) is applicable to stationary ergodic sequences. The estimated common factors, or factor scores, are weighted averages of the current observable variables, i.e., more informative variables receive larger weights. MD-FA of the US coincident business cycle indicators (BCIs) gives a new US coincident composite index (CI) of business cycles. The weights on the BCIs for the new CI are similar to those for the Stock-Watson Experimental Coincident Index (XCI).

Keywords: Covariance structure; Time series; Generalized method of moments; Stock-Watson index

1 Introduction

The traditional composite indices (CIs) of business cycles, currently published by the Conference Board in the US, are descriptive statistics. No statistical model lies, at least explicitly, behind the method, so the statistical meaning of the resulting indices is unclear. Assuming a linear one-factor structure for the coincident business cycle indicators (BCIs), Stock and Watson (1989, 1991) obtain their Experimental Coincident Index (XCI). The XCI is an estimate of the realization of the common business cycle factor. Their assumption that the factors are linear autoregressive (AR) processes with normal errors, however, is inconsistent with the observed asymmetry of business cycles between expansions and recessions. Diebold and Rudebusch (1996) suggest combining the factor model of Stock and Watson (1989, 1991) and the regime-switching model of Hamilton (1989). They assume that the mean of the common factor is a two-state first-order Markov chain. Kim and Yoo (1995), Chauvet (1998), and Kim and Nelson (1998) estimate models of this type in different ways and propose alternative coincident indices. Although their models are consistent with the asymmetry of business cycles, they may still be misspecified. In addition, estimation of the associated nonlinear state-space models is cumbersome.

In this paper, we assume a linear one-factor structure for the coincident BCIs and apply minimum distance factor analysis (MD-FA). We do not assume a parametric model for the dynamics of the factors, but only require the observable variables to be stationary ergodic and square integrable. The advantages of this approach are that (i) the obtained index is an estimate of the realization of the common business cycle factor, (ii) the model is consistent with the asymmetry of business cycles, (iii) we do not have to worry about misspecification of the dynamics of the factors, and (iv) estimation is easy.

The obtained index is essentially a weighted average of the current standardized growth rates of the BCIs, where more informative BCIs receive larger weights, while the traditional CI is essentially the simple average. For the US coincident BCIs, our estimation results show that employees on nonagricultural payrolls (EMP) and the index of industrial production (IIP) are more informative, i.e., more highly correlated with the common business cycle factor, than personal income less transfer payments (INC) and manufacturing and trade sales (SLS), implying that it is more efficient to weight them accordingly. The weights for the new CI are similar to those for the XCI. In a sense, the new CI improves the traditional CI towards the XCI, although the new CI imposes fewer assumptions on the factors.

The plan of the paper is as follows. Section 2 defines a factor model and derives the implied autocovariance structure. Section 3 discusses identification of the model parameters. Section 4 discusses estimation of the model parameters and of the realization of the common factors. Section 5 applies MD-FA to the US coincident BCIs and proposes a new CI. Section 6 concludes.

2 Factor Model

Let {x_t}_{t=1}^∞ be an N×1 stationary sequence in L_2 with mean μ_x and autocovariance matrix function Γ_xx(·). We observe {x_t}_{t=1}^T. Assume a K-factor structure for {x_t}_{t=1}^∞, where K < N, such that ∀t ∈ Z,

    x_t = μ_x + B f_t + u_t,    (1)

where B ∈ R^{N×K} is an unknown factor loading matrix, {f_t}_{t=1}^∞ is an unobservable K×1 stationary sequence of common factor vectors with mean zero and autocovariance matrix function Γ_ff(·), and {u_t}_{t=1}^∞ is an unobservable N×1 stationary sequence of specific factor vectors with mean zero and autocovariance matrix function Γ_uu(·). Consider estimation of B and of the realization of {f_t}_{t=1}^T.

In FA, for identification of B, we usually assume that (i) rank(B) = K, (ii)

Γ_ff(0) is positive definite (pd), and (iii) u_{1,t}, ..., u_{N,t} are uncorrelated with each other and with f_t at all leads and lags. Then ∀s ∈ Z,

    Γ_xx(s) = B Γ_ff(s) B' + Γ_uu(s),    (2)

where Γ_uu(s) is diagonal. Assume that we use Γ_xx(0), ..., Γ_xx(S), where S < T, for estimation of B. Let

    x̄_T := (1/(T−S)) Σ_{t=S+1}^T x_t,
    Γ̂_xx,T(s) := (1/(T−S)) Σ_{t=S+1}^T (x_t − x̄_T)(x_{t−s} − x̄_T)',    s = 0, ..., S.

Under certain conditions, which we clarify later, ∀s ∈ {0, ..., S}, Γ̂_xx,T(s) is consistent for Γ_xx(s) as T → ∞. Given this, the first issue is identification of B from Γ_xx(0), ..., Γ_xx(S). Since (2) still has rotational indeterminacy, we need further restrictions on the model.

3 Identification

3.1 Principal Factor Model

As additional identification restrictions, we often assume that (i) Γ_ff(0) = I_K, (ii) Γ_uu(0) is given and pd, and (iii) B' Γ_uu(0)^{−1} B is diagonal, i.e., the columns of Γ_uu(0)^{−1/2} B are orthogonal. These restrictions relate FA to principal component analysis (PCA).

Consider identification of B. Since Γ_uu(0) is given and pd, rearranging (2),

    Γ_uu(0)^{−1/2} Γ_xx(0) Γ_uu(0)^{−1/2} − I_N = Γ_uu(0)^{−1/2} B B' Γ_uu(0)^{−1/2}.    (3)

Since rank(Γ_uu(0)) = N and rank(B) = K < N,

    rank(Γ_uu(0)^{−1/2} Γ_xx(0) Γ_uu(0)^{−1/2} − I_N) = rank(Γ_uu(0)^{−1/2} B B' Γ_uu(0)^{−1/2}) = K.

Let

    Λ := diag(λ_1, ..., λ_K),    W := [w_1, ..., w_K],

where λ_k is the kth largest eigenvalue of the left-hand side of (3) and w_k is the associated normalized eigenvector. By the eigenvalue decomposition,

    Γ_uu(0)^{−1/2} Γ_xx(0) Γ_uu(0)^{−1/2} − I_N = W Λ W'.    (4)

Since the columns of Γ_uu(0)^{−1/2} B are orthogonal, comparing (3) and (4),

    Γ_uu(0)^{−1/2} B = W Λ^{1/2},    or    B = Γ_uu(0)^{1/2} W Λ^{1/2}.

Next, consider identification of the realization of {f_t}_{t=1}^T. Notice that, given B, (1) is a generalized linear regression model for each t. Given μ_x, the GLS estimator of f_t is

    f̂_t = (B' Γ_uu(0)^{−1} B)^{−1} B' Γ_uu(0)^{−1} (x_t − μ_x)
        = (Λ^{1/2} W' W Λ^{1/2})^{−1} Λ^{1/2} W' Γ_uu(0)^{−1/2} (x_t − μ_x)
        = Λ^{−1/2} W' Γ_uu(0)^{−1/2} (x_t − μ_x).

Notice that if we normalize x_t by Γ_uu(0)^{−1/2}, then FA and PCA are equivalent up to scale. Let x̃_t := Γ_uu(0)^{−1/2}(x_t − μ_x), t = 1, ..., T, and Γ_x̃x̃(0) := var(x̃_1). Let λ_k be the kth largest eigenvalue of Γ_x̃x̃(0) − I_N and w_k be the associated normalized eigenvector, k = 1, ..., K. The kth principal component of x̃_t is w_k' x̃_t. The GLS estimator, which is now equivalent to the OLS estimator, of the realization of the kth common factor of x_t is w_k' x̃_t / λ_k^{1/2}.

Unfortunately, this approach has some problems. First, Γ_uu(0) is usually unknown in practice. Second, since the restriction on B is not explicit, it is difficult to derive the asymptotic distribution of an estimator of B. Third, it does not use Γ_xx(1), ..., Γ_xx(S), which may contain some information.
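The principal factor construction above is easy to check numerically. Here is a minimal sketch (ours, not part of the paper; the data-generating values are made up for illustration), treating Γ_uu(0) as known:

```python
import numpy as np

# Sketch of the principal factor solution of Section 3.1, with
# Gamma_uu(0) treated as known (as the text notes, it rarely is).
rng = np.random.default_rng(0)
N, K, T = 4, 1, 5000

B_true = np.array([[0.9], [0.5], [0.8], [0.4]])   # illustrative loadings
Guu0 = np.diag([0.2, 0.75, 0.35, 0.85])           # specific-factor variances

f = rng.standard_normal((T, K))                   # Gamma_ff(0) = I_K
u = rng.standard_normal((T, N)) * np.sqrt(np.diag(Guu0))
x = f @ B_true.T + u

Gxx0 = np.cov(x, rowvar=False)
Gu_inv_half = np.diag(np.diag(Guu0) ** -0.5)

# eigendecomposition of Guu0^{-1/2} Gxx(0) Guu0^{-1/2} - I_N, eqs. (3)-(4)
M = Gu_inv_half @ Gxx0 @ Gu_inv_half - np.eye(N)
lam, W = np.linalg.eigh(M)                        # ascending order
lam, W = lam[::-1][:K], W[:, ::-1][:, :K]         # keep the K largest
W = W * np.sign(W[0, 0])                          # resolve sign indeterminacy

# B = Guu0^{1/2} W Lambda^{1/2}
B_hat = np.diag(np.diag(Guu0) ** 0.5) @ W @ np.diag(lam ** 0.5)

# GLS factor scores: f_hat_t = Lambda^{-1/2} W' Guu0^{-1/2} (x_t - mu_x)
f_hat = (x - x.mean(axis=0)) @ Gu_inv_half @ W @ np.diag(lam ** -0.5)
```

With this signal-to-noise ratio, the recovered loadings track the true ones closely and the factor scores correlate highly with the simulated factor.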

3.2 Multivariate Errors-in-Variables Model

Alternatively, we can simply assume that B = [I_K, B_2']'. This restriction relates FA to multivariate errors-in-variables models (EVM). Let ∀t ∈ Z,

    f*_t := (x_{1,t} − μ_{x,1}, ..., x_{K,t} − μ_{x,K})',    v_t := (u_{1,t}, ..., u_{K,t})'.

Then we can write (1) as

    f*_t = f_t + v_t,
    x_{K+1,t} − μ_{x,K+1} = β_{K+1}' f_t + u_{K+1,t},
    ...
    x_{N,t} − μ_{x,N} = β_N' f_t + u_{N,t},

where β_i' is the ith row of B. Eliminating f_t, we have ∀t ∈ Z,

    x_{i,t} − μ_{x,i} = β_i' f*_t − β_i' v_t + u_{i,t},    i = K+1, ..., N.

Consider the equation for x_{K+1,t}. Suppose that ∀s ∈ {1, ..., S}, Γ_ff(s) ≠ 0. Then ∀i ∈ {K+2, ..., N}, ∀s ∈ {0, ..., S}, x_{i,t−s} is correlated with f*_t through f_t and f_{t−s}, but uncorrelated with −β_{K+1}' v_t + u_{K+1,t} by our assumption. So we can use these variables as instrumental variables for estimation of β_{K+1}. By the order condition, we need (N − K − 1)(S + 1) ≥ K, or K ≤ (N − 1)(S + 1)/(S + 2), to identify β_{K+1}. The same argument applies to the other equations.

Notice that serial correlation in {f_t}_{t=1}^∞ helps identification of B. Without serial correlation (S = 0), the order condition requires that K ≤ (N − 1)/2. With sufficient serial correlation (S → ∞), the order condition requires only that K < N.
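The order condition just derived can be tabulated in a few lines; a small sketch (the function name is ours, not the paper's):

```python
# Order condition of Section 3.2: beta_{K+1} is identified when
# (N - K - 1)(S + 1) >= K, i.e. K <= (N - 1)(S + 1)/(S + 2).
def max_identifiable_K(N, S):
    """Largest number of common factors K satisfying the order condition
    for N observable variables and autocovariances up to lag S."""
    return (N - 1) * (S + 1) // (S + 2)

# With N = 4 indicators and no serial correlation (S = 0), at most one
# common factor is identified, since K <= 3/2; adding one lag (S = 1)
# already allows K <= 2.
k_max = max_identifiable_K(4, 0)
```

As S grows, the bound approaches N − 1, matching the limiting condition K < N in the text.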

4 Estimation

4.1 Autocovariance Matrices

4.1.1 Consistency

Let

    γ_0 := (vech(Γ_xx(0))', vec(Γ_xx(1))', ..., vec(Γ_xx(S))')',
    γ̂_T := (vech(Γ̂_xx,T(0))', vec(Γ̂_xx,T(1))', ..., vec(Γ̂_xx,T(S))')'.

We can write, for s = 0, ..., S,

    Γ̂_xx,T(s) = (1/(T−S)) Σ_{t=S+1}^T (x_t − x̄_T)(x_{t−s} − x̄_T)'
              = (1/(T−S)) Σ_{t=S+1}^T [(x_t − μ_x) − (x̄_T − μ_x)][(x_{t−s} − μ_x) − (x̄_T − μ_x)]'
              = (1/(T−S)) Σ_{t=S+1}^T (x_t − μ_x)(x_{t−s} − μ_x)'
                − (1/(T−S)) Σ_{t=S+1}^T (x_t − μ_x) (x̄_T − μ_x)'
                − (x̄_T − μ_x) (1/(T−S)) Σ_{t=S+1}^T (x_{t−s} − μ_x)'
                + (x̄_T − μ_x)(x̄_T − μ_x)'.

Let ∀t ∈ Z,

    z_t := (vech((x_t − μ_x)(x_t − μ_x)')', vec((x_t − μ_x)(x_{t−1} − μ_x)')', ..., vec((x_t − μ_x)(x_{t−S} − μ_x)')')'.

If {x_t}_{t=1}^∞ is stationary ergodic, then {z_t}_{t=1}^∞ is stationary ergodic; see Durrett (1996, pp. 336, 340). Note that E(z_1) = γ_0.

Theorem 1. Suppose that

1. {x_t}_{t=1}^∞ is stationary ergodic,
2. x_1 ∈ L_2.

Then lim_{T→∞} γ̂_T = γ_0 a.s.

Proof. Applying the ergodic theorem to {x_t}_{t=1}^∞, we can write

    γ̂_T = (1/(T−S)) Σ_{t=S+1}^T z_t + o_{a.s.}(1).

The second term is asymptotically irrelevant. The result follows by applying the ergodic theorem to {z_t}_{t=1}^∞.

4.1.2 Asymptotic Distribution

Let (Ω, F, P) be the probability space under consideration. Let {F_t}_{t=−∞}^∞ be a filtration on (Ω, F). We use the notion of a mixingale to simplify our discussion; see Davidson (1994, ch. 16).

Definition 1. We say that {X_t}_{t=1}^∞ is an L_p-mixingale with respect to {F_t}_{t=−∞}^∞ if there exist {c_t}_{t=1}^∞, {ψ_s}_{s=0}^∞ ⊂ R_+, where lim_{s→∞} ψ_s = 0, such that ∀t ∈ Z, ∀s ≥ 0,

    ‖E(X_t | F_{t−s})‖_p ≤ c_t ψ_s,    (5)
    ‖X_t − E(X_t | F_{t+s})‖_p ≤ c_t ψ_{s+1}.    (6)

The second inequality holds trivially if {F_t}_{t=−∞}^∞ is adapted to {X_t}_{t=1}^∞. We say that a vector or matrix random sequence is an L_p-mixingale if each element is an L_p-mixingale. Note that if {X_t}_{t=1}^∞ is a mixingale, then ∀t ∈ Z, E(X_t) = 0.

The rate of convergence of the mixingale coefficients measures the degree of serial dependence. We say that {a_s}_{s=0}^∞ ⊂ R_+ is of size −q if a_s = O(s^{−r}) for some r > q; see Davidson (1994). If a sequence is of size −q, then ∀q' < q, it is also of size −q'. We say that an L_p-mixingale is of size −q if {ψ_s}_{s=0}^∞ is of size −q.

With the notion of a mixingale, we can state Gordin's central limit theorem (CLT) for stationary sequences in Durrett (1996) as follows.

Theorem 2 (Gordin's CLT). Suppose that

1. {X_t}_{t=1}^∞ is a stationary ergodic L_2-mixingale with respect to {F_t}_{t=−∞}^∞ of size −1,
2. {F_t}_{t=−∞}^∞ is adapted to {X_t}_{t=1}^∞.

Then

    s² := lim_{T→∞} var((1/√T) Σ_{t=1}^T X_t) < ∞,

and

    (1/√T) Σ_{t=1}^T X_t →d N(0, s²).

If E(X_1) ≠ 0, then the CLT applies to {X_t − E(X_t)}_{t=1}^∞ even if E(X_1) is unknown, because {F_t}_{t=−∞}^∞ is adapted to {X_t − E(X_t)}_{t=1}^∞. The following result is now immediate.

Theorem 3. Suppose that

1. {x_t}_{t=1}^∞ is stationary ergodic,
2. {F_t}_{t=−∞}^∞ is adapted to {x_t}_{t=1}^∞,
3. {x_t − μ_x}_{t=1}^∞ and {z_t − γ_0}_{t=1}^∞ are L_2-mixingales with respect to {F_t}_{t=−∞}^∞ of size −1.

Then

    Σ := lim_{T→∞} var((1/√(T−S)) Σ_{t=S+1}^T z_t) < ∞,

and

    √(T−S) (γ̂_T − γ_0) →d N(0, Σ).

Proof. Applying Gordin's CLT to {x_t − μ_x}_{t=1}^∞, we can write

    √(T−S) (γ̂_T − γ_0) = (1/√(T−S)) Σ_{t=S+1}^T (z_t − γ_0) + o_p(1).

By asymptotic equivalence, the second term is asymptotically irrelevant. Notice that ∀a ∈ R^{N(N+1)/2+SN²}, (i) {a'z_t}_{t=1}^∞ is stationary ergodic, (ii) {F_t}_{t=−∞}^∞ is adapted to {a'(z_t − γ_0)}_{t=1}^∞, and (iii) {a'(z_t − γ_0)}_{t=1}^∞ is an L_2-mixingale with respect to {F_t}_{t=−∞}^∞ of size −1. The result follows by Gordin's CLT and the Cramér-Wold device.
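To make γ̂_T concrete, here is a minimal Python sketch (ours, not the paper's GAUSS code) that stacks vech of the lag-0 sample autocovariance and vec of the higher-lag ones, using the divisor T − S as in the text:

```python
import numpy as np

def gamma_hat(x, S):
    """Stack vech(Gxx_hat(0)) and vec(Gxx_hat(1)), ..., vec(Gxx_hat(S))
    into gamma_hat_T, averaging over t = S+1, ..., T."""
    T, N = x.shape
    xbar = x[S:].mean(axis=0)                     # mean over t = S+1..T
    d = x - xbar
    # Gxx_hat(s) = (1/(T-S)) sum_{t=S+1}^T d_t d_{t-s}'
    G = [d[S:].T @ d[S - s:T - s] / (T - S) for s in range(S + 1)]
    idx = np.tril_indices(N)                      # vech: lower triangle
    parts = [G[0][idx]] + [G[s].ravel(order="F") for s in range(1, S + 1)]
    return np.concatenate(parts)

# toy example: trivariate white noise, so the population gamma_0 stacks
# vech(I_3) and two zero matrices
rng = np.random.default_rng(1)
x = rng.standard_normal((400, 3))
g = gamma_hat(x, S=2)                             # length 6 + 2*9 = 24
```

The resulting vector has length N(N+1)/2 + S·N², matching the dimension used for a in the Cramér-Wold step above.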

Since a transformation of a mixingale is not a mixingale in general, we need separate mixingale conditions on {x_t − μ_x}_{t=1}^∞ and {z_t − γ_0}_{t=1}^∞. Using the notion of mixing, we can give sufficient conditions on {x_t}_{t=1}^∞ for the mixingale conditions to hold.

Definition 2. The sth-order α-mixing and φ-mixing coefficients of {X_t}_{t=1}^∞ are

    α_s := sup_{t∈Z} sup_{A∈F^t_{−∞}, B∈F^∞_{t+s}} |P(A ∩ B) − P(A)P(B)|,
    φ_s := sup_{t∈Z} sup_{A∈F^t_{−∞}, B∈F^∞_{t+s}; P(A)>0} |P(B | A) − P(B)|,

where ∀t ∈ Z, ∀s ≥ 0, F^{t+s}_t := σ(X_t, ..., X_{t+s}).

Definition 3. We say that {X_t}_{t=1}^∞ is α-mixing [φ-mixing] if lim_{s→∞} α_s = 0 [lim_{s→∞} φ_s = 0].

Since P(A ∩ B) = P(B | A)P(A), φ-mixing implies α-mixing. As before, we say that a mixing sequence is of size −q if its mixing coefficient sequence is of size −q. Mixing inequalities give the following relations between mixing sequences and mixingales; see Davidson (1994, ch. 14).

Theorem 4. Suppose that

1. {X_t}_{t=1}^∞ is stationary,
2. X_1 ∈ L_p, where p > 1,
3. E(X_1) = 0,
4. {F_t}_{t=−∞}^∞ is adapted to {X_t}_{t=1}^∞.

Then

1. if {X_t}_{t=1}^∞ is α-mixing of size −q, where q > 0, then ∀p' ∈ [1, p), {X_t}_{t=1}^∞ is an L_{p'}-mixingale with respect to {F_t}_{t=−∞}^∞ of size −q(1/p' − 1/p);
2. if {X_t}_{t=1}^∞ is φ-mixing of size −q, where q > 0, then ∀p' ∈ [1, p], {X_t}_{t=1}^∞ is an L_{p'}-mixingale with respect to {F_t}_{t=−∞}^∞ of size −q(1 − 1/p).

Proof. Since {F_t}_{t=−∞}^∞ is adapted to {X_t}_{t=1}^∞, (6) holds trivially. It remains to show that (5) also holds.

1. By Theorem 14.2 in Davidson (1994), ∀p' ∈ [1, p), ∀t ∈ Z, ∀s ≥ 0,

    ‖E(X_t | F_{t−s})‖_{p'} ≤ 2(2^{1/p'} + 1) ‖X_1‖_p α_s^{1/p'−1/p} = c ψ_s,

where c := 2(2^{1/p'} + 1)‖X_1‖_p and ψ_s := α_s^{1/p'−1/p}. Since {α_s}_{s=0}^∞ is of size −q, {ψ_s}_{s=0}^∞ is of size −q(1/p' − 1/p).

2. By Theorem 14.4 in Davidson (1994), ∀p' ∈ [1, p], ∀t ∈ Z, ∀s ≥ 0,

    ‖E(X_t | F_{t−s})‖_{p'} ≤ 2 ‖X_1‖_p φ_s^{1−1/p} = c ψ_s,

where c := 2‖X_1‖_p and ψ_s := φ_s^{1−1/p}. Since {φ_s}_{s=0}^∞ is of size −q, {ψ_s}_{s=0}^∞ is of size −q(1 − 1/p).

The next corollary gives sufficient conditions on {x_t}_{t=1}^∞ for {z_t − γ_0}_{t=1}^∞ to be an L_2-mixingale with respect to {F_t}_{t=−∞}^∞ of size −1.

Corollary 1. Suppose that

1. {x_t}_{t=1}^∞ is stationary,
2. x_1 ∈ L_p, where p > 2,
3. {F_t}_{t=−∞}^∞ is adapted to {x_t}_{t=1}^∞.

Then

1. if {x_t}_{t=1}^∞ is α-mixing of size −q, where q > 0, then ∀p' ∈ [1, p/2), {z_t − γ_0}_{t=1}^∞ is an L_{p'}-mixingale with respect to {F_t}_{t=−∞}^∞ of size −q(1/p' − 2/p);

2. if {x_t}_{t=1}^∞ is φ-mixing of size −q, where q > 0, then ∀p' ∈ [1, p/2], {z_t − γ_0}_{t=1}^∞ is an L_{p'}-mixingale with respect to {F_t}_{t=−∞}^∞ of size −q(1 − 2/p).

Proof. By Theorem 14.1 in Davidson (1994), if {x_t}_{t=1}^∞ is α-mixing [φ-mixing] of size −q, then {z_t − γ_0}_{t=1}^∞ is α-mixing [φ-mixing] of size −q. We have z_1 − γ_0 ∈ L_{p/2}, where p > 2, and E(z_1 − γ_0) = 0. The result follows by applying the previous theorem to {z_t − γ_0}_{t=1}^∞.

So if (i) x_1 ∈ L_p, where p > 4, and {x_t}_{t=1}^∞ is α-mixing of size −2p/(p − 4), or (ii) x_1 ∈ L_p, where p ≥ 4, and {x_t}_{t=1}^∞ is φ-mixing of size −p/(p − 2), then {x_t − μ_x}_{t=1}^∞ and {z_t − γ_0}_{t=1}^∞ are L_2-mixingales with respect to {F_t}_{t=−∞}^∞ of size −1.

4.1.3 Covariance Matrix Estimation

Consider estimation of Σ. If we observed {z_t}_{t=S+1}^T, then we could apply various heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators. Let Γ_zz(·) be the autocovariance matrix function of {z_t}_{t=1}^∞. Then

    Σ = lim_{T→∞} var((1/√(T−S)) Σ_{t=S+1}^T z_t)
      = lim_{T→∞} Σ_{s=−(T−S−1)}^{T−S−1} (1 − |s|/(T−S)) Γ_zz(s)
      = Σ_{s=−∞}^∞ Γ_zz(s)
      = Γ_zz(0) + Σ_{s=1}^∞ (Γ_zz(s) + Γ_zz(s)').

Let

    z̄_T := (1/(T−S)) Σ_{t=S+1}^T z_t,
    Γ̂_zz,T(s) := (1/(T−S)) Σ_{t=S+s+1}^T (z_t − z̄_T)(z_{t−s} − z̄_T)',    s = 0, 1, ....

A kernel estimator of Σ is

    Σ̂_T := Γ̂_zz,T(0) + Σ_{s=1}^{l(T)} k(s) (Γ̂_zz,T(s) + Γ̂_zz,T(s)'),

where k(·) is a kernel function and l(·) is a bandwidth function. Since we do not observe {z_t}_{t=S+1}^T, this estimator is infeasible. Let, for t = S+1, ..., T,

    z_{T,t} := (vech((x_t − x̄_T)(x_t − x̄_T)')', vec((x_t − x̄_T)(x_{t−1} − x̄_T)')', ..., vec((x_t − x̄_T)(x_{t−S} − x̄_T)')')',
    z̄_T := (1/(T−S)) Σ_{t=S+1}^T z_{T,t},
    Γ̂_zz,T(s) := (1/(T−S)) Σ_{t=S+s+1}^T (z_{T,t} − z̄_T)(z_{T,t−s} − z̄_T)',    s = 0, 1, ....

Then

    z_{T,t} = z_t + o_p(1),
    z̄_T = (1/(T−S)) Σ_{t=S+1}^T z_t + o_p(1) = E(z_1) + o_p(1),
    Γ̂_zz,T(s) = (1/(T−S)) Σ_{t=S+s+1}^T (z_t − E(z_t))(z_{t−s} − E(z_{t−s}))' + o_p(1) = Γ_zz(s) + o_p(1).

So we can apply a kernel estimator to {z_{T,t}}_{t=S+1}^T to estimate Σ.

4.2 Parameters

4.2.1 Minimum Distance Estimator

Assume that B = [I_K, B_2']'. Then we can write (2) as ∀s ∈ Z,

    Γ_xx(s) = [I_K, B_2']' Γ_ff(s) [I_K, B_2'] + Γ_uu(s).    (7)

14 Let vec(b 2 vech(γ ff (0 diag(γ uu (0 vec(γ ff ( θ 0 := diag(γ uu ( vec(γ ff (S diag(γ uu (S Let Θ R (N KK+K(K+/2+N+S(K2 +N be the parameter space and g : Θ R N(N+/2+SN 2 be such that θ Θ, ([ ] IK vech Γ B ff (0 [ I K B 2 ] + Γ uu (0 2 ([ ] IK vec Γ g(θ := B ff ( [ I K B 2 ] + Γ uu ( 2 ([ ] IK vec Γ B ff (S [ I K B 2 ] + Γ uu (S 2 Then we have γ 0 = g(θ 0 (8 An MD estimator of θ 0 is ˆθ T := arg min θ Θ (ˆγ T g(θ W T (ˆγ T g(θ, (9 where W T is a weighting matrix that is finite, pd, and plim T W T = W, where W is fixed, finite, and pd Different weighting matrices give different MD estimators of θ 0 If W T is the identity matrix, then we have the equally-weighted MD (EMD estimator If W T is such that W = Σ, then we have an optimal MD (OMD estimator, ie, its asymptotic variance-covariance matrix is the smallest among the MD estimators 422 Consistency Let {Q T : Ω Θ R} T =S+ be a sequence of random criterion functions such that T S +, θ Θ, Q T (θ := (ˆγ T g(θ W T (ˆγ T g(θ 4

By definition, ∀T ≥ S+1,

    θ̂_T = arg min_{θ∈Θ} Q_T(θ).

Let Q : Θ → R be such that ∀θ ∈ Θ,

    Q(θ) := (γ_0 − g(θ))' W (γ_0 − g(θ)).

Since γ_0 = g(θ_0) and W is pd, given the identification restrictions,

    θ_0 = arg min_{θ∈Θ} Q(θ).

To prove consistency of θ̂_T, it suffices to check conditions (A)-(C) for Theorem 4.1.1 in Amemiya (1985). The only nontrivial part is condition (C), where we have to show that Q_T(·) converges in probability to Q(·) uniformly on Θ. The next lemma gives sufficient conditions for this to hold.

Lemma 1. Suppose that

1. Θ is compact,
2. plim_{T→∞} γ̂_T = γ_0.

Then

    plim_{T→∞} sup_{θ∈Θ} |Q_T(θ) − Q(θ)| = 0.

Proof. By the triangle inequality, ∀T ≥ S+1, ∀θ ∈ Θ,

    |Q_T(θ) − Q(θ)|
    = |(γ̂_T − g(θ))' W_T (γ̂_T − g(θ)) − (γ_0 − g(θ))' W (γ_0 − g(θ))|
    = |[(γ̂_T − γ_0) + (γ_0 − g(θ))]' (W_T − W) [(γ̂_T − γ_0) + (γ_0 − g(θ))]
       + [(γ̂_T − γ_0) + (γ_0 − g(θ))]' W [(γ̂_T − γ_0) + (γ_0 − g(θ))]
       − (γ_0 − g(θ))' W (γ_0 − g(θ))|
    = |(γ̂_T − γ_0)' (W_T − W) (γ̂_T − γ_0)
       + 2 (γ̂_T − γ_0)' (W_T − W) (γ_0 − g(θ))
       + (γ_0 − g(θ))' (W_T − W) (γ_0 − g(θ))
       + (γ̂_T − γ_0)' W (γ̂_T − γ_0)
       + 2 (γ̂_T − γ_0)' W (γ_0 − g(θ))|
    ≤ ‖γ̂_T − γ_0‖² ‖W_T − W‖
       + 2 ‖γ̂_T − γ_0‖ ‖W_T − W‖ ‖γ_0 − g(θ)‖
       + ‖γ_0 − g(θ)‖² ‖W_T − W‖
       + ‖γ̂_T − γ_0‖² ‖W‖
       + 2 ‖γ̂_T − γ_0‖ ‖W‖ ‖γ_0 − g(θ)‖.

Since Θ is compact and g is continuous, there exists M < ∞ such that sup_{θ∈Θ} ‖γ_0 − g(θ)‖ ≤ M. Taking the supremum on both sides, ∀T ≥ S+1,

    sup_{θ∈Θ} |Q_T(θ) − Q(θ)| ≤ ‖γ̂_T − γ_0‖² ‖W_T − W‖ + 2 ‖γ̂_T − γ_0‖ ‖W_T − W‖ M + M² ‖W_T − W‖
                                + ‖γ̂_T − γ_0‖² ‖W‖ + 2 ‖γ̂_T − γ_0‖ ‖W‖ M.

The result follows by taking the probability limit on both sides and applying Slutsky's theorem.

Consistency of θ̂_T is now immediate.

Theorem 5. Suppose that

1. Θ is compact,
2. plim_{T→∞} γ̂_T = γ_0.

Then

    plim_{T→∞} θ̂_T = θ_0.

Proof. Verify the conditions for Theorem 4.1.1 in Amemiya (1985).

4.2.3 Asymptotic Distribution

To prove asymptotic normality of θ̂_T, redefine Q_T(·) and Q(·) as ∀θ ∈ Θ,

    Q_T(θ) := (1/2) (γ̂_T − g(θ))' W_T (γ̂_T − g(θ)),
    Q(θ) := (1/2) (γ_0 − g(θ))' W (γ_0 − g(θ)).

By differentiation, ∀θ ∈ Θ,

    ∂Q_T(θ)/∂θ_i = −(∂g(θ)/∂θ_i)' W_T (γ̂_T − g(θ)),
    ∂Q(θ)/∂θ_i = −(∂g(θ)/∂θ_i)' W (γ_0 − g(θ)),

i = 1, ..., (N−K)K + K(K+1)/2 + N + S(K² + N), and

    ∂²Q_T(θ)/∂θ_i∂θ_j = −(∂²g(θ)/∂θ_i∂θ_j)' W_T (γ̂_T − g(θ)) + (∂g(θ)/∂θ_i)' W_T (∂g(θ)/∂θ_j),
    ∂²Q(θ)/∂θ_i∂θ_j = −(∂²g(θ)/∂θ_i∂θ_j)' W (γ_0 − g(θ)) + (∂g(θ)/∂θ_i)' W (∂g(θ)/∂θ_j),

i, j = 1, ..., (N−K)K + K(K+1)/2 + N + S(K² + N).

We need the following lemma first.

Lemma 2. Suppose that plim_{T→∞} θ̂_T = θ_0. Then

    plim_{T→∞} sup_{θ∈K(θ_0)} ‖∂²Q_T(θ)/∂θ∂θ' − ∂²Q(θ)/∂θ∂θ'‖ = 0,

where K(θ_0) is a compact neighborhood of θ_0.

Proof. The proof is similar to that of Lemma 1.

Given this, the proof of asymptotic normality of θ̂_T is standard.

Theorem 6. Suppose that

1. plim_{T→∞} θ̂_T = θ_0,
2. √(T−S) (γ̂_T − γ_0) →d N(0, Σ), where ‖Σ‖ < ∞.

Then

    √(T−S) (θ̂_T − θ_0) →d N(0, V),

where

    V := (G'WG)^{−1} G'WΣWG (G'WG)^{−1},    G := ∂g(θ_0)/∂θ'.

Proof. By the mean value theorem, ∀T ≥ S+1, there exists θ̄_T ∈ [θ̂_T, θ_0] such that

    ∂Q_T(θ̂_T)/∂θ = ∂Q_T(θ_0)/∂θ + (∂²Q_T(θ̄_T)/∂θ∂θ') (θ̂_T − θ_0).

By the first-order condition, ∀T ≥ S+1,

    ∂Q_T(θ_0)/∂θ + (∂²Q_T(θ̄_T)/∂θ∂θ') (θ̂_T − θ_0) = 0,

or

    √(T−S) (θ̂_T − θ_0) = −(∂²Q_T(θ̄_T)/∂θ∂θ')^{−1} √(T−S) ∂Q_T(θ_0)/∂θ
        = (∂²Q_T(θ̄_T)/∂θ∂θ')^{−1} G' W_T √(T−S) (γ̂_T − γ_0)
        = (G'WG)^{−1} G'W √(T−S) (γ̂_T − γ_0)
          + [(∂²Q_T(θ̄_T)/∂θ∂θ')^{−1} G' W_T − (G'WG)^{−1} G'W] O_p(1).

By asymptotic equivalence, it suffices to show that the second term is o_p(1). We can write ∀T ≥ S+1,

    ∂²Q_T(θ̄_T)/∂θ∂θ' − G'WG = (∂²Q_T(θ̄_T)/∂θ∂θ' − ∂²Q(θ̄_T)/∂θ∂θ') + (∂²Q(θ̄_T)/∂θ∂θ' − G'WG).

Since θ̄_T is consistent for θ_0, the first term is o_p(1) by the previous lemma, and the second term is o_p(1) by Slutsky's theorem.

Note that if W = Σ^{−1}, then V = (G'Σ^{−1}G)^{−1}.

4.3 Factor Scores

Consider estimation of the realization of {f_t}_{t=1}^T. Stacking (1) for t = 1, ..., T, and taking B and μ_x as given, we have a seemingly unrelated regressions (SUR) model such that

    x_1 − μ_x = B f_1 + u_1,
    ...
    x_T − μ_x = B f_T + u_T.

Let

    x := (x_1', ..., x_T')',    f := (f_1', ..., f_T')',    u := (u_1', ..., u_T')',

and let Γ_uu be the NT × NT block Toeplitz matrix whose (i, j) block is Γ_uu(i − j), where Γ_uu(−s) := Γ_uu(s)'. Let l_T := (1, ..., 1)'. Then we have

    x − l_T ⊗ μ_x = (I_T ⊗ B) f + u,

where E(u) = 0 and var(u) = Γ_uu. The infeasible GLS estimator of f is

    f̂_GLS,T = [(I_T ⊗ B)' Γ_uu^{−1} (I_T ⊗ B)]^{−1} (I_T ⊗ B)' Γ_uu^{−1} (x − l_T ⊗ μ_x).

By the Gauss-Markov theorem, f̂_GLS,T is the BLUE of f. To make the GLS estimator feasible, we need a consistent estimator of the whole Γ_uu, which requires a parametric model for the dynamics of {u_t}_{t=1}^∞. Alternatively, we may give up system estimation and apply a feasible single-equation GLS estimator, i.e.,

    f̂_T,t = (B̂_T' Γ̂_uu,T(0)^{−1} B̂_T)^{−1} B̂_T' Γ̂_uu,T(0)^{−1} (x_t − x̄_T),    t = 1, ..., T,    (10)

where B̂_T and Γ̂_uu,T(0) are consistent estimators of B and Γ_uu(0) respectively. Although estimation errors in B̂_T and Γ̂_uu,T(0) cause a measurement-error bias in f̂_T,t, it disappears as T → ∞. Note that if {u_t}_{t=1}^∞ is serially uncorrelated, then Γ_uu is block-diagonal, so the system GLS estimator and the single-equation GLS estimator are equivalent.

5 New Composite Index

5.1 Data

5.1.1 Business Cycle Indicators

We analyze the four BCIs in Table 1 that currently make up the US coincident CI of business cycles. Our data are from CITIBASE. The sample period is 1959:1-1998:12 (480 observations). We take the first difference of the log of each series and

Table 1: Components of the US Coincident Composite Index

BCI  Description
EMP  Employees on nonagricultural payrolls (thousands, SA)
INC  Personal income less transfer payments (billions of chained $, SA, AR)
IIP  Index of industrial production (1992 = 100, SA)
SLS  Manufacturing and trade sales (millions of chained $, SA)

Note: SA means seasonally adjusted, and AR means annual rate.

Table 2: Descriptive Statistics of the Business Cycle Indicators

BCI  Mean  SD  Min  Max
EMP
INC
IIP
SLS

multiply it by 100, which is approximately equal to the monthly percentage growth rate of each series. Tables 2 and 3 summarize some descriptive statistics of the transformed series. We see that they have significantly different means and standard deviations (SDs): EMP has a lower mean than the others, and EMP and INC are much smoother than the other two (Table 2). We eliminate these differences by standardizing each series so that the sample mean is 0 and the sample SD is 1. We also see that all variables are positively correlated. In particular, EMP and IIP have the highest correlation, while INC and SLS have the lowest correlation (Table 3).

Figure 1 shows the sample autocorrelation functions of the BCIs. We see that they have different autocorrelation structures: EMP has persistent autocorrelation,

Table 3: Sample Correlation Coefficients of the Business Cycle Indicators

      EMP   INC   IIP   SLS
EMP   1.00
INC
IIP
SLS

Figure 1: Sample Autocorrelation Functions of the Business Cycle Indicators (panels: EMP, INC, IIP, SLS, plotted against the lag)

while SLS has almost no serial correlation.

5.1.2 Composite Indices

The leading, coincident, and lagging CIs of business cycles are summary statistics of the selected leading, coincident, and lagging BCIs respectively. In the US, the Conference Board calculates the coincident CI in the following five steps:

1. Calculate the monthly symmetric growth rate series of the BCIs.
2. Exclude outliers and normalize each symmetric growth rate series so that the sample standard deviation is 1.
3. Take the simple cross-section average of the normalized symmetric growth rate series. This is the monthly symmetric growth rate series of the CI.
4. Calculate the level series from the symmetric growth rate series.
5. Rebase the level series to average 100 in the base year.

See the December 1996 issue of Business Cycle Indicators for more details.
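The five steps can be sketched in a few lines. This is our simplified illustration: Step 2's outlier exclusion is omitted, and we assume the standard symmetric growth rate r_t = 200(X_t − X_{t−1})/(X_t + X_{t−1}) with level recovery I_t = I_{t−1}(200 + r_t)/(200 − r_t):

```python
import numpy as np

def composite_index(levels, base_slice):
    """Simplified sketch of the five steps (outlier exclusion omitted).
    levels: (T, N) array of positive component BCI levels."""
    # 1. monthly symmetric growth rates
    r = 200.0 * (levels[1:] - levels[:-1]) / (levels[1:] + levels[:-1])
    # 2. normalize each series to unit sample standard deviation
    r = r / r.std(axis=0, ddof=0)
    # 3. simple cross-section average
    ci_growth = r.mean(axis=1)
    # 4. recover the level series from the symmetric growth rates
    ci = np.empty(len(ci_growth) + 1)
    ci[0] = 100.0
    for t, g in enumerate(ci_growth):
        ci[t + 1] = ci[t] * (200.0 + g) / (200.0 - g)
    # 5. rebase so that the index averages 100 over the base period
    return 100.0 * ci / ci[base_slice].mean()

# toy example: 120 months of four simulated positive level series
rng = np.random.default_rng(3)
growth = 0.003 + 0.01 * rng.standard_normal((120, 4))
levels = 100.0 * np.exp(np.cumsum(growth, axis=0))
ci = composite_index(levels, base_slice=slice(0, 12))
```

Replacing the simple average in Step 3 by a weighted average is exactly the modification the paper studies next.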

Table 4: Principal Component Analysis of the Business Cycle Indicators

                Principal Component
                1st   2nd   3rd   4th
Eigenvalue
Proportion
Eigenvector
  EMP
  INC
  IIP
  SLS

Note that the average in Step 3 could be a weighted average. For example, if we apply PCA, then the first principal component is a weighted average, where the weight vector is proportional to the normalized eigenvector associated with the largest eigenvalue of the sample correlation coefficient matrix. Table 4 is the result of PCA of the US coincident BCIs (in terms of the standardized first differences of their logs). The weight vector for the first principal component, which accounts for 63% of the total variation, is (0.27, 0.24, 0.27, 0.22). The weights on EMP and IIP are larger than those on INC and SLS.

PCA is still a descriptive method that reduces the dimension of a multivariate sample without assuming a statistical model. We apply FA because it is a statistical method that estimates the realization of the latent common factors underlying the observable variables.

5.2 Estimation Results

5.2.1 Parameters

We apply MD estimators to estimate a one-factor model for the standardized first differences of the logs of the US coincident BCIs. Let x_{1,t}, ..., x_{4,t} be EMP, INC, IIP, and SLS respectively. First, we have to set S, the highest order of the autocovariances included. Unfortunately, we do not have a criterion for selecting an optimal S given T, so we simply try S = 0, 1. Second, we have to choose W_T, a weighting matrix for the MD

Table 5: Results of Minimum Distance Estimation of the Factor Model

                    EMD                OMD
Parameter      S = 0    S = 1     S = 0    S = 1
β_2            (0.07)   (0.08)    (0.08)   (0.12)
β_3            (0.08)   (0.08)    (0.08)   (0.12)
β_4            (0.06)   (0.05)    (0.07)   (0.10)
γ_ff(0)        (0.14)   (0.14)    (0.13)   (0.07)
γ_uu,1(0)      (0.05)   (0.06)    (0.05)   (0.04)
γ_uu,2(0)      (0.08)   (0.07)    (0.08)   (0.05)
γ_uu,3(0)      (0.06)   (0.05)    (0.06)   (0.05)
γ_uu,4(0)      (0.06)   (0.07)    (0.06)   (0.05)

Note: Numbers in parentheses are asymptotic SEs.

estimator. Although OMD estimators are asymptotically more efficient than the EMD estimator, they have a small-sample bias in the analysis of covariance structures; see Altonji and Segal (1996) and Clark (1996). So we try both EMD and OMD estimators. To estimate Σ, we apply the HAC covariance matrix estimator proposed by Newey and West (1994). (To implement the Newey-West estimator, we use a GAUSS code written by Ka-fu Wong, which is available from the GAUSS Source Code Archive at American University.)

Table 5 summarizes the estimation results. Although the estimates are a little sensitive to the choice of S and W_T, EMP and IIP seem to have larger factor loadings and smaller specific-factor variances than INC and SLS. In other words, EMP and IIP are more informative about the common business cycle factor than INC and SLS. Hence, it is more efficient to weight them accordingly to construct a CI.
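The Newey-West estimator used above amounts to a Bartlett-kernel HAC estimator. Here is a minimal sketch with a fixed truncation lag; the data-dependent bandwidth selection of Newey and West (1994), and Ka-fu Wong's GAUSS code, are not reproduced:

```python
import numpy as np

def newey_west(z, L):
    """Bartlett-kernel HAC estimate of the long-run covariance matrix of
    the (T, m) array z, with fixed truncation lag L."""
    T, m = z.shape
    d = z - z.mean(axis=0)
    sigma = d.T @ d / T                            # Gamma_zz(0)
    for s in range(1, L + 1):
        w = 1.0 - s / (L + 1.0)                    # Bartlett weight
        gs = d[s:].T @ d[:-s] / T                  # Gamma_zz(s)
        sigma += w * (gs + gs.T)
    return sigma

# toy example: univariate MA(1) z_t = e_t + 0.5 e_{t-1}, whose long-run
# variance is gamma(0) + 2 gamma(1) = 1.25 + 2 * 0.5 = 2.25
rng = np.random.default_rng(4)
e = rng.standard_normal(2001)
z = (e[1:] + 0.5 * e[:-1])[:, None]
S_hat = newey_west(z, L=8)
```

The Bartlett weights guarantee a positive semidefinite estimate, which matters here since Σ̂_T is inverted to form the OMD weighting matrix.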

Table 6: Weights for the New Composite Index

             EMD                OMD
BCI     S = 0    S = 1     S = 0    S = 1     PCA    CI
EMP     (0.04)   (0.04)    (0.03)   (0.04)
INC     (0.03)   (0.02)    (0.03)   (0.02)
IIP     (0.04)   (0.04)    (0.05)   (0.04)
SLS     (0.02)   (0.02)    (0.02)   (0.02)

Note: Numbers in parentheses are asymptotic SEs.

5.2.2 Weights for the New Composite Index

In this application, (1) becomes

    x_t = β f_t + u_t.    (11)

Given the model parameters, the single-equation GLS estimator of f_t is

    f̂_t = (β' Γ_uu(0)^{−1} β)^{−1} β' Γ_uu(0)^{−1} x_t ∝ Σ_{i=1}^4 (β_i / γ_uu,i(0)) x_{i,t},

i.e., f̂_t is essentially a weighted average of x_{1,t}, ..., x_{4,t}. Since the weight on x_{i,t} is proportional to β_i / γ_uu,i(0), more informative variables receive larger weights. Table 6 summarizes estimates of the weight vector. The weights are not equal, but larger on EMP and IIP than on INC and SLS. We call the CI associated with one of these weight vectors the new CI.

5.3 Comparison with Other Indices

5.3.1 Stock-Watson Experimental Coincident Index

The Stock-Watson XCI assumes that ∀t ∈ Z,

    φ_f(L) f_t = v_t,
    Φ_u(L) u_t = w_t,
    (v_t, w_t')' ~ NID(0, diag(σ_v², Σ_ww)),

where L is the lag operator, φ_f(·) is a pth-order polynomial on R, and Φ_u(·) is a qth-order polynomial on R^{N×N}. For identification, we assume that (i) β = (1, β_2')' and (ii) Φ_u(·) and Σ_ww are diagonal. To obtain the maximum likelihood (ML) estimator of the model parameters, we rewrite the model in state-space form and apply the Kalman filter (KF) to evaluate the likelihood function; see the Appendix. The XCI, obtained as a by-product of the KF, is the conditional expectation of f_t given (y_1, ..., y_t) associated with the ML estimate of the model parameters.

To determine p and q, we use a model selection criterion such as Akaike's information criterion (AIC) or Schwarz's Bayesian information criterion (BIC). In our case, AIC and BIC are defined as

    AIC := −(2/(T−q)) { ln L(θ̂_ML) − [(N−K)K + K²p + Nq] },
    BIC := −(2/(T−q)) { ln L(θ̂_ML) − (ln(T−q)/2) [(N−K)K + K²p + Nq] },

where N = 4, K = 1, and T = 479. As shown in Table 7, AIC selects (p, q) = (1, 2) while BIC selects (p, q) = (1, 0). Although AIC selects too large a model with positive probability as T → ∞, selecting too large a model may be less harmful than selecting too small a model. The likelihood ratio (LR) test statistic for testing H_0: (p, q) = (1, 0) against H_1: (p, q) = (1, 2) is 33.90. The asymptotic distribution of the LR test statistic under H_0 is χ²(8). Since the test strongly rejects H_0 in favor of H_1, we select (p, q) = (1, 2).

Table 8 presents the ML estimates of the model parameters for (p, q) = (1, 2). Not surprisingly, the ML estimate of the factor loading vector is close to the MD estimates in Table 5. Table 9 shows the implicit weights on the coincident BCIs for the XCI associated with the steady-state KF. The weights on the current coincident BCIs for the XCI are close to those for the new CI in Table 6. The difference is that the XCI puts

Table 7: Lag-Order Selection for the Factor Model

(p, q)    ln L(θ̂_ML)    AIC    BIC
(0, 0)
(0, 1)
(0, 2)
(1, 0)
(1, 1)
(1, 2)
(1, 3)
(2, 0)
(2, 1)
(2, 2)
(2, 3)
(3, 0)
(3, 1)
(3, 2)
(3, 3)

Table 8: Results of Maximum Likelihood Estimation of the Factor Model

Parameter    EMP      INC      IIP      SLS
β_i          1        (0.07)   (0.08)   (0.06)
φ_f,1        0.56 (0.05)
σ_v²         0.24 (0.04)
φ_u,i,1      (0.05)   (0.05)   (0.07)   (0.05)
φ_u,i,2      (0.05)   (0.05)   (0.06)   (0.05)
σ_w,i²       (0.03)   (0.04)   (0.03)   (0.04)

Note: Numbers in parentheses are asymptotic SEs.

Table 9: Weights for the Stock-Watson Index

Lag    EMP    INC    IIP    SLS
0
1
2
3
4

Table 10: Sample Correlation Coefficients of the Indices

          New CI    CI     XCI
New CI    1.000
CI
XCI

nonzero weights on the lagged coincident BCIs.

5.3.2 Comparison

Table 10 shows the sample correlation coefficient matrix of the three indices. Since the new CI has a higher correlation with the XCI than the traditional CI does, the new CI improves the traditional CI towards the XCI.

A difficulty in comparing alternative indices is the absence of a criterion for a good index. If we agree that real GDP is the most important coincident BCI, then a possible criterion is the correlation with real GDP in quarterly growth rates. Table 11 shows the sample correlations of the three indices with real GDP. The XCI has the highest correlation with real GDP, while the new CI has the lowest. Moreover, the new CI hardly improves the traditional CI towards the XCI in quarterly series.

Table 11: Sample Correlation Coefficients of the Indices with Real GDP

            New CI    CI     XCI    Real GDP
New CI      1.000
CI
XCI
Real GDP

This criterion implicitly assumes that a coincident index is an estimate of unobservable monthly real GDP. For this purpose, however, we can obtain a better index by combining the monthly coincident BCIs and quarterly real GDP using state-space models. This seems to be an interesting topic for future research.

6 Discussion

We applied MD-FA to the US coincident BCIs and obtained a new CI of business cycles. The new CI, as well as the XCI, puts larger weights on the more informative BCIs (EMP and IIP) and smaller weights on the others (INC and SLS). So it is more efficient than the traditional CI, which simply puts equal weights.

One drawback of the new CI is that it is not unique, for two reasons. First, we do not know how to choose S, the highest order of the autocovariance matrices included, in finite samples. Second, we do not know which MD estimator to use in finite samples. OMD estimators are asymptotically more efficient than the EMD estimator, but feasible OMD estimators are biased in small samples. Recently, Kitamura and Stutzer (1997) proposed an estimator that avoids the small-sample bias of feasible optimal GMM estimators. Their estimator seems to be applicable to our problem.

7 Acknowledgments

This paper is based on my PhD dissertation, Minimum Distance Factor Analysis of Time Series: Theory and Application, University of Pennsylvania, 1999. I am grateful to my advisor Roberto Mariano for his guidance and encouragement. I also thank Frank Diebold, Frank Schorfheide, Robert Stein, and Yoshiyuki Takeuchi for their valuable comments and suggestions. I am solely responsible for any remaining errors.

A Stock-Watson Experimental Coincident Index

A.1 State-Space Representation

Premultiplying both sides of (11) by Φ_u(L), ∀t ∈ Z,

    y_t = Φ_u(L) B f_t + w_t,

where y_t := Φ_u(L) x_t. When p ≥ q, a state-space model for {y_t}_{t=1}^∞ is ∀t ∈ Z,

    (f_t', ..., f_{t−p}')' = [ Φ_f,1 ... Φ_f,p  O_{K×K} ; I_{pK}  O_{pK×K} ] (f_{t−1}', ..., f_{t−(p+1)}')' + [ I_K ; O_{pK×K} ] v_t,
    y_t = [ B, −Φ_u,1 B, ..., −Φ_u,q B, O_{N×(p−q)K} ] (f_t', ..., f_{t−p}')' + w_t,

where o_n is the n×1 zero vector and O_{m×n} is the m×n zero matrix. When p < q, a state-space model for {y_t}_{t=1}^∞ is ∀t ∈ Z,

    (f_t', ..., f_{t−q}')' = [ Φ_f,1 ... Φ_f,p  O_{K×(q−p+1)K} ; I_{qK}  O_{qK×K} ] (f_{t−1}', ..., f_{t−(q+1)}')' + [ I_K ; O_{qK×K} ] v_t,
    y_t = [ B, −Φ_u,1 B, ..., −Φ_u,q B ] (f_t', ..., f_{t−q}')' + w_t.

In either case, we can write ∀t ∈ Z,

    s_t = F s_{t−1} + G v_t,    (12)
    y_t = H s_t + w_t,    (13)

where s_t, F, G, and H are defined appropriately.

A.2 Likelihood Function

Let Θ be the parameter space. Let f(·; θ), θ ∈ Θ, be a pdf of (y_1, ..., y_T). Then

    f(y_1, ..., y_T; θ) = Π_{t=1}^T f_{t|t−1}(y_t | y_1, ..., y_{t−1}; θ),

where ∀t ≥ 2, f_{t|t−1}(· | y_1, ..., y_{t−1}; θ) is a conditional pdf of y_t given (y_1, ..., y_{t−1}) (for t = 1, it is a marginal pdf). Let F_0 := {∅, Ω} and ∀t ≥ 1, F_t := σ(y_1, ..., y_t).

For all $t \ge 1$, let
$$ \mu_{t|t-1}(\theta) := E_\theta(y_t \mid \mathcal{F}_{t-1}), \qquad \Sigma_{t|t-1}(\theta) := \operatorname{var}_\theta(y_t \mid \mathcal{F}_{t-1}). $$
Then, for all $t \ge 1$,
$$ y_t \mid y_1, \dots, y_{t-1} \sim N\big(\mu_{t|t-1}(\theta), \Sigma_{t|t-1}(\theta)\big), $$
so that
$$ f_{t|t-1}(y_t \mid y_1, \dots, y_{t-1}; \theta) = (2\pi)^{-N/2} \det\big(\Sigma_{t|t-1}(\theta)\big)^{-1/2} \exp\Big(-\tfrac{1}{2}\big(y_t - \mu_{t|t-1}(\theta)\big)' \Sigma_{t|t-1}(\theta)^{-1} \big(y_t - \mu_{t|t-1}(\theta)\big)\Big). $$
The log-likelihood function for $\theta$ given $(y_1, \dots, y_T)$ is
$$ \ln L(\theta; y_1, \dots, y_T) = -\frac{NT}{2}\ln 2\pi - \frac{1}{2}\sum_{t=1}^{T} \ln\det\big(\Sigma_{t|t-1}(\theta)\big) - \frac{1}{2}\sum_{t=1}^{T} \big(y_t - \mu_{t|t-1}(\theta)\big)' \Sigma_{t|t-1}(\theta)^{-1} \big(y_t - \mu_{t|t-1}(\theta)\big). $$
To evaluate the log-likelihood function, we must evaluate $\{\mu_{t|t-1}(\theta), \Sigma_{t|t-1}(\theta)\}_{t=1}^{T}$. For all $t \ge 1$ and $s \ge 0$, let
$$ \hat{s}_{t|s} := E_\theta(s_t \mid \mathcal{F}_s), \qquad P_{t|s} := \operatorname{var}_\theta(s_t \mid \mathcal{F}_s). $$
From (3), for all $t \ge 1$,
$$ \mu_{t|t-1}(\theta) = H\hat{s}_{t|t-1}, \qquad \Sigma_{t|t-1}(\theta) = HP_{t|t-1}H' + \Sigma_{ww}. $$
Given $\theta$, we can apply the KF to evaluate $\{\hat{s}_{t|t-1}, P_{t|t-1}\}_{t=1}^{T}$.
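The prediction-error decomposition above translates directly into code. The sketch below (an illustration, not the paper's implementation) computes the Gaussian log-likelihood given one-step-ahead means and covariances that a Kalman filter would supply; the toy check uses i.i.d. standard-normal values so the answer is known in closed form.

```python
import numpy as np

def gaussian_loglik(y, mu, Sig):
    """ln L = -NT/2 ln(2*pi) - 1/2 sum ln det(Sig_t) - 1/2 sum e_t' Sig_t^{-1} e_t."""
    T, N = y.shape
    ll = -0.5 * N * T * np.log(2 * np.pi)
    for t in range(T):
        e = y[t] - mu[t]                       # one-step-ahead prediction error
        sign, logdet = np.linalg.slogdet(Sig[t])
        ll += -0.5 * logdet - 0.5 * e @ np.linalg.solve(Sig[t], e)
    return ll

# Toy check: mu_t = 0, Sig_t = I, y_t = 0 gives ll = -(NT/2) ln(2*pi).
y = np.zeros((5, 2))
mu = np.zeros((5, 2))
Sig = np.array([np.eye(2)] * 5)
print(gaussian_loglik(y, mu, Sig))  # -5 * ln(2*pi), approx -9.189
```

Using `slogdet` and `solve` rather than `det` and `inv` keeps the evaluation numerically stable when $\Sigma_{t|t-1}$ is nearly singular.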

A.3 Kalman Filter

A.3.1 Initial State

First, we must specify $\hat{s}_{1|0}$ and $P_{1|0}$. To obtain the exact ML estimator, we set
$$ \hat{s}_{1|0} = \mu_s, \qquad P_{1|0} = \Gamma_{ss}(0), $$
where $\mu_s := E(s_t)$ and $\Gamma_{ss}(0) := \operatorname{var}(s_t)$. Since $\{s_t\}_{t=1}^{\infty}$ is stationary, taking expectations on both sides of (2), $\mu_s = F\mu_s$. Assuming that $I_{(\max\{p,q\}+1)K} - F$ is nonsingular, $\mu_s = 0$. From (2), we also get
$$ \Gamma_{ss}(0) = F\Gamma_{ss}(1)' + G\Sigma_{vv}G', \qquad \Gamma_{ss}(1) = F\Gamma_{ss}(0). $$
Eliminating $\Gamma_{ss}(1)$,
$$ \Gamma_{ss}(0) = F\Gamma_{ss}(0)F' + G\Sigma_{vv}G', $$
or
$$ \operatorname{vec}(\Gamma_{ss}(0)) = \operatorname{vec}(F\Gamma_{ss}(0)F') + \operatorname{vec}(G\Sigma_{vv}G') = (F \otimes F)\operatorname{vec}(\Gamma_{ss}(0)) + \operatorname{vec}(G\Sigma_{vv}G'), $$
so that
$$ \operatorname{vec}(\Gamma_{ss}(0)) = \big(I_{[(\max\{p,q\}+1)K]^2} - F \otimes F\big)^{-1}\operatorname{vec}(G\Sigma_{vv}G'). $$
In practice, we can simply set
$$ \hat{s}_{0|0} = 0, \qquad P_{0|0} = O, $$
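The vec formula for the unconditional state covariance is a small linear solve. The sketch below illustrates it for a scalar AR(1) state (toy values of $F$, $G$, $\Sigma_{vv}$, not from the paper), where the answer $1/(1 - 0.5^2) = 4/3$ can be checked by hand; `order='F'` reproduces the column-stacking vec operator.

```python
import numpy as np

# Toy system: s_t = 0.5 s_{t-1} + v_t, var(v_t) = 1.
F = np.array([[0.5]])
G = np.array([[1.0]])
Sigma_vv = np.array([[1.0]])

n = F.shape[0]
Q = G @ Sigma_vv @ G.T

# vec(Gamma(0)) = (I - F kron F)^{-1} vec(G Sigma_vv G')
vec_gamma = np.linalg.solve(np.eye(n * n) - np.kron(F, F),
                            Q.reshape(n * n, order="F"))
Gamma0 = vec_gamma.reshape((n, n), order="F")
print(Gamma0)  # [[1.3333...]] = 1 / (1 - 0.5**2)
```

For large state dimensions the $(n^2 \times n^2)$ solve becomes expensive, and a discrete Lyapunov solver (e.g. SciPy's `solve_discrete_lyapunov`) is the usual alternative.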

which implies that
$$ \hat{s}_{1|0} = 0, \qquad P_{1|0} = G\Sigma_{vv}G'. $$
The resulting estimator is asymptotically equivalent to the ML estimator.

A.3.2 Updating

Since $(s_0, v_1)$ is jointly normal, $s_1$ is normal. Similarly, $s_2, s_3, \dots$ are also normal. So, for all $t \ge 1$,
$$ s_t \mid y_1, \dots, y_{t-1} \sim N\big(\hat{s}_{t|t-1}, P_{t|t-1}\big). $$
We have, for all $t \ge 1$,
$$ y_t - \hat{y}_{t|t-1} = H\big(s_t - \hat{s}_{t|t-1}\big) + w_t, $$
where $\hat{y}_{t|t-1} := E(y_t \mid \mathcal{F}_{t-1})$. So, for all $t \ge 1$,
$$ \begin{bmatrix} s_t \\ y_t \end{bmatrix} \,\Big|\, y_1, \dots, y_{t-1} \sim N\left(\begin{bmatrix} \hat{s}_{t|t-1} \\ \hat{y}_{t|t-1} \end{bmatrix}, \begin{bmatrix} P_{t|t-1} & P_{t|t-1}H' \\ HP_{t|t-1} & HP_{t|t-1}H' + \Sigma_{ww} \end{bmatrix}\right). $$
Let, for all $t \ge 1$,
$$ B_t := P_{t|t-1}H'\big(HP_{t|t-1}H' + \Sigma_{ww}\big)^{-1}. $$
The updating equations for $\hat{s}_{t|t}$ and $P_{t|t}$ given $\hat{s}_{t|t-1}$ and $P_{t|t-1}$ are, for all $t \ge 1$,
$$ \hat{s}_{t|t} = \hat{s}_{t|t-1} + B_t\big(y_t - H\hat{s}_{t|t-1}\big), \qquad P_{t|t} = P_{t|t-1} - B_tHP_{t|t-1}. $$

A.3.3 Prediction

From (2), the prediction equations for $\hat{s}_{t|t-1}$ and $P_{t|t-1}$ given $\hat{s}_{t-1|t-1}$ and $P_{t-1|t-1}$ are, for all $t \ge 1$,
$$ \hat{s}_{t|t-1} = F\hat{s}_{t-1|t-1}, \qquad P_{t|t-1} = FP_{t-1|t-1}F' + G\Sigma_{vv}G'. $$
Combining the updating and prediction equations, we get $\{\hat{s}_{t|t-1}, P_{t|t-1}\}_{t=1}^{T}$.
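One full updating-plus-prediction pass can be written in a few lines. The following is a hedged sketch under toy scalar values of $F$, $G$, $H$, $\Sigma_{vv}$, and $\Sigma_{ww}$ (none of which come from the paper), simply to show the recursion's shape.

```python
import numpy as np

def kalman_step(s_pred, P_pred, y, F, G, H, Svv, Sww):
    """One updating + prediction pass of the Kalman filter."""
    # Updating: B_t = P H' (H P H' + Sigma_ww)^{-1}
    S = H @ P_pred @ H.T + Sww
    Bt = P_pred @ H.T @ np.linalg.inv(S)
    s_filt = s_pred + Bt @ (y - H @ s_pred)
    P_filt = P_pred - Bt @ H @ P_pred
    # Prediction: s_{t+1|t} = F s_{t|t},  P_{t+1|t} = F P_{t|t} F' + G Svv G'
    s_next = F @ s_filt
    P_next = F @ P_filt @ F.T + G @ Svv @ G.T
    return s_next, P_next

# Toy scalar system and three made-up observations.
F = np.array([[0.9]]); G = np.array([[1.0]]); H = np.array([[1.0]])
Svv = np.array([[0.5]]); Sww = np.array([[1.0]])
s, P = np.zeros(1), np.eye(1)
for obs in np.array([[0.3], [-0.1], [0.4]]):
    s, P = kalman_step(s, P, obs, F, G, H, Svv, Sww)
print(s.shape, P.shape)  # (1,) (1, 1)
```

Iterating this pair over $t = 1, \dots, T$ produces exactly the sequence $\{\hat{s}_{t|t-1}, P_{t|t-1}\}$ needed by the likelihood evaluation in A.2.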

References

Altonji, J. G. & Segal, L. M. (1996), 'Small-sample bias in GMM estimation of covariance structures', Journal of Business and Economic Statistics 14.

Amemiya, T. (1985), Advanced Econometrics, Harvard University Press.

Chauvet, M. (1998), 'An econometric characterization of business cycle dynamics with factor structure and regime switching', International Economic Review 39.

Clark, T. E. (1996), 'Small-sample properties of estimators of nonlinear models of covariance structure', Journal of Business and Economic Statistics 14.

Davidson, J. (1994), Stochastic Limit Theory, Oxford University Press.

Diebold, F. X. & Rudebusch, G. D. (1996), 'Measuring business cycles: A modern perspective', Review of Economics and Statistics 78.

Durrett, R. (1996), Probability: Theory and Examples, second edn, Duxbury Press.

Hamilton, J. D. (1989), 'A new approach to the economic analysis of nonstationary time series and the business cycle', Econometrica 57.

Kim, C.-J. & Nelson, C. R. (1998), 'Business cycle turning points, a new coincident index, and tests of duration dependence based on a dynamic factor model with regime switching', Review of Economics and Statistics 80.

Kim, M.-J. & Yoo, J.-S. (1995), 'New index of coincident indicators: A multivariate Markov switching factor model approach', Journal of Monetary Economics 36.

Kitamura, Y. & Stutzer, M. (1997), 'An information-theoretic alternative to generalized method of moments estimation', Econometrica 65.

Newey, W. K. & West, K. D. (1994), 'Automatic lag selection in covariance matrix estimation', Review of Economic Studies 61.

Stock, J. H. & Watson, M. W. (1989), 'New indexes of coincident and leading economic indicators', NBER Macroeconomics Annual 4.

Stock, J. H. & Watson, M. W. (1991), 'A probability model of the coincident economic indicators', in K. Lahiri & G. H. Moore, eds, Leading Economic Indicators, Cambridge University Press, chapter 4.


More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Time Series Analysis

Time Series Analysis Time Series Analysis Forecasting with ARIMA models Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos (UC3M-UPM)

More information

The Characteristic Polynomial

The Characteristic Polynomial Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

More information

Testing for Granger causality between stock prices and economic growth

Testing for Granger causality between stock prices and economic growth MPRA Munich Personal RePEc Archive Testing for Granger causality between stock prices and economic growth Pasquale Foresti 2006 Online at http://mpra.ub.uni-muenchen.de/2962/ MPRA Paper No. 2962, posted

More information

Chapter 4: Statistical Hypothesis Testing

Chapter 4: Statistical Hypothesis Testing Chapter 4: Statistical Hypothesis Testing Christophe Hurlin November 20, 2015 Christophe Hurlin () Advanced Econometrics - Master ESA November 20, 2015 1 / 225 Section 1 Introduction Christophe Hurlin

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 7: Conditionally Positive Definite Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter

More information

Topics in Time Series Analysis

Topics in Time Series Analysis Topics in Time Series Analysis Massimiliano Marcellino EUI and Bocconi University This course reviews some recent developments in the analysis of time series data in economics, with a special emphasis

More information

Chapter 5. Analysis of Multiple Time Series. 5.1 Vector Autoregressions

Chapter 5. Analysis of Multiple Time Series. 5.1 Vector Autoregressions Chapter 5 Analysis of Multiple Time Series Note: The primary references for these notes are chapters 5 and 6 in Enders (2004). An alternative, but more technical treatment can be found in chapters 10-11

More information

Adaptive Demand-Forecasting Approach based on Principal Components Time-series an application of data-mining technique to detection of market movement

Adaptive Demand-Forecasting Approach based on Principal Components Time-series an application of data-mining technique to detection of market movement Adaptive Demand-Forecasting Approach based on Principal Components Time-series an application of data-mining technique to detection of market movement Toshio Sugihara Abstract In this study, an adaptive

More information

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d). 1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction

More information

Univariate Time Series Analysis; ARIMA Models

Univariate Time Series Analysis; ARIMA Models Econometrics 2 Fall 25 Univariate Time Series Analysis; ARIMA Models Heino Bohn Nielsen of4 Univariate Time Series Analysis We consider a single time series, y,y 2,..., y T. We want to construct simple

More information

11. Time series and dynamic linear models

11. Time series and dynamic linear models 11. Time series and dynamic linear models Objective To introduce the Bayesian approach to the modeling and forecasting of time series. Recommended reading West, M. and Harrison, J. (1997). models, (2 nd

More information