# Time Series for Macroeconomics and Finance


John H. Cochrane, Graduate School of Business, University of Chicago, 5807 S. Woodlawn, Chicago IL (773). Spring 1997; pictures added Jan. I thank Giorgio DeSantis for many useful comments on this manuscript. Copyright © John H. Cochrane 1997, 2005.

## Contents

1. Preface
2. What is a time series?
3. ARMA models
   - 3.1 White noise
   - 3.2 Basic ARMA models
   - 3.3 Lag operators and polynomials
     - 3.3.1 Manipulating ARMAs with lag operators
     - 3.3.2 AR(1) to MA(∞) by recursive substitution
     - 3.3.3 AR(1) to MA(∞) with lag operators
     - 3.3.4 AR(p) to MA(∞), MA(q) to AR(∞), factoring lag polynomials, and partial fractions
     - 3.3.5 Summary of allowed lag polynomial manipulations
   - 3.4 Multivariate ARMA models
   - 3.5 Problems and Tricks
4. The autocorrelation and autocovariance functions
   - 4.1 Definitions
   - 4.2 Autocovariance and autocorrelation of ARMA processes (summary)
   - 4.3 A fundamental representation
   - 4.4 Admissible autocorrelation functions
   - 4.5 Multivariate auto- and cross correlations
5. Prediction and Impulse-Response Functions
   - 5.1 Predicting ARMA models
   - 5.2 State space representation (ARMAs in vector AR(1) representation; forecasts from vector AR(1) representation; VARs in vector AR(1) representation)
   - 5.3 Impulse-response function (facts about impulse-responses)
6. Stationarity and Wold representation
   - 6.1 Definitions
   - 6.2 Conditions for stationary ARMAs
   - 6.3 Wold Decomposition theorem (what the Wold theorem does not say)
   - 6.4 The Wold MA(∞) as another fundamental representation
7. VARs: orthogonalization, variance decomposition, Granger causality
   - 7.1 Orthogonalizing VARs (ambiguity of impulse-response functions; orthogonal shocks; Sims orthogonalization, specifying C(0); Blanchard-Quah orthogonalization, restrictions on C(1))
   - 7.2 Variance decompositions
   - 7.3 VARs in state space notation
   - 7.4 Tricks and problems
   - 7.5 Granger Causality (basic idea; definition, autoregressive representation; moving average representation; univariate representations; effect on projections; summary; discussion; a warning: why "Granger causality" is not "causality"; contemporaneous correlation)
8. Spectral Representation
   - 8.1 Facts about complex numbers and trigonometry (definitions; addition, multiplication, and conjugation; trigonometric identities; frequency, period and phase; Fourier transforms; why complex numbers?)
   - 8.2 Spectral density (spectral densities of some processes; spectral density matrix, cross spectral density; spectral density of a sum)
   - 8.3 Filtering (spectrum of filtered series; multivariate filtering formula; spectral density of arbitrary MA(∞); filtering and OLS; a cosine example; cross spectral density of two filters, and an interpretation of spectral density; constructing filters; Sims approximation formula)
   - 8.4 Relation between Spectral, Wold, and Autocovariance representations
9. Spectral analysis in finite samples
   - 9.1 Finite Fourier transforms (definitions)
   - 9.2 Band spectrum regression (motivation; band spectrum procedure; Cramér or Spectral representation)
   - 9.4 Estimating spectral densities (Fourier transform sample covariances; sample spectral density; relation between transformed autocovariances and sample density; asymptotic distribution of sample spectral density; smoothed periodogram estimates; weighted covariance estimates; relation between weighted covariance and smoothed periodogram estimates; variance of filtered data estimates; spectral density implied by ARMA models; asymptotic distribution of spectral estimates)
10. Unit Roots
    - 10.1 Random Walks
    - 10.2 Motivations for unit roots (stochastic trends; permanence of shocks; statistical issues)
    - 10.3 Unit root and stationary processes (response to shocks; spectral density; autocorrelation; random walk components and stochastic trends; forecast error variances; summary)
    - 10.4 Summary of a(1) estimates and tests (near-observational equivalence of unit roots and stationary processes in finite samples; empirical work on unit roots/persistence)
11. Cointegration
    - 11.1 Definition
    - 11.2 Cointegrating regressions
    - 11.3 Representation of cointegrated system (definition of cointegration; multivariate Beveridge-Nelson decomposition; rank condition on A(1); spectral density at zero; common trends representation; impulse-response function)
    - 11.4 Useful representations for running cointegrated VARs (autoregressive representations; error correction representation; running VARs)
    - 11.5 An Example
    - 11.6 Cointegration with drifts and trends

## Chapter 1: Preface

These notes are intended as a text rather than as a reference. A text is what you read in order to learn something. A reference is something you look back on after you know the outlines of a subject in order to get difficult theorems exactly right.

The organization is quite different from most books, which really are intended as references. Most books first state a general theorem or apparatus, and then show how applications are special cases of a grand general structure. That's how we organize things that we already know, but that's not how we learn things. We learn things by getting familiar with a bunch of examples, and then seeing how they fit together in a more general framework. And the point is the examples: knowing how to do something.

Thus, for example, I start with linear ARMA models constructed from normal iid errors. Once familiar with these models, I introduce the concept of stationarity and the Wold theorem that shows how such models are in fact much more general. But that means that the discussion of ARMA processes is not as general as it is in most books, and many propositions are stated in much less general contexts than is possible.

I make no effort to be encyclopedic. One function of a text (rather than a reference) is to decide what an average reader (in this case, an average first-year graduate student in economics) really needs to know about a subject, and what can be safely left out. So, if you want to know everything about a subject, consult a reference, such as Hamilton's (1993) excellent book.

## Chapter 2: What is a time series?

Most data in macroeconomics and finance come in the form of time series: a set of repeated observations of the same variable, such as GNP or a stock return. We can write a time series as

$$\{x_1, x_2, \ldots, x_T\} \quad \text{or} \quad \{x_t\},\; t = 1, 2, \ldots, T.$$

We will treat $x_t$ as a random variable. In principle, there is nothing about time series that is arcane or different from the rest of econometrics. The only difference from standard econometrics is that the variables are subscripted $t$ rather than $i$. For example, if $y_t$ is generated by

$$y_t = x_t\beta + \epsilon_t, \quad E(\epsilon_t \mid x_t) = 0,$$

then OLS provides a consistent estimate of $\beta$, just as if the subscript was $i$ not $t$.

The word "time series" is used interchangeably to denote a sample $\{x_t\}$, such as GNP from 1947:1 to the present, and a probability model for that sample: a statement of the joint distribution of the random variables $\{x_t\}$. A possible probability model for the joint distribution of a time series $\{x_t\}$ is

$$x_t = \epsilon_t, \quad \epsilon_t \sim \text{i.i.d. } N(0, \sigma^2),$$

i.e., $x_t$ normal and independent over time. However, time series are typically not iid, which is what makes them interesting. For example, if GNP today is unusually high, GNP tomorrow is also likely to be unusually high.
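The claim above, that OLS remains consistent when the subscript is $t$ rather than $i$, can be illustrated by simulation. This is a minimal sketch, not from the text; the AR(1) regressor, $\beta = 1.5$, and the sample size are illustrative choices.

```python
import numpy as np

# A minimal simulation sketch (not from the text): with E(eps_t | x_t) = 0,
# OLS recovers beta from time-series data just as it would from
# cross-sectional data. beta, the AR(1) regressor, and T are illustrative.
rng = np.random.default_rng(0)
T, beta = 50_000, 1.5

# a serially correlated regressor: the t subscript changes nothing for OLS
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + rng.normal()

eps = rng.normal(size=T)            # E(eps_t | x_t) = 0
y = beta * x + eps

beta_hat = (x @ y) / (x @ x)        # OLS slope, no constant
print(round(beta_hat, 3))           # close to 1.5
```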

It would be nice to use a nonparametric approach: just use histograms to characterize the joint density of $\{\ldots, x_{t-1}, x_t, x_{t+1}, \ldots\}$. Unfortunately, we will not have enough data to follow this approach in macroeconomics for at least the next 2000 years or so. Hence, time-series analysis consists of interesting parametric models for the joint distribution of $\{x_t\}$. The models impose structure, which you must evaluate to see if it captures the features you think are present in the data. In turn, they reduce the estimation problem to the estimation of a few parameters of the time-series model.

The first set of models we study are linear ARMA models. As you will see, these allow a convenient and flexible way of studying time series, and of capturing the extent to which series can be forecast, i.e., variation over time in conditional means. However, they don't do much to help model variation in conditional variances. For that, we turn to ARCH models later on.

## Chapter 3: ARMA models

### 3.1 White noise

The building block for our time series models is the white noise process, which I'll denote $\epsilon_t$. In the least general case,

$$\epsilon_t \sim \text{i.i.d. } N(0, \sigma^2).$$

Notice three implications of this assumption:

1. $E(\epsilon_t) = E(\epsilon_t \mid \epsilon_{t-1}, \epsilon_{t-2}, \ldots) = E(\epsilon_t \mid \text{all information at } t-1) = 0$
2. $E(\epsilon_t \epsilon_{t-j}) = \text{cov}(\epsilon_t, \epsilon_{t-j}) = 0$
3. $\text{var}(\epsilon_t) = \text{var}(\epsilon_t \mid \epsilon_{t-1}, \epsilon_{t-2}, \ldots) = \text{var}(\epsilon_t \mid \text{all information at } t-1) = \sigma^2$

The first and second properties are the absence of any serial correlation or predictability. The third property is conditional homoskedasticity, or a constant conditional variance.

Later, we will generalize the building block process. For example, we may assume properties 2 and 3 without normality, in which case the $\epsilon_t$ need not be independent. We may also assume the first property only, in which case $\epsilon_t$ is a martingale difference sequence.
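A minimal numerical sketch of these three properties ($\sigma$ and the sample size are illustrative, not from the text): draw Gaussian white noise and check the sample mean, first autocovariance, and variance.

```python
import numpy as np

# Sketch: check the three white-noise properties on simulated draws.
# sigma and the sample size are illustrative choices.
rng = np.random.default_rng(1)
sigma = 2.0
eps = rng.normal(0.0, sigma, size=200_000)

mean = eps.mean()                        # property 1: ~ 0
autocov1 = np.mean(eps[1:] * eps[:-1])   # property 2: ~ 0
variance = eps.var()                     # property 3: ~ sigma**2 = 4

print(round(mean, 2), round(autocov1, 2), round(variance, 1))
```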

By itself, $\epsilon_t$ is a pretty boring process. If $\epsilon_t$ is unusually high, there is no tendency for $\epsilon_{t+1}$ to be unusually high or low, so it does not capture the interesting property of persistence that motivates the study of time series. More realistic models are constructed by taking combinations of $\epsilon_t$.

### 3.2 Basic ARMA models

Most of the time we will study a class of models created by taking linear combinations of white noise. For example,

- AR(1): $x_t = \phi x_{t-1} + \epsilon_t$
- MA(1): $x_t = \epsilon_t + \theta\epsilon_{t-1}$
- AR(p): $x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t$
- MA(q): $x_t = \epsilon_t + \theta_1\epsilon_{t-1} + \ldots + \theta_q\epsilon_{t-q}$
- ARMA(p,q): $x_t = \phi_1 x_{t-1} + \ldots + \epsilon_t + \theta_1\epsilon_{t-1} + \ldots$

As you can see, each case amounts to a recipe by which you can construct a sequence $\{x_t\}$ given a sequence of realizations of the white noise process $\{\epsilon_t\}$, and a starting value for $x$.

All these models are mean zero, and are used to represent deviations of the series about a mean. For example, if a series has mean $\bar{x}$ and follows an AR(1),

$$(x_t - \bar{x}) = \phi(x_{t-1} - \bar{x}) + \epsilon_t,$$

it is equivalent to

$$x_t = (1-\phi)\bar{x} + \phi x_{t-1} + \epsilon_t.$$

Thus, constants absorb means. I will generally only work with the mean zero versions, since adding means and other deterministic trends is easy.

### 3.3 Lag operators and polynomials

It is easiest to represent and manipulate ARMA models in lag operator notation. The lag operator moves the index back one time unit, i.e.

$$Lx_t = x_{t-1}.$$

More formally, $L$ is an operator that takes one whole time series $\{x_t\}$ and produces another; the second time series is the same as the first, but moved backwards one date. From the definition, you can do fancier things:

$$L^2 x_t = LLx_t = Lx_{t-1} = x_{t-2}$$
$$L^j x_t = x_{t-j}$$
$$L^{-j} x_t = x_{t+j}.$$

We can also define lag polynomials, for example

$$a(L)x_t = (a_0 L^0 + a_1 L^1 + a_2 L^2)x_t = a_0 x_t + a_1 x_{t-1} + a_2 x_{t-2}.$$

Using this notation, we can rewrite the ARMA models as

- AR(1): $(1 - \phi L)x_t = \epsilon_t$
- MA(1): $x_t = (1 + \theta L)\epsilon_t$
- AR(p): $(1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p)x_t = \epsilon_t$
- MA(q): $x_t = (1 + \theta_1 L + \ldots + \theta_q L^q)\epsilon_t$

or simply

- AR: $a(L)x_t = \epsilon_t$
- MA: $x_t = b(L)\epsilon_t$
- ARMA: $a(L)x_t = b(L)\epsilon_t$

#### 3.3.1 Manipulating ARMAs with lag operators

ARMA models are not unique. A time series with a given joint distribution of $\{x_0, x_1, \ldots, x_T\}$ can usually be represented with a variety of ARMA models. It is often convenient to work with different representations. For example: 1) the shortest (or only finite-length) polynomial representation is obviously the easiest one to work with in many cases; 2) AR forms are the easiest to estimate, since the OLS assumptions still apply; 3) moving average representations express $x_t$ in terms of a linear combination of independent right-hand variables. For many purposes, such as finding variances and covariances in chapter 4 below, this is the easiest representation to use.
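As a sketch of what a lag polynomial does to a series: $a(L)x_t = a_0 x_t + a_1 x_{t-1} + a_2 x_{t-2}$ is a one-sided moving sum, and can be coded directly. The coefficients and the short series below are illustrative, not from the text.

```python
import numpy as np

# Sketch: apply a(L) = a0 + a1 L + a2 L^2 to a series {x_t}. The result
# at date t is the moving sum a0 x_t + a1 x_{t-1} + a2 x_{t-2}.
a = np.array([1.0, -0.5, 0.25])           # a0, a1, a2 (illustrative)
x = np.array([4.0, 2.0, 1.0, 3.0, 5.0])   # x_0, ..., x_4 (illustrative)

def apply_lag_poly(a, x):
    """Return a(L)x_t for every t at which all needed lags exist."""
    p = len(a) - 1
    return np.array([sum(a[j] * x[t - j] for j in range(p + 1))
                     for t in range(p, len(x))])

print(apply_lag_poly(a, x))   # a(L)x_t for t = 2, 3, 4: 1.0, 3.0, 3.75
```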

#### 3.3.2 AR(1) to MA(∞) by recursive substitution

Start with the AR(1)

$$x_t = \phi x_{t-1} + \epsilon_t.$$

Recursively substituting,

$$x_t = \phi(\phi x_{t-2} + \epsilon_{t-1}) + \epsilon_t = \phi^2 x_{t-2} + \phi\epsilon_{t-1} + \epsilon_t$$

$$x_t = \phi^k x_{t-k} + \phi^{k-1}\epsilon_{t-k+1} + \ldots + \phi^2\epsilon_{t-2} + \phi\epsilon_{t-1} + \epsilon_t$$

Thus, an AR(1) can always be expressed as an ARMA(k, k−1). More importantly, if $|\phi| < 1$, so that $\lim_{k\to\infty}\phi^k x_{t-k} = 0$, then

$$x_t = \sum_{j=0}^{\infty}\phi^j\epsilon_{t-j},$$

so the AR(1) can be expressed as an MA(∞).

#### 3.3.3 AR(1) to MA(∞) with lag operators

These kinds of manipulations are much easier using lag operators. To invert the AR(1), write it as

$$(1 - \phi L)x_t = \epsilon_t.$$

A natural way to "invert" the AR(1) is to write

$$x_t = (1 - \phi L)^{-1}\epsilon_t.$$

What meaning can we attach to $(1 - \phi L)^{-1}$? We have only defined polynomials in $L$ so far. Let's try using the expression

$$(1 - z)^{-1} = 1 + z + z^2 + z^3 + \ldots \quad \text{for } |z| < 1$$

(you can prove this with a Taylor expansion). This expansion, with the hope that $|\phi| < 1$ implies $|\phi L| < 1$ in some sense, suggests

$$x_t = (1 - \phi L)^{-1}\epsilon_t = (1 + \phi L + \phi^2 L^2 + \ldots)\epsilon_t = \sum_{j=0}^{\infty}\phi^j\epsilon_{t-j},$$

which is the same answer we got before. (At this stage, treat the lag operator as a suggestive notation that delivers the right answer. We'll justify that the method works in a little more depth later.)

Note that we can't always perform this inversion. In this case, we required $|\phi| < 1$. Not all ARMA processes are invertible to a representation of $x_t$ in terms of current and past $\epsilon_t$.

#### 3.3.4 AR(p) to MA(∞), MA(q) to AR(∞), factoring lag polynomials, and partial fractions

The AR(1) example is about equally easy to solve using lag operators as using recursive substitution. Lag operators shine with more complicated models. For example, let's invert an AR(2). I leave it as an exercise to try recursive substitution and show how hard it is. To do it with lag operators, start with

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \epsilon_t,$$

$$(1 - \phi_1 L - \phi_2 L^2)x_t = \epsilon_t.$$

I don't know any expansion formulas to apply directly to $(1 - \phi_1 L - \phi_2 L^2)^{-1}$, but we can use the $1/(1-z)$ formula by factoring the lag polynomial. Thus, find $\lambda_1$ and $\lambda_2$ such that

$$(1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L).$$

The required values solve

$$\lambda_1\lambda_2 = -\phi_2$$
$$\lambda_1 + \lambda_2 = \phi_1.$$

(Note $\lambda_1$ and $\lambda_2$ may be equal, and they may be complex.) Now, we need to invert

$$(1 - \lambda_1 L)(1 - \lambda_2 L)x_t = \epsilon_t.$$

We do it by

$$x_t = (1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1}\epsilon_t,$$

$$x_t = \left(\sum_{j=0}^{\infty}\lambda_1^j L^j\right)\left(\sum_{j=0}^{\infty}\lambda_2^j L^j\right)\epsilon_t.$$

Multiplying out the polynomials is tedious, but straightforward:

$$\left(\sum_{j=0}^{\infty}\lambda_1^j L^j\right)\left(\sum_{j=0}^{\infty}\lambda_2^j L^j\right) = (1 + \lambda_1 L + \lambda_1^2 L^2 + \ldots)(1 + \lambda_2 L + \lambda_2^2 L^2 + \ldots)$$
$$= 1 + (\lambda_1 + \lambda_2)L + (\lambda_1^2 + \lambda_1\lambda_2 + \lambda_2^2)L^2 + \ldots = \sum_{j=0}^{\infty}\left(\sum_{k=0}^{j}\lambda_1^k\lambda_2^{j-k}\right)L^j.$$

There is a prettier way to express the MA(∞). Here we use the partial fractions trick. We find $a$ and $b$ so that

$$\frac{1}{(1-\lambda_1 L)(1-\lambda_2 L)} = \frac{a}{(1-\lambda_1 L)} + \frac{b}{(1-\lambda_2 L)} = \frac{a(1-\lambda_2 L) + b(1-\lambda_1 L)}{(1-\lambda_1 L)(1-\lambda_2 L)}.$$

The numerator on the right-hand side must be 1, so

$$a + b = 1$$
$$\lambda_2 a + \lambda_1 b = 0.$$

Solving,

$$b = \frac{\lambda_2}{\lambda_2 - \lambda_1}, \quad a = \frac{\lambda_1}{\lambda_1 - \lambda_2},$$

so

$$\frac{1}{(1-\lambda_1 L)(1-\lambda_2 L)} = \frac{\lambda_1}{(\lambda_1 - \lambda_2)}\frac{1}{(1-\lambda_1 L)} + \frac{\lambda_2}{(\lambda_2 - \lambda_1)}\frac{1}{(1-\lambda_2 L)}.$$

Thus, we can express $x_t$ as

$$x_t = \frac{\lambda_1}{\lambda_1 - \lambda_2}\sum_{j=0}^{\infty}\lambda_1^j\epsilon_{t-j} + \frac{\lambda_2}{\lambda_2 - \lambda_1}\sum_{j=0}^{\infty}\lambda_2^j\epsilon_{t-j}.$$
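This factor-and-invert recipe can be checked numerically. The sketch below (with arbitrary coefficients chosen inside the stationary region, not from the text) builds the MA(∞) weights from the partial-fractions expression, which collapses to $(\lambda_1^{j+1} - \lambda_2^{j+1})/(\lambda_1 - \lambda_2)$, and compares them with the recursion $\psi_j = \phi_1\psi_{j-1} + \phi_2\psi_{j-2}$ ($\psi_0 = 1$, $\psi_1 = \phi_1$) that the MA weights of an AR(2) must satisfy.

```python
import numpy as np

# Sketch: verify the partial-fractions MA(infinity) weights of an AR(2)
# against the direct recursion. phi1, phi2 are illustrative values.
phi1, phi2 = 1.1, -0.3

# lambda1, lambda2 satisfy lam1 + lam2 = phi1 and lam1 * lam2 = -phi2,
# i.e. they are the roots of z**2 - phi1 z - phi2 = 0
lam1, lam2 = np.roots([1.0, -phi1, -phi2])

# partial fractions give psi_j = (lam1**(j+1) - lam2**(j+1)) / (lam1 - lam2)
j = np.arange(10)
psi_pf = (lam1 ** (j + 1) - lam2 ** (j + 1)) / (lam1 - lam2)

# direct recursion on the MA weights
psi = [1.0, phi1]
for _ in range(8):
    psi.append(phi1 * psi[-1] + phi2 * psi[-2])

print(np.allclose(psi_pf, psi))   # True
```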

$$x_t = \sum_{j=0}^{\infty}\left(\frac{\lambda_1}{\lambda_1 - \lambda_2}\lambda_1^j + \frac{\lambda_2}{\lambda_2 - \lambda_1}\lambda_2^j\right)\epsilon_{t-j}.$$

This formula should remind you of the solution to a second-order difference or differential equation: the response of $x$ to a shock is a sum of two exponentials, or (if the $\lambda$ are complex) a mixture of two damped sine and cosine waves.

AR(p)'s work exactly the same way. Computer programs exist to find roots of polynomials of arbitrary order. You can then multiply the lag polynomials together or find the partial fractions expansion. Below, we'll see a way of writing the AR(p) as a vector AR(1) that makes the process even easier.

Note again that not every AR(2) can be inverted. We require that the $\lambda$'s satisfy $|\lambda| < 1$, and one can use their definition to find the implied allowed region of $\phi_1$ and $\phi_2$. Again, until further notice, we will only use invertible ARMA models.

Going from MA to AR(∞) is now obvious. Write the MA as $x_t = b(L)\epsilon_t$, and so it has an AR(∞) representation

$$b(L)^{-1}x_t = \epsilon_t.$$

#### 3.3.5 Summary of allowed lag polynomial manipulations

In summary, one can manipulate lag polynomials pretty much just like regular polynomials, as if $L$ was a number. (We'll study the theory behind them later, and it is based on replacing $L$ by $z$ where $z$ is a complex number.) Among other things:

1) We can multiply them:

$$a(L)b(L) = (a_0 + a_1 L + \ldots)(b_0 + b_1 L + \ldots) = a_0 b_0 + (a_0 b_1 + b_0 a_1)L + \ldots$$

2) They commute:

$$a(L)b(L) = b(L)a(L)$$

(you should prove this to yourself).

3) We can raise them to positive integer powers:

$$a(L)^2 = a(L)a(L)$$

4) We can invert them, by factoring them and inverting each term:

$$a(L) = (1 - \lambda_1 L)(1 - \lambda_2 L)\ldots$$

$$a(L)^{-1} = (1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1}\ldots = \sum_{j=0}^{\infty}\lambda_1^j L^j\sum_{j=0}^{\infty}\lambda_2^j L^j\ldots = c_1(1 - \lambda_1 L)^{-1} + c_2(1 - \lambda_2 L)^{-1} + \ldots$$

We'll consider roots greater than and/or equal to one, fractional powers, and non-polynomial functions of lag operators later.

### 3.4 Multivariate ARMA models

As in the rest of econometrics, multivariate models look just like univariate models, with the letters reinterpreted as vectors and matrices. Thus, consider a multivariate time series

$$x_t = \begin{bmatrix} y_t \\ z_t \end{bmatrix}.$$

The building block is a multivariate white noise process, $\epsilon_t \sim \text{iid } N(0, \Sigma)$, by which we mean

$$\epsilon_t = \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}; \quad E(\epsilon_t) = 0,\; E(\epsilon_t\epsilon_t') = \Sigma = \begin{bmatrix} \sigma_\delta^2 & \sigma_{\delta\nu} \\ \sigma_{\delta\nu} & \sigma_\nu^2 \end{bmatrix},\; E(\epsilon_t\epsilon_{t-j}') = 0.$$

(In the section on orthogonalizing VARs we'll see how to start with an even simpler building block, $\delta$ and $\nu$ uncorrelated, or $\Sigma = I$.)
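A small numerical sketch of this building block (the covariance matrix and sample size are illustrative, not from the text): draw a bivariate white-noise sequence and check the two defining moments, $E(\epsilon_t\epsilon_t') = \Sigma$ and $E(\epsilon_t\epsilon_{t-j}') = 0$.

```python
import numpy as np

# Sketch: simulate eps_t ~ iid N(0, Sigma) and check that sample second
# moments reproduce Sigma while eps_t is uncorrelated with eps_{t-1}.
rng = np.random.default_rng(2)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])       # illustrative covariance matrix
T = 200_000
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=T)

S_hat = eps.T @ eps / T                   # ~ Sigma
C_hat = eps[1:].T @ eps[:-1] / (T - 1)    # ~ zero matrix (j = 1)

print(np.round(S_hat, 2))
print(np.round(C_hat, 2))
```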

The AR(1) is $x_t = \phi x_{t-1} + \epsilon_t$. Reinterpreting the letters as appropriately sized matrices and vectors,

$$\begin{bmatrix} y_t \\ z_t \end{bmatrix} = \begin{bmatrix} \phi_{yy} & \phi_{yz} \\ \phi_{zy} & \phi_{zz} \end{bmatrix}\begin{bmatrix} y_{t-1} \\ z_{t-1} \end{bmatrix} + \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}$$

or

$$y_t = \phi_{yy}y_{t-1} + \phi_{yz}z_{t-1} + \delta_t$$
$$z_t = \phi_{zy}y_{t-1} + \phi_{zz}z_{t-1} + \nu_t.$$

Notice that both lagged $y$ and lagged $z$ appear in each equation. Thus, the vector AR(1) captures cross-variable dynamics. For example, it could capture the fact that when M1 is higher in one quarter, GNP tends to be higher the following quarter, as well as the fact that if GNP is high in one quarter, GNP tends to be higher the following quarter.

We can write the vector AR(1) in lag operator notation,

$$(I - \phi L)x_t = \epsilon_t \quad \text{or} \quad A(L)x_t = \epsilon_t.$$

I'll use capital letters to denote such matrices of lag polynomials. Given this notation, it's easy to see how to write multivariate ARMA models of arbitrary orders:

$$A(L)x_t = B(L)\epsilon_t,$$

where

$$A(L) = I - \Phi_1 L - \Phi_2 L^2 - \ldots; \quad B(L) = I + \Theta_1 L + \Theta_2 L^2 + \ldots; \quad \Phi_j = \begin{bmatrix} \phi_{j,yy} & \phi_{j,yz} \\ \phi_{j,zy} & \phi_{j,zz} \end{bmatrix},$$

and similarly for $\Theta_j$. The way we have written these polynomials, the first term is $I$, just as the scalar lag polynomials of the last section always start with 1. Another way of writing this fact is $A(0) = I$, $B(0) = I$. As with $\Sigma$, there are other equivalent representations that do not have this feature, which we will study when we orthogonalize VARs.

We can invert and manipulate multivariate ARMA models in obvious ways. For example, the MA(∞) representation of the multivariate AR(1) must be

$$(I - \Phi L)x_t = \epsilon_t \Rightarrow x_t = (I - \Phi L)^{-1}\epsilon_t = \sum_{j=0}^{\infty}\Phi^j\epsilon_{t-j}.$$
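A numerical sketch of this MA(∞) representation (the matrix $\Phi$ is an illustrative choice with eigenvalues inside the unit circle): feed the vector AR(1) a one-time unit shock and no others; the resulting path reproduces the weights $\Phi^j$.

```python
import numpy as np

# Sketch: the response of the vector AR(1) x_t = Phi x_{t-1} + eps_t to a
# one-time unit shock in the first variable equals the first column of Phi^j.
Phi = np.array([[0.5, 0.2],
                [0.1, 0.4]])      # illustrative coefficient matrix

x = np.array([1.0, 0.0])          # unit shock at t = 0
responses = [x.copy()]
for _ in range(5):
    x = Phi @ x                   # no further shocks
    responses.append(x.copy())

for j, r in enumerate(responses):
    assert np.allclose(r, np.linalg.matrix_power(Phi, j)[:, 0])
print("response at horizon j equals Phi^j")
```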

More generally, consider inverting an arbitrary AR(p),

$$A(L)x_t = \epsilon_t \Rightarrow x_t = A(L)^{-1}\epsilon_t.$$

We can interpret the matrix inverse as a product of sums as above, or we can interpret it with the matrix inverse formula:

$$\begin{bmatrix} a_{yy}(L) & a_{yz}(L) \\ a_{zy}(L) & a_{zz}(L) \end{bmatrix}\begin{bmatrix} y_t \\ z_t \end{bmatrix} = \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix} \Rightarrow$$

$$\begin{bmatrix} y_t \\ z_t \end{bmatrix} = \big(a_{yy}(L)a_{zz}(L) - a_{zy}(L)a_{yz}(L)\big)^{-1}\begin{bmatrix} a_{zz}(L) & -a_{yz}(L) \\ -a_{zy}(L) & a_{yy}(L) \end{bmatrix}\begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}.$$

We take inverses of scalar lag polynomials as before, by factoring them into roots and inverting each root with the $1/(1-z)$ formula. Though the above are useful ways to think about what inverting a matrix of lag polynomials means, they are not particularly good algorithms for doing it. It is far simpler to simply simulate the response of $x_t$ to shocks. We study this procedure below.

The name "vector autoregression" is usually used in place of "vector ARMA" because it is very uncommon to estimate moving average terms. Autoregressions are easy to estimate since the OLS assumptions still apply, whereas MA terms have to be estimated by maximum likelihood. Since every MA has an AR(∞) representation, pure autoregressions can approximate vector MA processes, if you allow enough lags.

### 3.5 Problems and Tricks

There is an enormous variety of clever tricks for manipulating lag polynomials beyond the factoring and partial fractions discussed above. Here are a few.

1. You can invert finite-order polynomials neatly by matching representations. For example, suppose $a(L)x_t = b(L)\epsilon_t$, and you want to find the moving average representation $x_t = d(L)\epsilon_t$. You could try to crank out $a(L)^{-1}b(L)$ directly, but that's not much fun. Instead, you could find $d(L)$ from

$$b(L)\epsilon_t = a(L)x_t = a(L)d(L)\epsilon_t, \quad \text{hence} \quad b(L) = a(L)d(L),$$

and matching terms in $L^j$ to make sure this works. For example, suppose $a(L) = a_0 + a_1 L$, $b(L) = b_0 + b_1 L + b_2 L^2$. Multiplying out $d(L) = (a_0 + a_1 L)^{-1}(b_0 + b_1 L + b_2 L^2)$ would be a pain. Instead, write

$$b_0 + b_1 L + b_2 L^2 = (a_0 + a_1 L)(d_0 + d_1 L + d_2 L^2 + \ldots).$$

Matching powers of $L$,

$$b_0 = a_0 d_0$$
$$b_1 = a_1 d_0 + a_0 d_1$$
$$b_2 = a_1 d_1 + a_0 d_2$$
$$0 = a_1 d_{j-1} + a_0 d_j, \quad j \geq 3,$$

which you can easily solve recursively for the $d_j$. (Try it.)
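Here is one way to carry out that recursion in code, with illustrative coefficients (not from the text):

```python
# Sketch: solve b(L) = a(L) d(L) term by term for the moving-average
# coefficients d_j, given a(L) = a0 + a1 L and b(L) = b0 + b1 L + b2 L^2.
a = [1.0, -0.7]          # a0, a1 (illustrative)
b = [1.0, 0.3, 0.2]      # b0, b1, b2 (illustrative)

d = []
for j in range(10):
    bj = b[j] if j < len(b) else 0.0     # b_j = 0 for j >= 3
    prev = d[j - 1] if j > 0 else 0.0
    # b_j = a1 d_{j-1} + a0 d_j  =>  d_j = (b_j - a1 d_{j-1}) / a0
    d.append((bj - a[1] * prev) / a[0])

print([round(v, 3) for v in d[:4]])   # [1.0, 1.0, 0.9, 0.63]
```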

## Chapter 4: The autocorrelation and autocovariance functions

### 4.1 Definitions

The autocovariance of a series $x_t$ is defined as

$$\gamma_j = \text{cov}(x_t, x_{t-j}).$$

(Covariance is defined as $\text{cov}(x_t, x_{t-j}) = E\big[(x_t - E(x_t))(x_{t-j} - E(x_{t-j}))\big]$, in case you forgot.) Since we are specializing to ARMA models without constant terms, $E(x_t) = 0$ for all our models. Hence

$$\gamma_j = E(x_t x_{t-j}).$$

Note $\gamma_0 = \text{var}(x_t)$.

A related statistic is the correlation of $x_t$ with $x_{t-j}$, or autocorrelation:

$$\rho_j = \gamma_j/\text{var}(x_t) = \gamma_j/\gamma_0.$$

My notation presumes that the covariance of $x_t$ and $x_{t-j}$ is the same as that of $x_{t-1}$ and $x_{t-j-1}$, etc., i.e. that it depends only on the separation between the two $x$'s, $j$, and not on the absolute date $t$. You can easily verify that invertible ARMA models possess this property. It is also a deeper property called stationarity, that I'll discuss later.
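As a sketch of these definitions in practice ($\theta$, $\sigma$, and the sample size are illustrative): simulate a mean-zero MA(1) and estimate $\gamma_j = E(x_t x_{t-j})$ by sample averages. The estimates line up with the MA(1) values derived in the next section.

```python
import numpy as np

# Sketch: sample autocovariances and autocorrelations of a simulated MA(1).
# With theta = 0.5 and sigma = 1, section 4.2 gives gamma_0 = 1.25,
# gamma_1 = 0.5, and gamma_j = 0 for j >= 2.
rng = np.random.default_rng(3)
theta, sigma, T = 0.5, 1.0, 500_000
eps = rng.normal(0.0, sigma, size=T)
x = eps[1:] + theta * eps[:-1]

def gamma_hat(x, j):
    """Sample estimate of E(x_t x_{t-j}) for a mean-zero series."""
    return np.mean(x[j:] * x[:len(x) - j])

g = [gamma_hat(x, j) for j in range(4)]
rho = [gj / g[0] for gj in g]
print([round(v, 2) for v in g])     # close to [1.25, 0.5, 0.0, 0.0]
print([round(r, 2) for r in rho])   # close to [1.0, 0.4, 0.0, 0.0]
```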

We constructed ARMA models in order to produce interesting models of the joint distribution of a time series $\{x_t\}$. Autocovariances and autocorrelations are one obvious way of characterizing the joint distribution of a time series so produced. The correlation of $x_t$ with $x_{t+1}$ is an obvious measure of how persistent the time series is, or how strong is the tendency for a high observation today to be followed by a high observation tomorrow.

Next, we calculate the autocorrelations of common ARMA processes, both to characterize them, and to gain some familiarity with manipulating time series.

### 4.2 Autocovariance and autocorrelation of ARMA processes

White noise. Since we assumed $\epsilon_t \sim \text{iid } N(0, \sigma^2)$, it's pretty obvious that

$$\gamma_0 = \sigma^2,\quad \gamma_j = 0 \text{ for } j \neq 0;$$
$$\rho_0 = 1,\quad \rho_j = 0 \text{ for } j \neq 0.$$

MA(1). The model is

$$x_t = \epsilon_t + \theta\epsilon_{t-1}.$$

Autocovariance:

$$\gamma_0 = \text{var}(x_t) = \text{var}(\epsilon_t + \theta\epsilon_{t-1}) = \sigma^2 + \theta^2\sigma^2 = (1 + \theta^2)\sigma^2$$
$$\gamma_1 = E(x_t x_{t-1}) = E\big[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_{t-1} + \theta\epsilon_{t-2})\big] = E(\theta\epsilon_{t-1}^2) = \theta\sigma^2$$
$$\gamma_2 = E(x_t x_{t-2}) = E\big[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_{t-2} + \theta\epsilon_{t-3})\big] = 0$$
$$\gamma_3, \ldots = 0$$

Autocorrelation:

$$\rho_1 = \theta/(1 + \theta^2); \quad \rho_2, \ldots = 0$$

MA(2)

Model:

$$x_t = \epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2}$$

Autocovariance:

$$\gamma_0 = E\big[(\epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2})^2\big] = (1 + \theta_1^2 + \theta_2^2)\sigma^2$$
$$\gamma_1 = E\big[(\epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2})(\epsilon_{t-1} + \theta_1\epsilon_{t-2} + \theta_2\epsilon_{t-3})\big] = (\theta_1 + \theta_1\theta_2)\sigma^2$$
$$\gamma_2 = E\big[(\epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2})(\epsilon_{t-2} + \theta_1\epsilon_{t-3} + \theta_2\epsilon_{t-4})\big] = \theta_2\sigma^2$$
$$\gamma_3, \gamma_4, \ldots = 0$$

Autocorrelation:

$$\rho_0 = 1$$
$$\rho_1 = (\theta_1 + \theta_1\theta_2)/(1 + \theta_1^2 + \theta_2^2)$$
$$\rho_2 = \theta_2/(1 + \theta_1^2 + \theta_2^2)$$
$$\rho_3, \rho_4, \ldots = 0$$

MA(q), MA(∞)

By now the pattern should be clear: MA(q) processes have $q$ autocorrelations different from zero. Also, it should be obvious that if

$$x_t = \theta(L)\epsilon_t = \sum_{j=0}^{\infty}\theta_j L^j\epsilon_t,$$

then

$$\gamma_0 = \text{var}(x_t) = \left(\sum_{j=0}^{\infty}\theta_j^2\right)\sigma^2$$

$$\gamma_k = \left(\sum_{j=0}^{\infty}\theta_j\theta_{j+k}\right)\sigma^2$$

and formulas for $\rho_j$ follow immediately.

There is an important lesson in all this. Calculating second moment properties is easy for MA processes, since all the covariance terms ($E(\epsilon_j\epsilon_k)$) drop out.

AR(1)

There are two ways to do this one. First, we might use the MA(∞) representation of an AR(1), and use the MA formulas given above. Thus, the model is

$$(1 - \phi L)x_t = \epsilon_t \Rightarrow x_t = (1 - \phi L)^{-1}\epsilon_t = \sum_{j=0}^{\infty}\phi^j\epsilon_{t-j},$$

so

$$\gamma_0 = \left(\sum_{j=0}^{\infty}\phi^{2j}\right)\sigma^2 = \frac{1}{1 - \phi^2}\sigma^2; \quad \rho_0 = 1$$

$$\gamma_1 = \left(\sum_{j=0}^{\infty}\phi^j\phi^{j+1}\right)\sigma^2 = \phi\left(\sum_{j=0}^{\infty}\phi^{2j}\right)\sigma^2 = \frac{\phi}{1 - \phi^2}\sigma^2; \quad \rho_1 = \phi$$

and continuing this way,

$$\gamma_k = \frac{\phi^k}{1 - \phi^2}\sigma^2; \quad \rho_k = \phi^k.$$

There's another way to find the autocorrelations of an AR(1), which is useful in its own right:

$$\gamma_1 = E(x_t x_{t-1}) = E\big[(\phi x_{t-1} + \epsilon_t)(x_{t-1})\big] = \phi\sigma_x^2; \quad \rho_1 = \phi$$
$$\gamma_2 = E(x_t x_{t-2}) = E\big[(\phi^2 x_{t-2} + \phi\epsilon_{t-1} + \epsilon_t)(x_{t-2})\big] = \phi^2\sigma_x^2; \quad \rho_2 = \phi^2$$
$$\ldots$$
$$\gamma_k = E(x_t x_{t-k}) = E\big[(\phi^k x_{t-k} + \ldots)(x_{t-k})\big] = \phi^k\sigma_x^2; \quad \rho_k = \phi^k$$

AR(p); Yule-Walker equations

This latter method turns out to be the easy way to do AR(p)'s. I'll do an AR(3), then the principle is clear for higher-order ARs:

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \phi_3 x_{t-3} + \epsilon_t.$$

Multiplying both sides by $x_t$, $x_{t-1}$, ..., taking expectations, and dividing by $\gamma_0$, we obtain

$$1 = \phi_1\rho_1 + \phi_2\rho_2 + \phi_3\rho_3 + \sigma^2/\gamma_0$$
$$\rho_1 = \phi_1 + \phi_2\rho_1 + \phi_3\rho_2$$
$$\rho_2 = \phi_1\rho_1 + \phi_2 + \phi_3\rho_1$$
$$\rho_3 = \phi_1\rho_2 + \phi_2\rho_1 + \phi_3$$
$$\rho_k = \phi_1\rho_{k-1} + \phi_2\rho_{k-2} + \phi_3\rho_{k-3}$$

The second, third and fourth equations can be solved for $\rho_1$, $\rho_2$ and $\rho_3$. Then each remaining equation gives $\rho_k$ in terms of $\rho_{k-1}$, $\rho_{k-2}$ and $\rho_{k-3}$, so we can solve for all the $\rho$'s. Notice that the $\rho$'s follow the same difference equation as the original $x$'s. Therefore, past $\rho_3$, the $\rho$'s follow a mixture of damped sines and exponentials. The first equation can then be solved for the variance,

$$\sigma_x^2 = \gamma_0 = \frac{\sigma^2}{1 - (\phi_1\rho_1 + \phi_2\rho_2 + \phi_3\rho_3)}.$$

Summary

The pattern of autocorrelations as a function of lag ($\rho_j$ as a function of $j$) is called the autocorrelation function. The MA(q) process has $q$ (potentially) non-zero autocorrelations, and the rest are zero. The AR(p) process has $p$ (potentially) non-zero autocorrelations with no particular pattern, and then the autocorrelation function dies off as a mixture of sines and exponentials.

One thing we learn from all this is that ARMA models are capable of capturing very complex patterns of temporal correlation. Thus, they are a useful and interesting class of models. In fact, they can capture any valid autocorrelation! If all you care about is autocorrelation (and not, say, third moments, or nonlinear dynamics), then ARMAs are as general as you need to get!

Time series books (e.g. Box and Jenkins ()) also define a partial autocorrelation function. The $j$th partial autocorrelation is related to the coefficient

on $x_{t-j}$ of a regression of $x_t$ on $x_{t-1}, x_{t-2}, \ldots, x_{t-j}$. Thus for an AR(p), the $p+1$th and higher partial autocorrelations are zero. In fact, the partial autocorrelation function behaves in an exactly symmetrical fashion to the autocorrelation function: the partial autocorrelation of an MA(q) is damped sines and exponentials after $q$.

Box and Jenkins () and subsequent books on time series aimed at forecasting advocate inspection of autocorrelation and partial autocorrelation functions to identify the appropriate "parsimonious" AR, MA or ARMA process. I'm not going to spend any time on this, since the procedure is not much followed in economics anymore. With rare exceptions (for example Rosen (), Hansen and Hodrick (1981)), economic theory doesn't say much about the orders of AR or MA terms. Thus, we use short-order ARMAs to approximate a process which probably is "really" of infinite order (though with small coefficients). Rather than spend a lot of time on identification of the precise ARMA process, we tend to throw in a few extra lags just to be sure and leave it at that.

### 4.3 A fundamental representation

Autocovariances also turn out to be useful because they are the first of three fundamental representations for a time series. ARMA processes with normal iid errors are linear combinations of normals, so the resulting $\{x_t\}$ are normally distributed. Thus the joint distribution of an ARMA time series is fully characterized by their mean (0) and covariances $E(x_t x_{t-j})$. (Using the usual formula for a multivariate normal, we can write the joint probability density of $\{x_t\}$ using only the covariances.) In turn, all the statistical properties of a series are described by its joint distribution. Thus, once we know the autocovariances we know everything there is to know about the process. Put another way:

If two processes have the same autocovariance function, they are the same process.
This was not true of ARMA representations: an AR(1) is the same as a (particular) MA(∞), etc.
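A small sketch of this contrast, with illustrative numbers: an MA(1) with parameters $(\theta, \sigma^2)$ and one with $(1/\theta, \theta^2\sigma^2)$ are different ARMA parameterizations, yet they imply identical autocovariances, so by the argument above they are the same process.

```python
# Sketch: two MA(1) parameterizations with the same autocovariance
# function. theta and sigma2 are illustrative values.
theta, sigma2 = 0.5, 1.0

def ma1_autocov(theta, sigma2):
    """(gamma_0, gamma_1) of x_t = eps_t + theta*eps_{t-1}, var(eps) = sigma2."""
    return ((1 + theta**2) * sigma2, theta * sigma2)

pair_a = ma1_autocov(theta, sigma2)
pair_b = ma1_autocov(1 / theta, theta**2 * sigma2)
print(pair_a, pair_b)   # both (1.25, 0.5)
```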

This is a useful fact. Here's an example. Suppose $x_t$ is composed of two unobserved components as follows:

$$y_t = \nu_t + \alpha\nu_{t-1}; \quad z_t = \delta_t; \quad x_t = y_t + z_t;$$

$\nu_t$, $\delta_t$ iid, independent of each other. What ARMA process does $x_t$ follow? One way to solve this problem is to find the autocovariance function of $x_t$, then find an ARMA process with the same autocovariance function. Since the autocovariance function is fundamental, this must be an ARMA representation for $x_t$.

$$\text{var}(x_t) = \text{var}(y_t) + \text{var}(z_t) = (1 + \alpha^2)\sigma_\nu^2 + \sigma_\delta^2$$
$$E(x_t x_{t-1}) = E\big[(\nu_t + \delta_t + \alpha\nu_{t-1})(\nu_{t-1} + \delta_{t-1} + \alpha\nu_{t-2})\big] = \alpha\sigma_\nu^2$$
$$E(x_t x_{t-k}) = 0,\; k > 1.$$

$\gamma_0$ and $\gamma_1$ nonzero and the rest zero is the autocovariance function of an MA(1), so we must be able to represent $x_t$ as an MA(1). Using the formulas above for the autocovariance of an MA(1),

$$\gamma_0 = (1 + \theta^2)\sigma^2 = (1 + \alpha^2)\sigma_\nu^2 + \sigma_\delta^2$$
$$\gamma_1 = \theta\sigma^2 = \alpha\sigma_\nu^2.$$

These are two equations in two unknowns, which we can solve for $\theta$ and $\sigma^2$, the two parameters of the MA(1) representation $x_t = (1 + \theta L)\epsilon_t$.

Matching fundamental representations is one of the most common tricks in manipulating time series, and we'll see it again and again.

### 4.4 Admissible autocorrelation functions

Since the autocorrelation function is fundamental, it might be nice to generate time series processes by picking autocorrelations, rather than specifying (non-fundamental) ARMA parameters. But not every collection of numbers is the autocorrelation of a process. In this section, we answer the question,

### 1 Short Introduction to Time Series

ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The

### Chapter 1. Vector autoregressions. 1.1 VARs and the identi cation problem

Chapter Vector autoregressions We begin by taking a look at the data of macroeconomics. A way to summarize the dynamics of macroeconomic data is to make use of vector autoregressions. VAR models have become

### Univariate and Multivariate Methods PEARSON. Addison Wesley

Time Series Analysis Univariate and Multivariate Methods SECOND EDITION William W. S. Wei Department of Statistics The Fox School of Business and Management Temple University PEARSON Addison Wesley Boston

### Chapter 6. Econometrics. 6.1 Introduction. 6.2 Univariate techniques Transforming data

Chapter 6 Econometrics 6.1 Introduction We re going to use a few tools to characterize the time series properties of macro variables. Today, we will take a relatively atheoretical approach to this task,

### SYSTEMS OF REGRESSION EQUATIONS

SYSTEMS OF REGRESSION EQUATIONS 1. MULTIPLE EQUATIONS y nt = x nt n + u nt, n = 1,...,N, t = 1,...,T, x nt is 1 k, and n is k 1. This is a version of the standard regression model where the observations

### Time Series Analysis 1. Lecture 8: Time Series Analysis. Time Series Analysis MIT 18.S096. Dr. Kempthorne. Fall 2013 MIT 18.S096

Lecture 8: Time Series Analysis MIT 18.S096 Dr. Kempthorne Fall 2013 MIT 18.S096 Time Series Analysis 1 Outline Time Series Analysis 1 Time Series Analysis MIT 18.S096 Time Series Analysis 2 A stochastic

### State Space Time Series Analysis

State Space Time Series Analysis p. 1 State Space Time Series Analysis Siem Jan Koopman http://staff.feweb.vu.nl/koopman Department of Econometrics VU University Amsterdam Tinbergen Institute 2011 State

### y t by left multiplication with 1 (L) as y t = 1 (L) t =ª(L) t 2.5 Variance decomposition and innovation accounting Consider the VAR(p) model where

. Variance decomposition and innovation accounting Consider the VAR(p) model where (L)y t = t, (L) =I m L L p L p is the lag polynomial of order p with m m coe±cient matrices i, i =,...p. Provided that

### Chapter 4: Vector Autoregressive Models

Chapter 4: Vector Autoregressive Models 1 Contents: Lehrstuhl für Department Empirische of Wirtschaftsforschung Empirical Research and und Econometrics Ökonometrie IV.1 Vector Autoregressive Models (VAR)...

### Introduction to ARMA Models

Statistics 910, #8 1 Overview Introduction to ARMA Models 1. Modeling paradigm 2. Review stationary linear processes 3. ARMA processes 4. Stationarity of ARMA processes 5. Identifiability of ARMA processes

### Lecture 2: ARMA(p,q) models (part 3)

Lecture 2: ARMA(p,q) models (part 3) Florian Pelgrin University of Lausanne, École des HEC Department of mathematics (IMEA-Nice) Sept. 2011 - Jan. 2012 Florian Pelgrin (HEC) Univariate time series Sept.

### Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 2004 A. Introduction and assumptions The classical normal linear regression model can be written

### Univariate Time Series Analysis; ARIMA Models

Econometrics 2 Spring 2005 Univariate Time Series Analysis; ARIMA Models Heino Bohn Nielsen 1 of 4 Outline of the Lecture (1) Introduction to univariate time series analysis. (2) Stationarity. (3) Characterizing

### Sales forecasting # 2

Sales forecasting # 2 Arthur Charpentier arthur.charpentier@univ-rennes1.fr 1 Agenda Qualitative and quantitative methods, a very general introduction Series decomposition Short versus long term forecasting

### Analysis of Financial Time Series with EViews

Analysis of Financial Time Series with EViews Enrico Foscolo Contents 1 Asset Returns 2 1.1 Empirical Properties of Returns................. 2 2 Heteroskedasticity and Autocorrelation 4 2.1 Testing for

### Time Series Analysis

Time Series Analysis Forecasting with ARIMA models Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos (UC3M-UPM)

### Financial Time Series Analysis: Part II

Department of Mathematics and Statistics, University of Vaasa, Finland January 29 February 13, 2015 Feb 14, 2015 1 Univariate linear stochastic models: further topics Unobserved component model Signal

### Chapter 5. Analysis of Multiple Time Series. 5.1 Vector Autoregressions

Chapter 5 Analysis of Multiple Time Series Note: The primary references for these notes are chapters 5 and 6 in Enders (2004). An alternative, but more technical treatment can be found in chapters 10-11

### Time Series Analysis in Economics. Klaus Neusser

Time Series Analysis in Economics Klaus Neusser May 26, 2015 Contents I Univariate Time Series Analysis 3 1 Introduction 1 1.1 Some examples.......................... 2 1.2 Formal definitions.........................

### Time Series Analysis

Time Series Analysis Autoregressive, MA and ARMA processes Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos

### Univariate Time Series Analysis; ARIMA Models

Econometrics 2 Fall 2005 Univariate Time Series Analysis; ARIMA Models Heino Bohn Nielsen 1 of 4 Univariate Time Series Analysis We consider a single time series, y_1, y_2,..., y_T. We want to construct simple

### A Multiplicative Seasonal Box-Jenkins Model to Nigerian Stock Prices

A Multiplicative Seasonal Box-Jenkins Model to Nigerian Stock Prices Ette Harrison Etuk Department of Mathematics/Computer Science, Rivers State University of Science and Technology, Nigeria Email: ettetuk@yahoo.com

### Some useful concepts in univariate time series analysis

Some useful concepts in univariate time series analysis Autoregressive moving average models Autocorrelation functions Model Estimation Diagnostic measure Model selection Forecasting Assumptions: 1. Non-seasonal

### 4.3 Moving Average Process MA(q)

66 CHAPTER 4. STATIONARY TS MODELS 4.3 Moving Average Process MA(q) Definition 4.5. {X_t} is a moving-average process of order q if X_t = Z_t + θ_1 Z_{t-1} + ... + θ_q Z_{t-q}, (4.9) where {Z_t} is white noise and θ_1,...,θ_q
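The autocovariance implied by this definition can be computed directly: γ(h) = σ² Σ_j θ_j θ_{j+h} with θ_0 = 1, and γ(h) = 0 for h > q. A hedged sketch with illustrative MA(2) coefficients (not from the excerpt):

```python
import numpy as np

def ma_autocov(thetas, sigma2, h):
    """Theoretical autocovariance of an MA(q) at lag h, with theta_0 = 1."""
    t = np.r_[1.0, thetas]                     # [theta_0, theta_1, ..., theta_q]
    if h >= len(t):
        return 0.0                             # cuts off beyond lag q
    return sigma2 * float(np.sum(t[:len(t) - h] * t[h:]))

thetas = [0.4, 0.3]                            # illustrative MA(2)
print(ma_autocov(thetas, 1.0, 0))              # 1 + 0.4^2 + 0.3^2 = 1.25
print(ma_autocov(thetas, 1.0, 3))              # 0.0: MA(q) memory stops at lag q
```

The cutoff of the autocovariance at lag q is what identifies an MA(q) from its correlogram.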

### Analysis and Computation for Finance Time Series - An Introduction

ECMM703 Analysis and Computation for Finance Time Series - An Introduction Alejandra González Harrison 161 Email: mag208@exeter.ac.uk Time Series - An Introduction A time series is a sequence of observations

### The VAR models discussed so far are appropriate for modeling I(0) data, like asset returns or growth rates of macroeconomic time series.

Cointegration The VAR models discussed so far are appropriate for modeling I(0) data, like asset returns or growth rates of macroeconomic time series. Economic theory, however, often implies equilibrium

### 1 Teaching notes on GMM 1.

Bent E. Sørensen January 23, 2007 1 Teaching notes on GMM 1. Generalized Method of Moments (GMM) estimation is one of two developments in econometrics in the 1980s that revolutionized empirical work in

### Linear Least Squares

Linear Least Squares Suppose we are given a set of data points {(x i,f i )}, i = 1,...,n. These could be measurements from an experiment or obtained simply by evaluating a function at some points. One

### Lecture 4: Seasonal Time Series, Trend Analysis & Component Model Bus 41910, Time Series Analysis, Mr. R. Tsay

Lecture 4: Seasonal Time Series, Trend Analysis & Component Model Bus 41910, Time Series Analysis, Mr. R. Tsay Business cycle plays an important role in economics. In time series analysis, business cycle

### Maximum Likelihood Estimation of an ARMA(p,q) Model

Maximum Likelihood Estimation of an ARMA(p,q) Model Constantino Hevia The World Bank. DECRG. October 8 This note describes the Matlab function arma_mle.m that computes the maximum likelihood estimates

### OLS in Matrix Form. Let y be an n 1 vector of observations on the dependent variable.

OLS in Matrix Form 1 The True Model Let X be an n × k matrix where we have observations on k independent variables for n observations. Since our model will usually contain a constant term, one of the columns
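The estimator this excerpt builds up, β̂ = (X′X)⁻¹X′y, can be sketched on a tiny made-up dataset. The data are noiseless, so OLS recovers the coefficients exactly:

```python
import numpy as np

# Design matrix with a constant column, as the note describes (values illustrative).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true                              # no error term, so the fit is exact

# beta_hat = (X'X)^{-1} X'y, computed via solve rather than an explicit inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)                                # [1. 2.]
```

Solving the normal equations with `np.linalg.solve` is numerically preferable to forming `(X'X)^{-1}` explicitly.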

### Estimating an ARMA Process

Statistics 910, #12 1 Overview Estimating an ARMA Process 1. Main ideas 2. Fitting autoregressions 3. Fitting with moving average components 4. Standard errors 5. Examples 6. Appendix: Simple estimators

### Time Series Analysis

JUNE 2012 Time Series Analysis CONTENT A time series is a chronological sequence of observations on a particular variable. Usually the observations are taken at regular intervals (days, months, years),

### Time Series in Mathematical Finance

Instituto Superior Técnico (IST, Portugal) and CEMAT cnunes@math.ist.utl.pt European Summer School in Industrial Mathematics Universidad Carlos III de Madrid July 2013 Outline The objective of this short

### Software Review: ITSM 2000 Professional Version 6.0.

Lee, J. & Strazicich, M.C. (2002). Software Review: ITSM 2000 Professional Version 6.0. International Journal of Forecasting, 18(3): 455-459 (June 2002). Published by Elsevier (ISSN: 0169-2070). http://0-

### Random Vectors and the Variance Covariance Matrix

Random Vectors and the Variance Covariance Matrix Definition 1. A random vector X is a vector (X_1, X_2,..., X_p) of jointly distributed random variables. As is customary in linear algebra, we will write
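One workhorse fact about variance-covariance matrices is that a linear transform AX of a random vector with covariance Σ has covariance AΣA′. A small numerical check; the matrices below are made up for illustration:

```python
import numpy as np

# Illustrative covariance matrix of X and a linear transform A (not from the text).
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# Cov(AX) = A Sigma A'
cov_AX = A @ Sigma @ A.T
print(cov_AX)   # still symmetric, as any covariance matrix must be
```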

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 1997 Department of Mathematics and Statistics University at Albany William F. Hammond Table of Contents Introduction

### TIME SERIES ANALYSIS

TIME SERIES ANALYSIS L.M. BHAR AND V.K. SHARMA Indian Agricultural Statistics Research Institute Library Avenue, New Delhi-110 012 lmb@iasri.res.in 1. Introduction Time series (TS) data refers to observations

### Time Series Analysis

Time Series Analysis Time series and stochastic processes Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos

### Lecture 1: Univariate Time Series B41910: Autumn Quarter, 2008, by Mr. Ruey S. Tsay

Lecture 1: Univariate Time Series B41910: Autumn Quarter, 2008, by Mr. Ruey S. Tsay 1 Some Basic Concepts 1. Time Series: A sequence of random variables measuring a certain quantity of interest over time.

### REVIEW EXERCISES DAVID J LOWRY

REVIEW EXERCISES DAVID J LOWRY Contents 1. Introduction 1 2. Elementary Functions 1 2.1. Factoring and Solving Quadratics 1 2.2. Polynomial Inequalities 3 2.3. Rational Functions 4 2.4. Exponentials and

### Introduction to Time Series Analysis. Lecture 6.

Introduction to Time Series Analysis. Lecture 6. Peter Bartlett www.stat.berkeley.edu/ bartlett/courses/153-fall2010 Last lecture: 1. Causality 2. Invertibility 3. AR(p) models 4. ARMA(p,q) models 1 Introduction
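Causality of an AR(p), one of the topics listed in this lecture, comes down to the roots of the lag polynomial φ(z) = 1 - φ_1 z - ... - φ_p z^p lying outside the unit circle. A sketch with illustrative coefficients:

```python
import numpy as np

def is_stationary(phis):
    """Stationary (causal) AR iff all roots of 1 - phi_1 z - ... - phi_p z^p
    lie strictly outside the unit circle."""
    coeffs = np.r_[1.0, -np.asarray(phis, dtype=float)]   # low-to-high degree
    roots = np.polynomial.polynomial.polyroots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_stationary([0.5, 0.3]))   # True: both roots outside the unit circle
print(is_stationary([1.1]))        # False: root 1/1.1 lies inside
```

The same root condition applied to the MA polynomial θ(z) governs invertibility.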

### 2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x - 5y + 2z + w = 3. The key thing is that we don't multiply the variables

### Matrices and Polynomials

APPENDIX 9 Matrices and Polynomials The Multiplication of Polynomials Let α(z) = α_0 + α_1 z + α_2 z^2 + ... + α_p z^p and y(z) = y_0 + y_1 z + y_2 z^2 + ... + y_n z^n be two polynomials of degrees p and n respectively. Then,
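The multiplication of polynomials described here is a convolution of their coefficient sequences, which is easy to check numerically (the polynomials below are made up for illustration):

```python
import numpy as np

# Coefficients stored low degree first, as in alpha(z) = alpha_0 + alpha_1 z + ...
alpha = [1.0, 2.0]          # 1 + 2z
y = [1.0, 1.0, 3.0]         # 1 + z + 3z^2

# The product's coefficients are the convolution of the two coefficient lists.
product = np.convolve(alpha, y)
print(product)              # (1 + 2z)(1 + z + 3z^2) = 1 + 3z + 5z^2 + 6z^3
```

This is exactly the operation performed when two lag polynomials in L are multiplied.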

### Econometrics Regression Analysis with Time Series Data

Econometrics Regression Analysis with Time Series Data João Valle e Azevedo Faculdade de Economia Universidade Nova de Lisboa Spring Semester João Valle e Azevedo (FEUNL) Econometrics Lisbon, May 2011

### 15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

### Introduction to time series analysis

Introduction to time series analysis Margherita Gerolimetto November 3, 2010 1 What is a time series? A time series is a collection of observations ordered following a parameter that for us is time. Examples

### Notes on Factoring. MA 206 Kurt Bryan

The General Approach Notes on Factoring MA 206 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

### EC327: Advanced Econometrics, Spring 2007

EC327: Advanced Econometrics, Spring 2007 Wooldridge, Introductory Econometrics (3rd ed, 2006) Appendix D: Summary of matrix algebra Basic definitions A matrix is a rectangular array of numbers, with m

### Trend and Seasonal Components

Chapter 2 Trend and Seasonal Components If the plot of a TS reveals an increase of the seasonal and noise fluctuations with the level of the process then some transformation may be necessary before doing

### Luciano Rispoli Department of Economics, Mathematics and Statistics Birkbeck College (University of London)

Luciano Rispoli Department of Economics, Mathematics and Statistics Birkbeck College (University of London) 1 Forecasting: definition Forecasting is the process of making statements about events whose

### Time Series - ARIMA Models. Instructor: G. William Schwert

APS 425 Fall 2005 Time Series: ARIMA Models Instructor: G. William Schwert 585-275-247 schwert@schwert.ssb.rochester.edu Topics Typical time series plot Pattern recognition in auto and partial autocorrelations

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1

### Time Series Analysis

Time Series Analysis Identifying possible ARIMA models Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos

### Time Series Analysis III

Lecture 12: Time Series Analysis III MIT 18.S096 Dr. Kempthorne Fall 2013 MIT 18.S096 Time Series Analysis III 1 Outline Time Series Analysis III 1 Time Series Analysis III MIT 18.S096 Time Series Analysis

### Additional Topics in Linear Algebra Supplementary Material for Math 540. Joseph H. Silverman

Additional Topics in Linear Algebra Supplementary Material for Math 540 Joseph H. Silverman E-mail address: jhs@math.brown.edu Mathematics Department, Box 1917 Brown University, Providence, RI 02912 USA Contents

### Probability and Random Variables. Generation of random variables (r.v.)

Probability and Random Variables Method for generating random variables with a specified probability distribution function. Gaussian And Markov Processes Characterization of Stationary Random Process Linearly
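The inverse-CDF method is one standard way to generate random variables with a specified distribution function, as this outline mentions: if U ~ Uniform(0,1), then F⁻¹(U) has distribution F. Sketched here for the exponential case, where F⁻¹(u) = -ln(1-u)/λ; the rate and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_via_inverse_cdf(lam, size, rng):
    """Draw Exponential(lam) variates by applying the inverse CDF to uniforms."""
    u = rng.uniform(size=size)
    return -np.log1p(-u) / lam          # log1p(-u) = log(1 - u), numerically stable

draws = exponential_via_inverse_cdf(2.0, 100_000, rng)
print(draws.mean())                      # close to the theoretical mean 1/lam = 0.5
```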

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Joint Probability Distributions and Random Samples (Devore Chapter Five)

Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01 Probability and Statistics for Engineers Winter 2010-2011 Contents 1 Joint Probability Distributions 1 1.1 Two Discrete

### Notes for STA 437/1005 Methods for Multivariate Data

Notes for STA 437/1005 Methods for Multivariate Data Radford M. Neal, 26 November 2010 Random Vectors Notation: Let X be a random vector with p elements, so that X = [X_1,..., X_p]′, where ′ denotes transpose.

### LOGNORMAL MODEL FOR STOCK PRICES

LOGNORMAL MODEL FOR STOCK PRICES MICHAEL J. SHARPE MATHEMATICS DEPARTMENT, UCSD 1. INTRODUCTION What follows is a simple but important model that will be the basis for a later study of stock prices as

### Chapter 5: Bivariate Cointegration Analysis

Chapter 5: Bivariate Cointegration Analysis 1 Contents: Lehrstuhl für Empirische Wirtschaftsforschung und Ökonometrie (Department of Empirical Research and Econometrics) V. Bivariate Cointegration Analysis...

### INDIRECT INFERENCE (prepared for: The New Palgrave Dictionary of Economics, Second Edition)

INDIRECT INFERENCE (prepared for: The New Palgrave Dictionary of Economics, Second Edition) Abstract Indirect inference is a simulation-based method for estimating the parameters of economic models. Its

### Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

### Forecasting model of electricity demand in the Nordic countries. Tone Pedersen

Forecasting model of electricity demand in the Nordic countries Tone Pedersen 3/19/2014 Abstract A model is implemented to describe the electricity demand on an hourly basis for the Nordic countries.

### Matrices provide a compact notation for expressing systems of equations or variables. For instance, a linear function might be written as: b k

MATRIX ALGEBRA FOR STATISTICS: PART 1 Matrices provide a compact notation for expressing systems of equations or variables. For instance, a linear function might be written as: y = x_1 b_1 + x_2 b_2 + ... + x_k b_k

### Time Series Analysis

Time Series Analysis Lecture Notes for 475.726 Ross Ihaka Statistics Department University of Auckland April 14, 2005 ii Contents 1 Introduction 1 1.1 Time Series.............................. 1 1.2 Stationarity

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS 1 Lines in two-dimensional space The equation (1) 2x - y = 3 describes a line in two-dimensional space. The coefficients of x and y in the equation

### Filtering Data using Frequency Domain Filters

Filtering Data using Frequency Domain Filters Wouter J. Den Haan London School of Economics c Wouter J. Den Haan August 27, 2014 Overview Intro lag operator Why frequency domain? Fourier transform Data

### Impulse Response Functions

Impulse Response Functions Wouter J. Den Haan University of Amsterdam April 28, 2011 General definition IRFs The IRF gives the j-th period response when the system is shocked by a one-standard-deviation
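For a univariate AR(1), the general definition quoted here reduces to a geometric decay: the j-period response to a one-standard-deviation shock is σφʲ. A sketch with illustrative parameters (not from the source):

```python
import numpy as np

def irf_ar1(phi, sigma, horizon):
    """IRF of y_t = phi*y_{t-1} + eps_t to a one-std-dev shock: sigma * phi**j."""
    return sigma * phi ** np.arange(horizon + 1)

irf = irf_ar1(0.8, 1.0, 5)
print(irf)   # geometric decay: 1, 0.8, 0.64, ...
```

For a VAR the same logic applies with powers of the companion matrix in place of φʲ.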

### C: LEVEL 800 {MASTERS OF ECONOMICS (ECONOMETRICS)}

C: LEVEL 800 {MASTERS OF ECONOMICS (ECONOMETRICS)} 1. EES 800: Econometrics I Simple linear regression and correlation analysis. Specification and estimation of a regression model. Interpretation of regression

### An Introduction to Matrix Algebra and Principal Components Analysis

An Introduction to Matrix Algebra and Principal Components Analysis Multivariate Methods in Education ERSH 8350 Lecture #2 August 24, 2011 ERSH 8350: Lecture 2 Today's Class An introduction to matrix algebra

### Normalization and Mixed Degrees of Integration in Cointegrated Time Series Systems

Normalization and Mixed Degrees of Integration in Cointegrated Time Series Systems Robert J. Rossana Department of Economics, 04 F/AB, Wayne State University, Detroit MI 480 E-Mail: r.j.rossana@wayne.edu

### 3. Regression & Exponential Smoothing

3. Regression & Exponential Smoothing 3.1 Forecasting a Single Time Series Two main approaches are traditionally used to model a single time series z_1, z_2,..., z_n 1. Models the observation z_t as a
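Exponential smoothing, the second approach named in this title, forecasts with a weighted average of the newest observation and the previous smoothed value. A minimal sketch; the smoothing weight and data are made up:

```python
def exponential_smoothing(z, alpha):
    """Simple exponential smoothing: s_t = alpha*z_t + (1 - alpha)*s_{t-1}."""
    s = [z[0]]                          # initialize at the first observation
    for obs in z[1:]:
        s.append(alpha * obs + (1 - alpha) * s[-1])
    return s

smoothed = exponential_smoothing([10.0, 12.0, 11.0, 13.0], 0.5)
print(smoothed)   # [10.0, 11.0, 11.0, 12.0]
```

Larger α tracks the series more closely; smaller α smooths more aggressively.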

### Basics Inversion and related concepts Random vectors Matrix calculus. Matrix algebra. Patrick Breheny. January 20

Matrix algebra January 20 Introduction Basics The mathematics of multiple regression revolves around ordering and keeping track of large arrays of numbers and solving systems of equations The mathematical

### CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

### Discrete Time Series Analysis with ARMA Models

Discrete Time Series Analysis with ARMA Models Veronica Sitsofe Ahiati (veronica@aims.ac.za) African Institute for Mathematical Sciences (AIMS) Supervised by Tina Marquardt Munich University of Technology,

### ITSM-R Reference Manual

ITSM-R Reference Manual George Weigt June 5, 2015 1 Contents 1 Introduction 3 1.1 Time series analysis in a nutshell............................... 3 1.2 White Noise Variance.....................................

### ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely to keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.

### Graphical Tools for Exploring and Analyzing Data From ARIMA Time Series Models

Graphical Tools for Exploring and Analyzing Data From ARIMA Time Series Models William Q. Meeker Department of Statistics Iowa State University Ames, IA 50011 January 13, 2001 Abstract S-plus is a highly

### 2 Integrating Both Sides

2 Integrating Both Sides So far, the only general method we have for solving differential equations involves equations of the form y′ = f(x), where f(x) is any function of x. The solution to such an equation

### Tangent and normal lines to conics

4.B. Tangent and normal lines to conics Apollonius' work on conics includes a study of tangent and normal lines to these curves. The purpose of this document is to relate his approaches to the modern viewpoints

### U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory

### 1 Gaussian Elimination

Contents 1 Gaussian Elimination 1.1 Elementary Row Operations 1.2 Some matrices whose associated system of equations are easy to solve 1.3 Gaussian Elimination 1.4 Gauss-Jordan reduction and the Reduced

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS: the m × n coefficient array a_ij with right-hand sides b_1,..., b_m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a_11 x_1 + a_12 x_2 +

### Amath 546/Econ 589 Multivariate GARCH Models

Amath 546/Econ 589 Multivariate GARCH Models Eric Zivot May 15, 2013 Lecture Outline Exponentially weighted covariance estimation Multivariate GARCH models Prediction from multivariate GARCH models Reading

### In this paper we study how the time-series structure of the demand process affects the value of information

MANAGEMENT SCIENCE Vol. 51, No. 6, June 2005, pp. 961-969, issn 0025-1909, eissn 1526-5501, 05 5106 0961, informs doi 10.1287/mnsc.1050.0385, © 2005 INFORMS Information Sharing in a Supply Chain Under ARMA Demand Vishal Gaur

### ADVANCED FORECASTING MODELS USING SAS SOFTWARE

ADVANCED FORECASTING MODELS USING SAS SOFTWARE Girish Kumar Jha IARI, Pusa, New Delhi 110 012 gjha_eco@iari.res.in 1. Transfer Function Model Univariate ARIMA models are useful for analysis and forecasting

### Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they
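The role eigenvalues and eigenvectors play in principal components can be sketched with a toy covariance matrix; the values below are illustrative, not from the source:

```python
import numpy as np

# Illustrative 2x2 covariance matrix; eigh is appropriate for symmetric matrices
# and returns eigenvalues in ascending order.
Sigma = np.array([[3.0, 1.0],
                  [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eigh(Sigma)
print(eigvals)                 # [2. 4.]

# First principal component: eigenvector belonging to the largest eigenvalue,
# i.e. the direction of maximal variance.
pc1 = eigvecs[:, -1]
```

The proportion of variance explained by the first component is its eigenvalue over the trace, here 4/6.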

### Revised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m)

Chapter 23 Squares Modulo p Revised Version of Chapter 23 We learned long ago how to solve linear congruences ax c (mod m) (see Chapter 8). It's now time to take the plunge and move on to quadratic equations.

### Lecture Notes on Polynomials

Lecture Notes on Polynomials Arne Jensen Department of Mathematical Sciences Aalborg University © 2008 Introduction These lecture notes give a very short introduction to polynomials with real and complex

### CS3220 Lecture Notes: QR factorization and orthogonal transformations

CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I'll talk about orthogonal matrices and their properties, discuss

### Traffic Safety Facts. Research Note. Time Series Analysis and Forecast of Crash Fatalities during Six Holiday Periods Cejun Liu* and Chou-Lin Chen

Traffic Safety Facts Research Note March 2004 DOT HS 809 718 Time Series Analysis and Forecast of Crash Fatalities during Six Holiday Periods Cejun Liu* and Chou-Lin Chen Summary This research note uses

### Taylor Polynomials and Taylor Series Math 126

Taylor Polynomials and Taylor Series Math 126 In many problems in science and engineering we have a function f(x) which is too complicated to answer the questions we'd like to ask. In this chapter, we will

### UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure

UNIT 2 MATRICES - I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress

### Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8 Matrices II: inverses We have learnt how to add, subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we
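A quick numerical illustration of why division cannot always be defined: an inverse exists only when the determinant is nonzero. The matrices below are made up for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # det = 1, so A is invertible
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # det = 0: rows are proportional, no inverse

A_inv = np.linalg.inv(A)
print(A @ A_inv)                    # the 2x2 identity, as A @ inv(A) = I
print(np.linalg.det(B))             # 0.0, so np.linalg.inv(B) would raise LinAlgError
```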