Lecture 2: ARMA Models


1 ARMA Process

As we have remarked, dependence is very common in time series observations. To model this dependence, we start with univariate ARMA models. To motivate the model, we can track two lines of thinking. First, for a series x_t, we can model the level of its current observation as depending on the level of its lagged observations. For example, if we observe a high GDP realization this quarter, we would expect that GDP in the next few quarters is high as well. This line of thinking can be represented by an AR model. The AR(1) (autoregressive of order one) model can be written as

    x_t = φ x_{t-1} + ε_t,

where ε_t ~ WN(0, σ_ε^2), an assumption we maintain throughout this lecture. Similarly, the AR(p) (autoregressive of order p) model can be written as

    x_t = φ_1 x_{t-1} + φ_2 x_{t-2} + ... + φ_p x_{t-p} + ε_t.

In a second line of thinking, the observation of a random variable at time t is affected not only by the shock at time t, but also by the shocks that have taken place before time t. For example, if we observe a negative shock to the economy, say a catastrophic earthquake, we would expect this negative effect to influence the economy not only at the time it takes place, but also in the near future. This kind of thinking can be represented by an MA model. The MA(1) (moving average of order one) and MA(q) (moving average of order q) models can be written as

    x_t = ε_t + θ ε_{t-1}

and

    x_t = ε_t + θ_1 ε_{t-1} + ... + θ_q ε_{t-q}.

If we combine these two models, we get the general ARMA(p, q) model,

    x_t = φ_1 x_{t-1} + φ_2 x_{t-2} + ... + φ_p x_{t-p} + ε_t + θ_1 ε_{t-1} + ... + θ_q ε_{t-q}.

The ARMA model provides one of the basic tools in time series modeling. In the next few sections, we discuss how to draw inferences using a univariate ARMA model. (Copyright by Ling Hu.)

2 Lag Operators

Lag operators enable us to present an ARMA model in a much more concise way. Applying the lag operator (denoted L) once moves the index back one time unit, and applying it k times moves the index back k units:

    L x_t = x_{t-1}
    L^2 x_t = x_{t-2}
    ...
    L^k x_t = x_{t-k}

The lag operator is distributive over the addition operator, i.e.

    L(x_t + y_t) = x_{t-1} + y_{t-1}.

Using lag operators, we can rewrite the ARMA models as:

    AR(1):  (1 - φL) x_t = ε_t
    AR(p):  (1 - φ_1 L - φ_2 L^2 - ... - φ_p L^p) x_t = ε_t
    MA(1):  x_t = (1 + θL) ε_t
    MA(q):  x_t = (1 + θ_1 L + θ_2 L^2 + ... + θ_q L^q) ε_t

Let φ_0 = 1, θ_0 = 1 and define the lag polynomials

    φ(L) = 1 - φ_1 L - φ_2 L^2 - ... - φ_p L^p
    θ(L) = 1 + θ_1 L + θ_2 L^2 + ... + θ_q L^q

With lag polynomials, we can rewrite an ARMA process in a more compact way:

    AR:    φ(L) x_t = ε_t
    MA:    x_t = θ(L) ε_t
    ARMA:  φ(L) x_t = θ(L) ε_t
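To make the lag-polynomial algebra concrete, note that multiplying two lag polynomials amounts to convolving their coefficient vectors. The following minimal numpy sketch (the numbers are the approximate roots from Example 2 later in this lecture; the code itself is only an illustration, not part of the original notes) multiplies two first-order factors back into a second-order polynomial:

    import numpy as np

    # Lag-polynomial multiplication is convolution of coefficient vectors,
    # with coefficients ordered [L^0, L^1, L^2, ...].
    lam1, lam2 = 0.84, -0.24               # approximate roots used in Example 2 below
    factor1 = np.array([1.0, -lam1])       # (1 - lam1 L)
    factor2 = np.array([1.0, -lam2])       # (1 - lam2 L)
    print(np.convolve(factor1, factor2))   # ~ [1, -0.6, -0.2], i.e. 1 - 0.6L - 0.2L^2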

3 Invertibility

Given a time series probability model, we can usually find multiple ways to represent it. Which representation to choose depends on our problem. For example, to study impulse-response functions (Section 4), MA representations may be more convenient, while to estimate an ARMA model, AR representations may be more convenient, as usually x_t is observable while ε_t is not. However, not all ARMA processes can be inverted. In this section, we consider under what conditions we can invert an AR model to an MA model, and an MA model to an AR model. It turns out that invertibility, which means that the process can be inverted, is an important property of the model.

If we let 1 denote the identity operator, i.e., 1 y_t = y_t, then the inversion operator (1 - φL)^{-1} is defined to be the operator such that

    (1 - φL)^{-1} (1 - φL) = 1.

For the AR(1) process, if we premultiply (1 - φL)^{-1} to both sides of the equation, we get

    x_t = (1 - φL)^{-1} ε_t.

Is there any explicit way to rewrite (1 - φL)^{-1}? Yes, and the answer turns out to be θ(L) with θ_k = φ^k, for |φ| < 1. To show this,

    (1 - φL)θ(L) = (1 - φL)(1 + θ_1 L + θ_2 L^2 + ...)
                 = (1 - φL)(1 + φL + φ^2 L^2 + ...)
                 = 1 + φL - φL + φ^2 L^2 - φ^2 L^2 + ...
                 = 1 - lim_{k→∞} φ^k L^k
                 = 1   for |φ| < 1.

We can also verify this result by recursive substitution,

    x_t = φ x_{t-1} + ε_t
        = φ^2 x_{t-2} + ε_t + φ ε_{t-1}
        ...
        = φ^k x_{t-k} + ε_t + φ ε_{t-1} + ... + φ^{k-1} ε_{t-k+1}
        = φ^k x_{t-k} + Σ_{j=0}^{k-1} φ^j ε_{t-j}.

With |φ| < 1, we have lim_{k→∞} φ^k x_{t-k} = 0, so again we get the moving average representation with MA coefficients equal to φ^k. So the condition |φ| < 1 enables us to invert an AR(1) process to an MA(∞) process:

    AR(1):  (1 - φL) x_t = ε_t
    MA(∞):  x_t = θ(L) ε_t  with  θ_k = φ^k.
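Here is a minimal numpy check of this inversion (again an illustration, not part of the original notes): truncating θ(L) at order K, the product (1 - φL)θ(L) telescopes to 1 up to a remainder of order φ^(K+1).

    import numpy as np

    phi, K = 0.5, 20
    theta = phi ** np.arange(K + 1)     # theta_k = phi^k, truncated MA(inf) coefficients

    # (1 - phi L)(1 + phi L + phi^2 L^2 + ...) should telescope to 1.
    prod = np.convolve(np.array([1.0, -phi]), theta)
    print(prod[:4])                     # [1, 0, 0, 0]: interior terms cancel
    print(prod[-1])                     # remainder -phi^(K+1), negligible for |phi| < 1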

We have obtained some nice results in inverting an AR(1) process to an MA(∞) process. Then how do we invert a general AR(p) process? We need to factorize the lag polynomial and then make use of the result that (1 - λL)^{-1} = θ(L) with θ_k = λ^k. For example, let p = 2, so that

    (1 - φ_1 L - φ_2 L^2) x_t = ε_t.                                (1)

To factorize this polynomial, we need to find roots λ_1 and λ_2 such that

    (1 - φ_1 L - φ_2 L^2) = (1 - λ_1 L)(1 - λ_2 L).

Given that both |λ_1| < 1 and |λ_2| < 1 (or, when they are complex numbers, they lie within the unit circle; keep this in mind, as I may not mention it again in the remainder of the lecture), we can write

    (1 - λ_1 L)^{-1} = θ_1(L)  and  (1 - λ_2 L)^{-1} = θ_2(L),

and so to invert (1) we have

    x_t = (1 - λ_1 L)^{-1} (1 - λ_2 L)^{-1} ε_t = θ_1(L) θ_2(L) ε_t.

Solving θ_1(L) θ_2(L) is straightforward:

    θ_1(L) θ_2(L) = (1 + λ_1 L + λ_1^2 L^2 + ...)(1 + λ_2 L + λ_2^2 L^2 + ...)
                  = 1 + (λ_1 + λ_2) L + (λ_1^2 + λ_1 λ_2 + λ_2^2) L^2 + ...
                  = Σ_{k=0}^∞ (Σ_{j=0}^k λ_1^j λ_2^{k-j}) L^k
                  = ψ(L), say, with ψ_k = Σ_{j=0}^k λ_1^j λ_2^{k-j}.

Similarly, we can invert the general AR(p) process given that all roots λ_i have absolute value less than one.

An alternative way to represent this MA process (to express ψ) is to make use of partial fractions. Let c_1, c_2 be two constants whose values are determined by

    1 / [(1 - λ_1 L)(1 - λ_2 L)] = c_1 / (1 - λ_1 L) + c_2 / (1 - λ_2 L)
                                 = [c_1 (1 - λ_2 L) + c_2 (1 - λ_1 L)] / [(1 - λ_1 L)(1 - λ_2 L)].

We must have

    1 = c_1 (1 - λ_2 L) + c_2 (1 - λ_1 L) = (c_1 + c_2) - (c_1 λ_2 + c_2 λ_1) L,

which gives

    c_1 + c_2 = 1  and  c_1 λ_2 + c_2 λ_1 = 0.

Solving these two equations we get

    c_1 = λ_1 / (λ_1 - λ_2),  c_2 = -λ_2 / (λ_1 - λ_2).

Then we can express x_t as

    x_t = [(1 - λ_1 L)(1 - λ_2 L)]^{-1} ε_t
        = c_1 (1 - λ_1 L)^{-1} ε_t + c_2 (1 - λ_2 L)^{-1} ε_t
        = c_1 Σ_{k=0}^∞ λ_1^k ε_{t-k} + c_2 Σ_{k=0}^∞ λ_2^k ε_{t-k}
        = Σ_{k=0}^∞ ψ_k ε_{t-k},

where ψ_k = c_1 λ_1^k + c_2 λ_2^k.
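The two expressions for ψ_k must agree, which is easy to check numerically. A small sketch (illustrative only; the λ values are the approximate roots from Example 2 below):

    import numpy as np

    lam1, lam2 = 0.84, -0.24
    c1 = lam1 / (lam1 - lam2)              # partial-fraction constants
    c2 = -lam2 / (lam1 - lam2)

    k = np.arange(10)
    psi_pf = c1 * lam1**k + c2 * lam2**k   # psi_k from partial fractions

    # Direct convolution formula: psi_k = sum_{j=0}^{k} lam1^j lam2^(k-j).
    psi_conv = [sum(lam1**j * lam2**(kk - j) for j in range(kk + 1)) for kk in k]
    print(np.allclose(psi_pf, psi_conv))   # True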

Similarly, an MA process, x_t = θ(L) ε_t, is invertible if θ(L)^{-1} exists. An MA(1) process is invertible if |θ| < 1, and an MA(q) process is invertible if all roots of

    1 + θ_1 z + θ_2 z^2 + ... + θ_q z^q = 0

lie outside the unit circle. Note that for any invertible MA process, we can find a noninvertible MA process which is the same as the invertible process up to the second moment. The converse is also true. We will give an example in Section 5.

Finally, given an invertible ARMA(p, q) process, what is the series ψ_k? Note that since

    φ(L) x_t = θ(L) ε_t,  x_t = φ^{-1}(L) θ(L) ε_t,  x_t = ψ(L) ε_t  with  ψ(L) = φ^{-1}(L) θ(L),

we have θ(L) = φ(L) ψ(L). So the elements of ψ can be computed recursively by equating the coefficients of L^k.

Example 1 For an ARMA(1, 1) process, we have

    1 + θL = (1 - φL)(ψ_0 + ψ_1 L + ψ_2 L^2 + ...)
           = ψ_0 + (ψ_1 - φψ_0) L + (ψ_2 - φψ_1) L^2 + ...

Matching coefficients on L^k, we get

    1 = ψ_0
    θ = ψ_1 - φψ_0
    0 = ψ_j - φψ_{j-1}  for j ≥ 2.

Solving these equations, we easily get

    ψ_0 = 1,  ψ_1 = φ + θ,  ψ_j = φ^{j-1}(φ + θ)  for j ≥ 1.
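The recursion in Example 1 is immediate to implement. A minimal sketch (illustrative, with arbitrary parameter values):

    import numpy as np

    phi, theta, K = 0.5, 0.5, 10

    # psi_0 = 1, psi_1 = phi + theta, psi_j = phi * psi_{j-1} for j >= 2.
    psi = np.empty(K)
    psi[0] = 1.0
    psi[1] = phi + theta
    for j in range(2, K):
        psi[j] = phi * psi[j - 1]

    # Closed form: psi_j = phi^(j-1) (phi + theta) for j >= 1.
    closed = np.r_[1.0, phi ** np.arange(K - 1) * (phi + theta)]
    print(np.allclose(psi, closed))   # True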

4 Impulse-Response Functions

Given an ARMA model, φ(L) x_t = θ(L) ε_t, it is natural to ask: what is the effect on x_t of a unit shock at time s (for s < t)?

4.1 MA process

For an MA(1) process, x_t = ε_t + θ ε_{t-1}, the effects of ε on x are:

    ε:  0  1  0  0  0
    x:  0  1  θ  0  0

For an MA(q) process, x_t = ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ... + θ_q ε_{t-q}, the effects of ε on x are:

    ε:  0  1  0    0    ...  0    0
    x:  0  1  θ_1  θ_2  ...  θ_q  0

The left panel of Figure 1 plots the impulse-response function of an MA(3) process. Similarly, we can write down the effects for an MA(∞) process. As you can see, we can read the impulse-response function immediately off an MA process.

4.2 AR process

For an AR(1) process x_t = φ x_{t-1} + ε_t with |φ| < 1, we can invert it to an MA process, and the effects of ε on x are:

    ε:  0  1  0  0    ...
    x:  0  1  φ  φ^2  ...

As can be seen from above, the impulse-response dynamics are quite clear from an MA representation. For example, let t > s > 0; given a one-unit increase in ε_s, the effect on x_t would be φ^{t-s}, if there are no other shocks. If there are shocks that take place at times other than s and have nonzero effects on x_t, then we can add these effects, since this is a linear model. The dynamics are a bit more complicated for a higher-order AR process, but applying our old trick of inverting it to an MA process makes the analysis straightforward. Take an AR(2) process as an example.

Example 2

    x_t = 0.6 x_{t-1} + 0.2 x_{t-2} + ε_t,

or

    (1 - 0.6L - 0.2L^2) x_t = ε_t.

We first solve the polynomial 1 - 0.6y - 0.2y^2 = 0, i.e. (multiplying through by -5),

    y^2 + 3y - 5 = 0,

and get the two roots y_1 ≈ 1.19 and y_2 ≈ -4.19. (Recall that the roots of ay^2 + by + c = 0 are (-b ± √(b^2 - 4ac)) / (2a).) Recall that λ_1 = 1/y_1 ≈ 0.84 and λ_2 = 1/y_2 ≈ -0.24. So we can factorize the lag polynomial as

    (1 - 0.6L - 0.2L^2) x_t ≈ (1 - 0.84L)(1 + 0.24L) x_t,

so that

    x_t = (1 - 0.84L)^{-1} (1 + 0.24L)^{-1} ε_t = ψ(L) ε_t,

where ψ_k = Σ_{j=0}^k λ_1^j λ_2^{k-j}. In this example, the series of ψ is {1, 0.6, 0.56, 0.456, 0.386, ...}. So the effects of ε on x can be described as:

    ε:  0  1  0    0     0      ...
    x:  0  1  0.6  0.56  0.456  ...

The right panel of Figure 1 plots this impulse-response function.

[Figure 1: The impulse-response functions of an MA(3) process (θ_1 = 0.6, θ_2 = 0.5, θ_3 = 0.4) and an AR(2) process (φ_1 = 0.6, φ_2 = 0.2), with a unit shock at time zero.]

So after we invert an AR(p) process to an MA process, given t > s > 0, the effect on x_t of a one-unit increase in ε_s is just ψ_{t-s}. We can see that given a linear process, AR or ARMA, once we represent it as an MA process, we obtain the impulse-response dynamics immediately. In fact, the MA representation is the same thing as the impulse-response function.
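A quick numerical check of these ψ values (an illustrative sketch only): feeding a unit impulse through the AR(2) recursion reproduces the MA(∞) coefficients.

    import numpy as np

    phi1, phi2, T = 0.6, 0.2, 8

    # Unit shock at time 0, no other shocks; the response path equals psi_k.
    eps = np.zeros(T); eps[0] = 1.0
    x = np.zeros(T)
    for t in range(T):
        x[t] = eps[t]
        if t >= 1: x[t] += phi1 * x[t - 1]
        if t >= 2: x[t] += phi2 * x[t - 2]
    print(np.round(x, 4))   # [1, 0.6, 0.56, 0.456, 0.3856, ...]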

5 Autocovariance Functions and Stationarity of ARMA Models

5.1 MA(1)

    x_t = ε_t + θ ε_{t-1},  where ε_t ~ WN(0, σ_ε^2).

It is easy to calculate the first two moments of x_t:

    E(x_t) = E(ε_t + θ ε_{t-1}) = 0,  E(x_t^2) = (1 + θ^2) σ_ε^2,

and

    γ_x(t, t+h) = E[(ε_t + θ ε_{t-1})(ε_{t+h} + θ ε_{t+h-1})] = { θ σ_ε^2  for h = 1;  0  for h > 1 }.

So, for an MA(1) process, we have a fixed mean and a covariance function which does not depend on time t: γ(0) = (1 + θ^2) σ_ε^2, γ(1) = θ σ_ε^2, and γ(h) = 0 for h > 1. So we know MA(1) is stationary for any finite value of θ. The autocorrelation can be computed as ρ_x(h) = γ_x(h) / γ_x(0), so

    ρ_x(0) = 1,  ρ_x(1) = θ / (1 + θ^2),  ρ_x(h) = 0  for h > 1.

We proposed in the section on invertibility that for an invertible (noninvertible) MA process, there always exists a noninvertible (invertible) process which is the same as the original process up to the second moment. We use the following MA(1) process as an example.

Example 3 The process

    x_t = ε_t + θ ε_{t-1},  ε_t ~ WN(0, σ^2),  |θ| > 1,

is noninvertible. Consider an invertible MA process defined as

    x̃_t = ε̃_t + (1/θ) ε̃_{t-1},  ε̃_t ~ WN(0, θ^2 σ^2).

Then we can compute that E(x_t) = E(x̃_t) = 0, E(x_t^2) = E(x̃_t^2) = (1 + θ^2) σ^2, γ_x(1) = γ_x̃(1) = θ σ^2, and γ_x(h) = γ_x̃(h) = 0 for h > 1. Therefore, these two processes are equivalent up to the second moments.

To be more concrete, we plug in some numbers. Let θ = 2, so the process

    x_t = ε_t + 2 ε_{t-1},  ε_t ~ WN(0, 1),

is noninvertible. Consider the invertible process

    x̃_t = ε̃_t + (1/2) ε̃_{t-1},  ε̃_t ~ WN(0, 4).

Note that E(x_t) = E(x̃_t) = 0, E(x_t^2) = E(x̃_t^2) = 5, γ_x(1) = γ_x̃(1) = 2, and γ_x(h) = γ_x̃(h) = 0 for h > 1.

Although these two representations, noninvertible MA and invertible MA, generate the same process up to the second moment, we prefer the invertible representation in practice, because if we can invert an MA process to an AR process, we can find the value of ε_t (unobservable) based on all past values of x (observable). If a process is noninvertible, then in order to find the value of ε_t, we would have to know all future values of x.
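A short simulation makes the second-moment equivalence in Example 3 visible (an illustrative sketch, not part of the original notes):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    e = rng.normal(0.0, 1.0, n)          # WN(0, 1)
    x = e[1:] + 2.0 * e[:-1]             # noninvertible: theta = 2

    u = rng.normal(0.0, 2.0, n)          # WN(0, 4)
    xt = u[1:] + 0.5 * u[:-1]            # invertible: theta = 1/2

    for s in (x, xt):
        # sample variance and sample first-order autocovariance
        print(round(s.var(), 2), round(np.mean(s[1:] * s[:-1]), 2))   # both ~ (5.0, 2.0)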

5.2 MA(q)

    x_t = θ(L) ε_t = (Σ_{k=0}^q θ_k L^k) ε_t

The first two moments are

    E(x_t) = 0,  E(x_t^2) = Σ_{k=0}^q θ_k^2 σ_ε^2,

and

    γ_x(h) = { Σ_{k=0}^{q-h} θ_k θ_{k+h} σ_ε^2  for h = 1, 2, ..., q;  0  for h > q }.

Again, an MA(q) is stationary for any finite values of θ_1, ..., θ_q.

5.3 MA(∞)

    x_t = θ(L) ε_t = (Σ_{k=0}^∞ θ_k L^k) ε_t

Before we compute moments and discuss the stationarity of x_t, we should first make sure that {x_t} converges.

Proposition 1 If {ε_t} is a sequence of white noise with σ_ε^2 < ∞, and if Σ_{k=0}^∞ θ_k^2 < ∞ (square summability), then the series

    x_t = θ(L) ε_t = Σ_{k=0}^∞ θ_k ε_{t-k}

converges in mean square.

Proof (see Appendix 3.A in Hamilton): Recall the Cauchy criterion: a sequence {y_n} converges in mean square if and only if E(y_n - y_m)^2 → 0 as n, m → ∞. In this problem, for n > m > 0, we want to show that

    E[Σ_{k=0}^n θ_k ε_{t-k} - Σ_{k=0}^m θ_k ε_{t-k}]^2 = Σ_{k=m+1}^n θ_k^2 σ_ε^2
                                                      = [Σ_{k=0}^n θ_k^2 - Σ_{k=0}^m θ_k^2] σ_ε^2
                                                      → 0  as m, n → ∞.

The result holds since {θ_k} is square summable.

It is often more convenient to work with a slightly stronger condition, absolute summability:

    Σ_{k=0}^∞ |θ_k| < ∞.

It is easy to show that absolute summability implies square summability. An MA(∞) process with absolutely summable coefficients is stationary, with moments

    E(x_t) = 0,  E(x_t^2) = Σ_{k=0}^∞ θ_k^2 σ_ε^2,  γ_x(h) = Σ_{k=0}^∞ θ_k θ_{k+h} σ_ε^2.

5.4 AR(1)

    (1 - φL) x_t = ε_t                                              (2)

Recall that an AR(1) process with |φ| < 1 can be inverted to an MA(∞) process, x_t = θ(L) ε_t with θ_k = φ^k. With |φ| < 1, it is easy to check that absolute summability holds:

    Σ_{k=0}^∞ |θ_k| = Σ_{k=0}^∞ |φ|^k < ∞.

Using the results for MA(∞), the moments for x_t in (2) can be computed:

    E(x_t) = 0
    E(x_t^2) = Σ_{k=0}^∞ φ^{2k} σ_ε^2 = σ_ε^2 / (1 - φ^2)
    γ_x(h) = Σ_{k=0}^∞ φ^k φ^{k+h} σ_ε^2 = φ^h σ_ε^2 / (1 - φ^2)

So, an AR(1) process with |φ| < 1 is stationary.

5.5 AR(p)

Recall that an AR(p) process

    (1 - φ_1 L - φ_2 L^2 - ... - φ_p L^p) x_t = ε_t

can be inverted to an MA process x_t = ψ(L) ε_t if all λ_i in

    (1 - φ_1 L - φ_2 L^2 - ... - φ_p L^p) = (1 - λ_1 L)(1 - λ_2 L) ... (1 - λ_p L)    (3)

have absolute value less than one. It also turns out that with |λ_i| < 1, the absolute summability Σ_{k=0}^∞ |ψ_k| < ∞ is also satisfied. (The proof can be found on page 770 of Hamilton; it uses the result that ψ_k = c_1 λ_1^k + c_2 λ_2^k.)

When we solve the polynomial equation

    (1 - λ_1 y)(1 - λ_2 y) ... (1 - λ_p y) = 0,                     (4)

the requirement that |λ_i| < 1 is equivalent to all roots of (4) lying outside the unit circle, i.e., |y_i| > 1 for all i, where y_i = 1/λ_i.

First calculate the expectation of x_t: E(x_t) = 0. To compute the second moments, one method is to invert the process into an MA process and use the formula for the autocovariance function of an MA(∞). This method requires finding the moving average coefficients ψ. An alternative method, known as the Yule-Walker method, may be more convenient for finding the autocovariance functions. To illustrate this method, take an AR(2) process as an example:

    x_t = φ_1 x_{t-1} + φ_2 x_{t-2} + ε_t.

Multiply x_t, x_{t-1}, x_{t-2}, ... on both sides of the equation, take expectations, and then divide by γ(0); we get the following equations:

    1 = φ_1 ρ(1) + φ_2 ρ(2) + σ_ε^2 / γ(0)
    ρ(1) = φ_1 + φ_2 ρ(1)
    ρ(2) = φ_1 ρ(1) + φ_2
    ρ(k) = φ_1 ρ(k-1) + φ_2 ρ(k-2)  for k ≥ 3

ρ(1) can first be solved from the second equation: ρ(1) = φ_1 / (1 - φ_2). ρ(2) can then be solved from the third equation, ρ(k) can be solved recursively using ρ(1) and ρ(2), and finally γ(0) can be solved from the first equation. Using γ(0) and ρ(k), γ(k) can be computed as γ(k) = ρ(k) γ(0). Figure 2 plots this autocorrelation for k = 0, ..., 50, with the parameters set to φ_1 = 0.5 and φ_2 = 0.3. As is clear from the graph, the autocorrelation is very close to zero for large k.

[Figure 2: Plot of the autocorrelation of an AR(2) process, with φ_1 = 0.5 and φ_2 = 0.3.]
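The Yule-Walker recursion is straightforward to code. A minimal sketch for the AR(2) example above (illustrative, assuming σ_ε^2 = 1):

    import numpy as np

    phi1, phi2, sigma2 = 0.5, 0.3, 1.0
    K = 51

    rho = np.empty(K)
    rho[0] = 1.0
    rho[1] = phi1 / (1.0 - phi2)            # from the second Yule-Walker equation
    rho[2] = phi1 * rho[1] + phi2           # from the third
    for k in range(3, K):
        rho[k] = phi1 * rho[k - 1] + phi2 * rho[k - 2]

    gamma0 = sigma2 / (1.0 - phi1 * rho[1] - phi2 * rho[2])   # from the first equation
    gamma = rho * gamma0                    # gamma(k) = rho(k) * gamma(0)
    print(round(rho[1], 4), round(rho[2], 4), round(gamma0, 4))   # 0.7143 0.6571 2.2436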

5.6 ARMA(p, q)

Given an invertible ARMA(p, q) process,

    φ(L) x_t = θ(L) ε_t,

we have shown that by inverting φ(L) we obtain

    x_t = φ(L)^{-1} θ(L) ε_t = ψ(L) ε_t.

Therefore, an ARMA(p, q) process is stationary as long as φ(L) is invertible. In other words, the stationarity of the ARMA process depends only on the autoregressive parameters, and not on the moving average parameters (assuming that all parameters are finite).

The expectation of this process is E(x_t) = 0. To find the autocovariance function, first we can invert the process to an MA process and find the MA coefficients ψ(L) = φ(L)^{-1} θ(L). We have shown an example of finding ψ for the ARMA(1, 1) process, where

    (1 - φL) x_t = (1 + θL) ε_t,  x_t = ψ(L) ε_t = Σ_{j=0}^∞ ψ_j ε_{t-j},

with ψ_0 = 1 and ψ_j = φ^{j-1}(φ + θ) for j ≥ 1. Now, using the autocovariance functions for an MA(∞) process, we have

    γ_x(0) = Σ_{k=0}^∞ ψ_k^2 σ_ε^2
           = (1 + Σ_{k=1}^∞ φ^{2(k-1)} (φ + θ)^2) σ_ε^2
           = (1 + (φ + θ)^2 / (1 - φ^2)) σ_ε^2.

If we plug in some numbers, say φ = 0.5 and θ = 0.5, so that the original process is x_t = 0.5 x_{t-1} + ε_t + 0.5 ε_{t-1}, then γ_x(0) = (7/3) σ_ε^2. For h ≥ 1,

    γ_x(h) = Σ_{k=0}^∞ ψ_k ψ_{k+h} σ_ε^2
           = (φ^{h-1}(φ + θ) + Σ_{k=1}^∞ φ^{k-1}(φ + θ) φ^{k+h-1}(φ + θ)) σ_ε^2
           = (φ^{h-1}(φ + θ) + (φ + θ)^2 φ^h / (1 - φ^2)) σ_ε^2.

Plugging in φ = θ = 0.5, we have, for h ≥ 1,

    γ_x(h) = (5/3) (1/2)^{h-1} σ_ε^2.

An alternative way to compute the autocovariance function is to multiply each side of φ(L) x_t = θ(L) ε_t by x_t, x_{t-1}, ..., and take expectations. In our ARMA(1, 1) example, this gives

    γ_x(0) - φ γ_x(1) = [1 + θ(θ + φ)] σ_ε^2
    γ_x(1) - φ γ_x(0) = θ σ_ε^2
    γ_x(2) - φ γ_x(1) = 0
    γ_x(h) - φ γ_x(h-1) = 0  for h ≥ 2,

where we use x_t = ψ(L) ε_t in taking expectations on the right side; for instance, E(x_t ε_t) = E((ε_t + ψ_1 ε_{t-1} + ...) ε_t) = σ_ε^2. Plugging in θ = φ = 0.5 and solving these equations, we have γ_x(0) = (7/3) σ_ε^2, γ_x(1) = (5/3) σ_ε^2, and γ_x(h) = γ_x(h-1)/2 for h ≥ 2. These are the same results as we got using the first method.
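Numerically, the first two equations form a small linear system in γ_x(0) and γ_x(1). A minimal sketch (illustrative, with σ_ε^2 = 1):

    import numpy as np

    phi, theta, sigma2 = 0.5, 0.5, 1.0

    # gamma(0) - phi*gamma(1) = (1 + theta*(theta + phi)) * sigma2
    # gamma(1) - phi*gamma(0) = theta * sigma2
    A = np.array([[1.0, -phi],
                  [-phi, 1.0]])
    b = np.array([(1.0 + theta * (theta + phi)) * sigma2, theta * sigma2])
    gamma0, gamma1 = np.linalg.solve(A, b)
    print(gamma0, gamma1)        # 2.333... = 7/3 and 1.666... = 5/3
    print(phi * gamma1)          # gamma(2) = phi * gamma(1)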

Summary:

- An MA process is stationary if and only if its coefficients {θ_k} are square summable (absolutely summable), i.e., Σ_{k=0}^∞ θ_k^2 < ∞ (Σ_{k=0}^∞ |θ_k| < ∞). Therefore, MA processes with a finite number of MA coefficients are always stationary. Note that stationarity does not require the MA process to be invertible.
- An AR process is stationary if it is invertible, i.e., |λ_i| < 1 or |y_i| > 1, as defined in (3) and (4) respectively.
- An ARMA(p, q) process is stationary if its autoregressive lag polynomial is invertible.

5.7 Autocovariance-generating function of stationary ARMA processes

For a covariance stationary process, we have seen that the autocovariance function is very useful in describing the process. One way to summarize an absolutely summable autocovariance function (Σ_{h=-∞}^∞ |γ(h)| < ∞) is the autocovariance-generating function

    g_x(z) = Σ_{h=-∞}^∞ γ(h) z^h,

where z could be a complex number. For white noise, the autocovariance-generating function (AGF) is just a constant: for ε ~ WN(0, σ_ε^2), g_ε(z) = σ_ε^2. For an MA(1) process, x_t = (1 + θL) ε_t with ε ~ WN(0, σ_ε^2), we can compute

    g_x(z) = σ_ε^2 [θ z^{-1} + (1 + θ^2) + θ z] = σ_ε^2 (1 + θz)(1 + θz^{-1}).

For an MA(q) process,

    x_t = (1 + θ_1 L + ... + θ_q L^q) ε_t,

we know that γ_x(h) = Σ_{k=0}^{q-h} θ_k θ_{k+h} σ_ε^2 for h = 0, 1, ..., q and γ_x(h) = 0 for h > q, so we have

    g_x(z) = Σ_{h=-q}^q γ(h) z^h
           = σ_ε^2 (Σ_{k=0}^q θ_k^2 + Σ_{h=1}^q Σ_{k=0}^{q-h} θ_k θ_{k+h} (z^{-h} + z^h))
           = σ_ε^2 (Σ_{k=0}^q θ_k z^k)(Σ_{k=0}^q θ_k z^{-k}).

For an MA(∞) process x_t = θ(L) ε_t with Σ_{k=0}^∞ |θ_k| < ∞, we can naturally replace q by ∞ in the AGF for MA(q) to get the AGF for MA(∞):

    g_x(z) = σ_ε^2 (Σ_{k=0}^∞ θ_k z^k)(Σ_{k=0}^∞ θ_k z^{-k}) = σ_ε^2 θ(z) θ(z^{-1}).

Next, for a stationary AR or ARMA process, we can invert it to an MA process. For instance, an AR(1) process, (1 - φL) x_t = ε_t, can be inverted to

    x_t = (1 - φL)^{-1} ε_t,

and its AGF is

    g_x(z) = σ_ε^2 / [(1 - φz)(1 - φz^{-1})],

which equals

    σ_ε^2 (Σ_{k=0}^∞ θ_k z^k)(Σ_{k=0}^∞ θ_k z^{-k}) = σ_ε^2 θ(z) θ(z^{-1})  with  θ_k = φ^k.

In general, the AGF for an ARMA(p, q) process is

    g_x(z) = σ_ε^2 (1 + θ_1 z + ... + θ_q z^q)(1 + θ_1 z^{-1} + ... + θ_q z^{-q}) / [(1 - φ_1 z - ... - φ_p z^p)(1 - φ_1 z^{-1} - ... - φ_p z^{-p})]
           = σ_ε^2 θ(z) θ(z^{-1}) / [φ(z) φ(z^{-1})].

6 Simulated ARMA Processes

In this section, we plot a few simulated ARMA processes. In the simulations, the errors are Gaussian white noise, i.i.d. N(0, 1). As a comparison, we first plot a Gaussian white noise series (or AR(1) with φ = 0) in Figure 3. Then, we plot AR(1) processes with φ = 0.4 and φ = 0.9 in Figure 4 and Figure 5. As you can see, the white noise process is very choppy and patternless. When φ = 0.4, the series becomes a bit smoother, and when φ = 0.9, the departures from the mean (zero) are very prolonged. Figure 6 plots an AR(2) process with the coefficients set to the numbers in our example in this lecture. Finally, Figure 7 plots an MA(3) process. Comparing this MA(3) process with the white noise, we can see an increase in variance (the variance of the white noise is 1 and the variance of the MA(3) process is 1.77).
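The simulations in this section are easy to reproduce. A minimal sketch using scipy (illustrative, not part of the original notes; lfilter(b, a, eps) applies θ(L)/φ(L) to the noise, with a = [1, -φ_1, ..., -φ_p] and b = [1, θ_1, ..., θ_q]):

    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(42)
    eps = rng.normal(0.0, 1.0, 500)                       # Gaussian white noise, i.i.d. N(0, 1)

    wn   = eps                                            # AR(1) with phi = 0
    ar04 = lfilter([1.0], [1.0, -0.4], eps)               # AR(1), phi = 0.4
    ar09 = lfilter([1.0], [1.0, -0.9], eps)               # AR(1), phi = 0.9
    ar2  = lfilter([1.0], [1.0, -0.6, -0.2], eps)         # AR(2) from Example 2
    ma3  = lfilter([1.0, 0.6, 0.5, 0.4], [1.0], eps)      # MA(3)

    print(round(wn.var(), 2), round(ma3.var(), 2))        # ~1 and ~1.77 = 1 + 0.36 + 0.25 + 0.16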

[Figure 3: A Gaussian white noise time series.]

[Figure 4: A simulated AR(1) process, with φ = 0.4.]

[Figure 5: A simulated AR(1) process, with φ = 0.9.]

[Figure 6: A simulated AR(2) process, with φ_1 = 0.6, φ_2 = 0.2.]

[Figure 7: A simulated MA(3) process, with θ_1 = 0.6, θ_2 = 0.5, and θ_3 = 0.4.]

7 Forecasting of ARMA Models

7.1 Principles of forecasting

If we are interested in forecasting a random variable y_{t+h} based on the observations of x up to time t (denoted by X), we can have different candidate forecasts, denoted by g(X). If our criterion in picking the best forecast is to minimize the mean squared error (MSE), then the best forecast is the conditional expectation, g(X) = E(y_{t+h} | X). The proof can be found on page 73 of Hamilton. In the following discussion, we assume that the data generating process is known (so the parameters are known), so that we can compute the conditional moments.

7.2 AR models

Let's start with an AR(1) process, x_t = φ x_{t-1} + ε_t, where we continue to assume that ε_t is white noise with mean zero and variance σ_ε^2. We can compute the conditional expectations

    E_t(x_{t+1}) = E_t(φ x_t + ε_{t+1}) = φ x_t
    E_t(x_{t+2}) = E_t(φ^2 x_t + φ ε_{t+1} + ε_{t+2}) = φ^2 x_t
    ...
    E_t(x_{t+k}) = E_t(φ^k x_t + φ^{k-1} ε_{t+1} + ... + ε_{t+k}) = φ^k x_t

and the variances

    Var_t(x_{t+1}) = Var_t(φ x_t + ε_{t+1}) = σ_ε^2
    Var_t(x_{t+2}) = Var_t(φ^2 x_t + φ ε_{t+1} + ε_{t+2}) = (1 + φ^2) σ_ε^2
    ...
    Var_t(x_{t+k}) = Var_t(φ^k x_t + φ^{k-1} ε_{t+1} + ... + ε_{t+k}) = Σ_{j=0}^{k-1} φ^{2j} σ_ε^2

Note that as k → ∞, E_t(x_{t+k}) → 0, which is the unconditional expectation of x_t, and Var_t(x_{t+k}) → σ_ε^2 / (1 - φ^2), which is the unconditional variance of x_t. Similarly, for an AR(p) process, we can forecast recursively.

7.3 MA models

For an MA(1) process, x_t = ε_t + θ ε_{t-1}, if we know ε_t, then

    E_t(x_{t+1}) = E_t(ε_{t+1} + θ ε_t) = θ ε_t
    E_t(x_{t+2}) = E_t(ε_{t+2} + θ ε_{t+1}) = 0
    ...
    E_t(x_{t+k}) = E_t(ε_{t+k} + θ ε_{t+k-1}) = 0  for k ≥ 2

and

    Var_t(x_{t+1}) = Var_t(ε_{t+1} + θ ε_t) = σ_ε^2
    Var_t(x_{t+2}) = Var_t(ε_{t+2} + θ ε_{t+1}) = (1 + θ^2) σ_ε^2
    ...
    Var_t(x_{t+k}) = Var_t(ε_{t+k} + θ ε_{t+k-1}) = (1 + θ^2) σ_ε^2  for k ≥ 2

It is easy to see that for an MA(1) process, the conditional expectation two steps ahead and beyond is the same as the unconditional expectation, and so is the variance.

Next, for an MA(q) model,

    x_t = ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ... + θ_q ε_{t-q} = Σ_{j=0}^q θ_j ε_{t-j},

if we know ε_t, ε_{t-1}, ..., ε_{t-q}, then

    E_t(x_{t+1}) = E_t(Σ_{j=0}^q θ_j ε_{t+1-j}) = Σ_{j=1}^q θ_j ε_{t+1-j}
    E_t(x_{t+2}) = E_t(Σ_{j=0}^q θ_j ε_{t+2-j}) = Σ_{j=2}^q θ_j ε_{t+2-j}
    ...
    E_t(x_{t+k}) = E_t(Σ_{j=0}^q θ_j ε_{t+k-j}) = Σ_{j=k}^q θ_j ε_{t+k-j}  for k ≤ q
    E_t(x_{t+k}) = 0  for k > q

and

    Var_t(x_{t+1}) = Var_t(Σ_{j=0}^q θ_j ε_{t+1-j}) = σ_ε^2
    Var_t(x_{t+2}) = Var_t(Σ_{j=0}^q θ_j ε_{t+2-j}) = (1 + θ_1^2) σ_ε^2
    ...
    Var_t(x_{t+k}) = Var_t(Σ_{j=0}^q θ_j ε_{t+k-j}) = Σ_{j=0}^{k-1} θ_j^2 σ_ε^2  for 0 < k ≤ q

We can see that for an MA(q) process, the conditional expectation and variance of the forecast q+1 steps ahead and beyond are the same as the unconditional expectation and variance.
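A minimal forecasting sketch for the AR(1) case (illustrative; the parameter values and the current observation x_t are arbitrary):

    import numpy as np

    phi, sigma2 = 0.9, 1.0
    x_t = 2.0   # current observation

    # E_t(x_{t+k}) = phi^k x_t,  Var_t(x_{t+k}) = sum_{j=0}^{k-1} phi^(2j) sigma2
    for k in (1, 2, 5, 20):
        mean = phi**k * x_t
        var = sigma2 * np.sum(phi ** (2 * np.arange(k)))
        print(k, round(mean, 4), round(var, 4))

    print(round(sigma2 / (1 - phi**2), 4))   # unconditional variance, the k -> infinity limit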

8 Wold Decomposition

So far we have focused on ARMA models, which are linear time series models. Is there any relationship between a general covariance stationary process (maybe nonlinear) and linear representations? The answer is given by the Wold decomposition theorem.

Proposition 2 (Wold Decomposition) Any zero-mean covariance stationary process x_t can be represented in the form

    x_t = Σ_{j=0}^∞ ψ_j ε_{t-j} + V_t

where

(i) ψ_0 = 1 and Σ_{j=0}^∞ ψ_j^2 < ∞;
(ii) ε_t ~ WN(0, σ_ε^2);
(iii) E(ε_t V_s) = 0 for all s, t > 0;
(iv) ε_t is the error in forecasting x_t on the basis of a linear function of lagged x: ε_t = x_t - E(x_t | x_{t-1}, x_{t-2}, ...);
(v) V_t is a deterministic process, and it can be predicted from a linear function of lagged x.

Remarks: The Wold decomposition says that any covariance stationary process has a linear representation: a linear deterministic component (V_t) and a linear indeterministic component (Σ_{j=0}^∞ ψ_j ε_{t-j}). If V_t = 0, then the process is said to be purely non-deterministic, and the process can be represented as an MA(∞) process. Basically, ε_t is the error from the projection of x_t on lagged x; therefore it is uniquely determined and it is orthogonal to lagged x and lagged ε. Since this error ε is the residual from the projection, it may not be the true error in the DGP of x_t. Also note that the error term ε is a white noise process, but it does not need to be i.i.d.

Readings:
Hamilton, Ch. 1-4
Brockwell and Davis, Ch. 3
Hayashi, Ch. 6.1, 6.2
