Short Introduction to Time Series
ECONOMICS 7344, Spring 2012. Bent E. Sørensen. January 24, 2012.

Short Introduction to Time Series

A time series is a collection of stochastic variables x_1, ..., x_t, ..., x_T indexed by an integer value t. The interpretation is that the series represents a vector of stochastic variables observed at equally spaced time intervals. The series is also sometimes called a stochastic process. The distinguishing feature of time series is that of temporal dependence: the distribution of x_t conditional on previous values of the series depends on the outcome of those previous observations, i.e., the outcomes are not independent. For the purpose of analyzing a time series we will usually model the time series over all the non-negative integers, x_t; t = {0, 1, ..., ∞}, or over all the integers, x_t; t = {-∞, ..., -1, 0, 1, ..., ∞}. Time 1 or time 0 will be the first period that you observe the series. In a specific model you will have to be explicit about the initial value, as will be clear from the following.

1.1 Stationarity

Definition: A time series is called stationary (more precisely, covariance stationary) if

E(x_t) = µ,
E[(x_t - µ)^2] = γ(0),
E[(x_t - µ)(x_{t+k} - µ)] = γ(k) = γ(-k) = E[(x_t - µ)(x_{t-k} - µ)]; k = 1, 2, ...,

where γ(k); k = 0, 1, ... are independent of t and finite.

There is a quite long tradition in time series of focusing on only the first two moments of the process, rather than on the actual distribution of x_t. If the process is normally distributed, all information is contained in the first two moments, and most of the statistical theory of time series estimators is asymptotic and more often than not depends only on the first two moments of the process. The γ(k)'s for k ≠ 0 are called autocovariances, and if we divide by the variance we obtain the autocorrelations ρ(k) = γ(k)/γ(0). These are the correlations of x_t with its own lagged values. Note that if Σ_T is the matrix of variances and covariances of x_1, ..., x_T, then

Σ_T =
| γ(0)     γ(1)     γ(2)    ...    γ(T-1) |
| γ(1)     γ(0)     γ(1)    ...    γ(T-2) |
|  ...      ...      ...    ...     ...   |
| γ(T-1)   γ(T-2)    ...    γ(1)   γ(0)   |.
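The definitions above translate directly into code. The following is a small illustrative sketch (in Python with numpy; it is not part of the original notes): it estimates sample autocovariances and autocorrelations from a series and assembles the Toeplitz matrix Σ_T from them.

```python
import numpy as np

def autocovariances(x, max_lag):
    """Sample autocovariances gamma(0), ..., gamma(max_lag) of a series x."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    return np.array([(xc[: T - k] * xc[k:]).sum() / T for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(500)      # white noise, so gamma(k) should be near 0 for k != 0

gamma = autocovariances(x, 4)
rho = gamma / gamma[0]            # autocorrelations rho(k) = gamma(k)/gamma(0)

# Sigma_T is Toeplitz: entry (i, j) equals gamma(|i - j|).
T = 5
Sigma = np.array([[gamma[abs(i - j)] for j in range(T)] for i in range(T)])
```

By construction `Sigma` is symmetric with the constant `gamma[0]` on the diagonal, mirroring the matrix displayed above.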
So if we let Ω_T be the matrix of autocorrelations, i.e., Σ_T = γ(0) Ω_T, we will have

Ω_T =
| 1        ρ(1)     ρ(2)   ...    ρ(T-1) |
| ρ(1)     1        ρ(1)   ...    ρ(T-2) |
|  ...      ...     ...    ...     ...   |
| ρ(T-1)   ρ(T-2)   ...    ρ(1)   1      |.

Time series models are simple models for the (auto-)correlation of the x_t's that allow us to avoid specifying the variance-covariance matrices explicitly.

Definition: A stationary process e_t with mean 0 is called white noise if γ(k) = 0 for k ≠ 0. Of course this implies that the autocorrelation matrix is just an identity matrix, so the standard OLS assumptions on the error term can also be formulated as "the error term is assumed to be white noise."

1.1.1 The lag-operator, lag-polynomials, and their inverses

The lag operator L is defined by L x_t = x_{t-1}. We will also define the symbol L^k by L^k x_t = x_{t-k}. You should think of the lag-operator as moving the whole process {x_t; t = -∞, ..., ∞}. Notice that it is practical here to assume that the series is defined for all integer t, rather than starting at some period 0; in applications this may not always make sense, and in these cases you may have to worry about your starting values.

We will define lag polynomials as polynomials in the lag operator as follows. Let a(L) be the lag polynomial a(L) = a_0 + a_1 L + ... + a_p L^p, which is defined as an operator such that

a(L) x_t = a_0 x_t + a_1 x_{t-1} + ... + a_p x_{t-p}.

This simply means that the equation above defines a(L) by the way it operates on x_t. A key observation is that we can add and multiply lag-polynomials in exactly the same way as we can add and multiply polynomials in complex variables. For example, if a(L) = (1 - aL) and b(L) = (1 - bL), then

a(L) b(L) x_t = (1 - aL)(1 - bL) x_t = (1 - aL)(x_t - b x_{t-1}) = x_t - b x_{t-1} - a L(x_t - b x_{t-1}),

which, after simplifying, gives you

a(L) b(L) x_t = x_t - (a + b) x_{t-1} + ab x_{t-2} = (1 - (a + b) L + ab L^2) x_t.
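Because lag-polynomials multiply like ordinary polynomials, the coefficients of a product can be obtained by convolving the coefficient sequences. A minimal numerical sketch (Python with numpy; not from the notes) for the (1 - aL)(1 - bL) example:

```python
import numpy as np

# Coefficients in ascending powers of L: (1 - aL) is stored as [1, -a].
a, b = 0.5, 0.3
pa = np.array([1.0, -a])
pb = np.array([1.0, -b])

# Polynomial multiplication is convolution of the coefficient sequences,
# so c(L) = a(L)b(L) should come out as 1 - (a+b)L + ab L^2.
pc = np.convolve(pa, pb)
print(pc)   # approximately [1, -(a+b), ab] = [1, -0.8, 0.15]
```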
Notice that if we denote c(L) = 1 - (a + b) L + ab L^2, the coefficients of the lag polynomial c(L) are equal to the coefficients of the polynomial c(z) = 1 - (a + b)z + ab z^2 = (1 - az)(1 - bz), where z is a real or complex variable. For a given lag-polynomial a(L) we therefore define the corresponding
z-transform (also a label from the engineering literature) a(z), where z is a complex number. The point is that if we define a(z) and b(z) as the complex polynomials we get from substituting the complex number z for L, then we know that a(z) b(z) is a complex polynomial, which we can denote c(z). Because the operations are similar, it is also the case that c(L) = a(L) b(L). Conceptually, c(L) is a function that works on a time series and c(z) is a totally different animal; we just use c(z) as a quick way to find the coefficients of c(L).

One can invert lag-polynomials (although it is not always the case that an inverse exists, just as in the case of matrices). We use the notation a(L)^{-1} or 1/a(L). Of course a(L)^{-1} is defined as the operator such that a(L)^{-1} a(L) x_t = a(L) a(L)^{-1} x_t = x_t. All problems concerning inversion of lag polynomials can be reduced to inversion of the first order polynomial 1 - az, so we will show the details for the first order polynomial. You all know the formula (valid for |a| < 1)

1/(1 - a) = 1 + a + a^2 + a^3 + ... .

(If you are not fully familiar with this equation, then you should take a long look at it, prove it for yourself, and play around with it. It is very important for the following.) For |a| < 1 we get

1/(1 - az) = 1 + az + (az)^2 + (az)^3 + ... ,

which converges if |z| <= 1. In order for infinite polynomials like this to converge, the z-transform a(z) is defined only for complex numbers of length at most 1, i.e., |z| <= 1. We can therefore find the inverse of the lag-polynomial 1 - aL as

(1 - aL)^{-1} = I + aL + (aL)^2 + (aL)^3 + ... ,

so

(1 - aL)^{-1} x_t = x_t + a x_{t-1} + a^2 x_{t-2} + ... .

Inversion of a general lag polynomial. (Update: this now might be on the exam.) It is well known that if a(z) = 1 + a_1 z + ... + a_k z^k is a complex polynomial, then

a(z) = (1 - α_1 z)(1 - α_2 z) ... (1 - α_k z),

where 1/α_1, ..., 1/α_k are the roots of the polynomial (the α_i are the inverse roots). Recall that the roots will be complex conjugates if the coefficients of a(z) are real numbers. Of course the lag-polynomial a(L) factors the same way. This means that

a(L) = (1 - α_1 L)(1 - α_2 L) ... (1 - α_k L).
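Both the geometric-series inverse and the factorization can be checked numerically. A sketch (Python with numpy, not part of the notes), using the polynomial 1 + .3z - .1z^2 that the notes use as an example:

```python
import numpy as np

# Truncated inverse of (1 - aL): coefficients 1, a, a^2, ..., valid for |a| < 1.
a = 0.5
n = 30
inv = a ** np.arange(n)

# Convolving the truncated inverse with [1, -a] should give the identity
# polynomial (1, 0, 0, ...); the only error is the dropped a^n tail term.
check = np.convolve(inv, [1.0, -a])[:n]

# Factoring a(z) = 1 + .3z - .1z^2 = (1 + .5z)(1 - .2z): the alphas in the
# factorization are the inverse roots of the z-transform.
roots = np.roots([-0.1, 0.3, 1.0])       # np.roots wants descending powers
alphas = np.sort((1.0 / roots).real)     # expect -0.5 and 0.2
```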
The inverse of the general scalar lag polynomial is now simply defined by inverting the component first order lag polynomials one-by-one:

x_t = a(L)^{-1} u_t = (1 - α_k L)^{-1} ... (1 - α_1 L)^{-1} u_t,

which is well defined since we can invert the stable first order lag polynomials one by one.

Example: Consider the polynomial a(L) = (1 + .3L - .1L^2). This can be factored as a(L) = (1 + .5L)(1 - .2L). Since (1 + .5L)^{-1} = 1 - .5L + .25L^2 - ... and (1 - .2L)^{-1} = 1 + .2L + .04L^2 + ..., we have

(1 + .3L - .1L^2)^{-1} = (1 - .5L + .25L^2 - ...)(1 + .2L + .04L^2 + ...) = 1 - .3L + .19L^2 - ... .

This means that we can write the stationary model x_t = -.3x_{t-1} + .1x_{t-2} + u_t as x_t = u_t - .3u_{t-1} + .19u_{t-2} - ... . Finally, notice that to invert the stable model, I need to assume that it is stationary, because this is equivalent to the model having started at -∞.

The difference operator ∆ is defined as ∆ = I - L, giving ∆x_t = x_t - x_{t-1}.

1.2 AR models:

The most commonly used type of time series models are the autoregressive (AR) models:

x_t = µ + a_1 x_{t-1} + ... + a_k x_{t-k} + u_t,

where the innovation u_t is white noise (added) with constant variance σ^2. Here k is a positive integer called the order of the AR-process. (The terms AR-model and AR-process are used interchangeably.) Such a process is usually referred to as an AR(k) process. Most of the intuition for AR processes can be gained from looking at the AR(1) process, which is also by far the most commonly applied model. Consider the process

x_t = µ + a x_{t-1} + u_t.  (*)

If |a| < 1 the process is called stable. For instance, assume that x_0 is a fixed number. Then x_1 = µ + a x_0 + u_1, using (*). You can use (*) again and find x_2 = (1 + a)µ + a^2 x_0 + a u_1 + u_2. Continuing this (often called "iterating equation (*)" or "recursive iteration") you get

x_t = (1 - a^t)/(1 - a) µ + a^t x_0 + a^{t-1} u_1 + ... + a u_{t-1} + u_t.

If the u_t error terms are i.i.d. with variance σ^2, then

var(x_t) = σ^2 [1 + a^2 + ... + (a^2)^{t-1}] = σ^2 (1 - a^{2t})/(1 - a^2) → σ^2/(1 - a^2), for t → ∞.

Notice that this process is not stationary if x_0 is fixed (as the variance varies with t). However,
for large t the process is approximately stationary (the variance is approximately σ^2/(1 - a^2) independently of t), and if the process was started at -∞ then x_0 would be stochastic with variance σ^2/(1 - a^2). So the stationary AR(1) is a process that we imagine started in the infinite past. Using the lag operator methodology, we have (for µ = 0, for simplicity)

(1 - aL) x_t = u_t  ⟺  x_t = (1 - aL)^{-1} u_t  ⟺  x_t = u_t + a u_{t-1} + a^2 u_{t-2} + ... .

Compare this to what we obtained from iterating the process, and you can see that lag-operator manipulation is in essence nothing but a convenient way of iterating the process all the way back to the infinite past.

Notice what happens if a → 1: the variance tends to infinity, and the stationary initial condition cannot be defined, at least in the typical case where u_t ~ N(0, σ^2), since a normal distribution with infinite variance is not well defined. The AR(1) process x_t = x_{t-1} + u_t, with u_t i.i.d. white noise, is called a random walk. Random walk models (and other models with infinite variance) used to be considered somewhat pathological (they also behave strangely when used in regression models). Hall's 1978 demonstration that consumption is a random walk (more precisely, a martingale) in a not-too-farfetched version of the PIH model was therefore a major surprise for the profession. (The difference between a random walk and a martingale is mainly that in a random walk the variance of the u_t term is constant, which it need not be for a martingale.)

The AR(k) model can also be written a(L) x_t = µ + u_t, where the lag-polynomial a(L) associated with the AR(k) model is a(L) = 1 - a_1 L - ... - a_k L^k. (Note, I here use the term "associated with," but you may encounter different ways of denoting the lag-polynomial of the AR(k) model. This usually doesn't create confusion.) We say that the AR(k) model is stable if all roots of the lag-polynomial a(L) are outside the unit circle.

Example: If you are given for example an AR(2) process, like

x_t = 1.5x_{t-1} + x_{t-2} + u_t,
you should be able to tell if the process is stable. In the example we find the roots of the polynomial 1 - 1.5z - z^2 to be r_i = .5(-1.5 ± 2.5), so the roots are -2 and .5. Since .5 is less than one in absolute value, the process is not stable.

Stability versus stationarity:

Theorem: A stationary model is stable. The same theorem said differently: if a process is not stable, it is not stationary. The proof (for the AR(1) case) follows from the derivation above: for a non-stable model the variance goes to infinity, so it cannot be constant and finite. The opposite statement is not true. It is possible for a model to be stable, but not stationary. E.g., this will be the case for a stable model that starts from some fixed x_0 in period 0, unless x_0 is a random variable with exactly the right mean and variance.

1.3 MA models:

The simplest time series models are the moving average (MA) models:

x_t = µ + u_t + b_1 u_{t-1} + ... + b_l u_{t-l} = µ + b(L) u_t,

where the innovation u_t is white noise and the lag-polynomial b(L) is defined by the equation. The positive integer l is called the order of the MA-process. MA processes are quite easy to analyze because they are given as a sum of independent (or uncorrelated) variables. [However, they are not so easy to estimate econometrically: since it is only the x_t's that are observed, the u_t's are unobserved, i.e., latent variables, that one cannot regress on. For the purpose of our class, where we use the models as modeling tools, this is a parenthetical remark.]

Consider the simple scalar MA(1)-model (I leave out the mean for simplicity)

x_t = u_t + b u_{t-1}.  (*)

If u_t is an independent series of N(0, σ_u^2) variables, then this model really only states that x_t has mean zero and autocovariances

γ(0) = (1 + b^2) σ_u^2;  γ(1) = γ(-1) = b σ_u^2;  γ(k) = 0, k ≠ -1, 0, 1.

(Notice that I here figured out what the model says about the distribution of the observed x's.
In some economic models, the u_t terms may have an economic interpretation, but in many applications of time series the MA- (or AR-) model simply serves as a very convenient way of modeling the
autocovariances of the x variables.) The autocorrelations between the observations in the MA(1) model are trivially 0, except for ρ(-1) = ρ(1) = b/(1 + b^2).

Consider equation (*) again. In lag-operator notation it reads

x_t = (1 + bL) u_t,

which can be inverted to

u_t = (1 + bL)^{-1} x_t = x_t - b x_{t-1} + b^2 x_{t-2} - ... .

It is quite obvious that this expression is not meaningful if |b| >= 1, since the power term blows up. In the case where |b| < 1, the right-hand side converges to a well defined random variable.

Definition: The scalar MA(q) model is called invertible if all the roots of the lag-polynomial b(L) (strictly speaking, the corresponding z-transform b(z)) are outside the unit circle.

An ARMA model is a model that has both an AR- and an MA-component. For example, the model

x_t = -.3x_{t-1} + .1x_{t-2} + u_t + .4u_{t-1}

is called an ARMA(2,1) model. (The following is added:) The ARMA model inherits the properties of AR and MA models. An ARMA-model, using lag-operators, takes the form

a(L) x_t = µ + b(L) u_t,

where a(L) and b(L) are lag-polynomials, which almost always are of finite order. If the AR-component of the ARMA-model is stable, we say the ARMA model is stable, and if it is stationary and defined for all t, we can write the model as an infinite MA model:

x_t = µ* + a^{-1}(L) b(L) u_t,

where the constant µ* = a^{-1}(L) µ. This is a result we will need when covering the PIH-model in detail. If the MA component of the ARMA model is invertible, we can also write x_t as an infinite AR-model:

b^{-1}(L) a(L) x_t = b^{-1}(L) µ + u_t.

In empirical work, x_t is our observed data, and this formula shows how the u_t terms could be found from the x_t's, if those were observed since way back.
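The infinite-MA representation of a stable ARMA model can be computed recursively: matching coefficients in a(L) ψ(L) = b(L), each MA(∞) weight ψ_j equals the j-th MA coefficient plus the AR coefficients applied to earlier ψ's. A sketch (Python with numpy; the function name is my own, and the code is an illustration rather than part of the notes), applied to the ARMA(2,1) example above:

```python
import numpy as np

def arma_to_ma(phi, theta, n):
    """First n psi-weights of x_t = sum_i phi_i x_{t-i} + u_t + sum_j theta_j u_{t-j}."""
    psi = np.zeros(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = theta[j - 1] if j - 1 < len(theta) else 0.0
        for i in range(1, min(j, len(phi)) + 1):
            psi[j] += phi[i - 1] * psi[j - i]
    return psi

# The ARMA(2,1) example: x_t = -.3 x_{t-1} + .1 x_{t-2} + u_t + .4 u_{t-1}
psi = arma_to_ma(phi=[-0.3, 0.1], theta=[0.4], n=6)

# With no MA part, the same recursion inverts the AR lag-polynomial
# 1 + .3L - .1L^2, reproducing the coefficients 1, -.3, .19, ...
psi_ar = arma_to_ma(phi=[-0.3, 0.1], theta=[], n=4)
```

The first few weights for the ARMA(2,1) come out as approximately 1, .1, .07, -.011.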
Discrete Time Series Analysis with ARMA Models
Discrete Time Series Analysis with ARMA Models Veronica Sitsofe Ahiati ([email protected]) African Institute for Mathematical Sciences (AIMS) Supervised by Tina Marquardt Munich University of Technology,
1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS
THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
Math 4310 Handout - Quotient Vector Spaces
Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
11 Ideals. 11.1 Revisiting Z
11 Ideals The presentation here is somewhat different than the text. In particular, the sections do not match up. We have seen issues with the failure of unique factorization already, e.g., Z[ 5] = O Q(
Chapter 6: Multivariate Cointegration Analysis
Chapter 6: Multivariate Cointegration Analysis 1 Contents: Lehrstuhl für Department Empirische of Wirtschaftsforschung Empirical Research and und Econometrics Ökonometrie VI. Multivariate Cointegration
U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra
U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory
Review Jeopardy. Blue vs. Orange. Review Jeopardy
Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?
NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS
NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS TEST DESIGN AND FRAMEWORK September 2014 Authorized for Distribution by the New York State Education Department This test design and framework document
ISOMETRIES OF R n KEITH CONRAD
ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x
Some facts about polynomials modulo m (Full proof of the Fingerprinting Theorem)
Some facts about polynomials modulo m (Full proof of the Fingerprinting Theorem) In order to understand the details of the Fingerprinting Theorem on fingerprints of different texts from Chapter 19 of the
discuss how to describe points, lines and planes in 3 space.
Chapter 2 3 Space: lines and planes In this chapter we discuss how to describe points, lines and planes in 3 space. introduce the language of vectors. discuss various matters concerning the relative position
Infinitely Repeated Games with Discounting Ù
Infinitely Repeated Games with Discounting Page 1 Infinitely Repeated Games with Discounting Ù Introduction 1 Discounting the future 2 Interpreting the discount factor 3 The average discounted payoff 4
Probability and Random Variables. Generation of random variables (r.v.)
Probability and Random Variables Method for generating random variables with a specified probability distribution function. Gaussian And Markov Processes Characterization of Stationary Random Process Linearly
DEFINITION 5.1.1 A complex number is a matrix of the form. x y. , y x
Chapter 5 COMPLEX NUMBERS 5.1 Constructing the complex numbers One way of introducing the field C of complex numbers is via the arithmetic of matrices. DEFINITION 5.1.1 A complex number is a matrix of
Time Series Analysis
Time Series Analysis Identifying possible ARIMA models Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos
Time Series Analysis
Time Series 1 April 9, 2013 Time Series Analysis This chapter presents an introduction to the branch of statistics known as time series analysis. Often the data we collect in environmental studies is collected
ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE
ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely for keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.
Notes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
Georgia Department of Education Kathy Cox, State Superintendent of Schools 7/19/2005 All Rights Reserved 1
Accelerated Mathematics 3 This is a course in precalculus and statistics, designed to prepare students to take AB or BC Advanced Placement Calculus. It includes rational, circular trigonometric, and inverse
Vectors 2. The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996.
Vectors 2 The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996. Launch Mathematica. Type
Linear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
