Examples of Stationary Time Series

Overview

1. Stationarity
2. Linear processes
3. Cyclic models
4. Nonlinear models

Stationarity

Strict stationarity (Defn 1.6) The probability distribution of the stochastic process $\{X_t\}$ is invariant under a shift in time,
$$P(X_{t_1} \le x_1, X_{t_2} \le x_2, \ldots, X_{t_k} \le x_k) = F(x_{t_1}, x_{t_2}, \ldots, x_{t_k}) = F(x_{h+t_1}, x_{h+t_2}, \ldots, x_{h+t_k}) = P(X_{h+t_1} \le x_1, X_{h+t_2} \le x_2, \ldots, X_{h+t_k} \le x_k)$$
for any time shift $h$ and values $x_j$.

Weak stationarity (Defn 1.7) (aka second-order stationarity) The mean and autocovariance of the stochastic process are finite and invariant under a shift in time,
$$E\,X_t = \mu_t = \mu, \qquad \operatorname{Cov}(X_t, X_s) = E\,(X_t - \mu_t)(X_s - \mu_s) = \gamma(t, s) = \gamma(t - s).$$
The separation rather than the location in time matters.

Equivalence If the process is Gaussian with finite second moments, then weak stationarity is equivalent to strong stationarity. Strict stationarity implies weak stationarity only if the necessary moments exist.

Relevance Stationarity matters because it provides a framework in which averaging makes sense. Unless properties like the mean and covariance are either fixed or evolve in a known manner, we cannot average the observed data.

What operations produce a stationary process? Can we recognize/identify these in data?
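To make the relevance concrete, here is a small R sketch (my own illustration; the seed, series length, and the comparison of half-sample means are arbitrary choices, not part of the notes) contrasting a stationary series, where averaging is meaningful, with a random walk, where it is not:

    set.seed(1)
    n  <- 400
    wn <- rnorm(n)          # white noise: iid N(0,1), stationary
    rw <- cumsum(rnorm(n))  # random walk: not stationary, its variance grows with t
    half.means <- function(x) c(mean(x[1:(n/2)]), mean(x[(n/2 + 1):n]))
    half.means(wn)          # the two half-sample means agree (both near 0)
    half.means(rw)          # the two half-sample means typically differ markedly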

Moving Average

White noise A sequence of uncorrelated random variables with finite variance,
$$E\,W_t = \mu \ \text{(often } 0\text{)}, \qquad \operatorname{Cov}(W_t, W_s) = \begin{cases} \sigma_w^2 \ \text{(often } 1\text{)} & \text{if } t = s, \\ 0 & \text{otherwise.} \end{cases}$$
The input component ($\{X_t\}$ in what follows) is often modeled as white noise. Strict white noise replaces uncorrelated by independent.

Moving average A stochastic process formed by taking a weighted average of another time series, often formed from white noise. If we define $\{Y_t\}$ from $\{X_t\}$ as
$$Y_t = \sum_{i=-\infty}^{\infty} c_i X_{t-i}$$
then $\{Y_t\}$ is a moving average of $\{X_t\}$. In order to guarantee a finite mean, we require $\{c_i\} \in \ell^1$, the space of absolutely summable sequences, $\sum_i |c_i| < \infty$. In order for $\{Y_t\}$ to have second moments (if the input series $\{X_t\}$ has second moments), we need $\{c_i\} \in \ell^2$. ($\ell^2$ is the space of all square-summable sequences, those for which $\sum_i c_i^2 < \infty$.)

Examples
  trivial      $Y_t = X_t$
  differences  $Y_t = X_t - X_{t-1}$
  3-term       $Y_t = (X_{t+1} + X_t + X_{t-1})/3$
  one-sided    $Y_t = \sum_{i=0}^{\infty} c_i X_{t-i}$

Examples in R The important commands to know are rnorm for simulating Gaussian white noise and filter to form the filtering. (Use the concatenation function c to glue values into a sequence.) Note the practical issue of end values: What value do you get for $y_1$ when forming a moving average? Several approaches (see the sketch after this list) are

1. Leave them missing.
2. Extend the input series by, say, back-casting $x_0, x_{-1}, \ldots$ (such as by the mean or fitting a line).
3. Wrapping the coefficient weights (convolution).
4. Reflecting the coefficients at the ends.
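A minimal R sketch of these commands (the weights, series length, and seed are illustrative choices rather than part of the notes); filter handles approaches 1 and 3 directly:

    set.seed(2)
    x <- rnorm(200)                           # Gaussian white noise input
    y <- filter(x, c(1, 1, 1)/3, sides = 2)   # 3-term moving average (X_{t+1} + X_t + X_{t-1})/3
    y[1:3]                                    # y[1] is NA: approach 1, leave end values missing
    y.wrap <- filter(x, c(1, 1, 1)/3, sides = 2, circular = TRUE)   # approach 3, wrap the weights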

Question: For what choices of the weights $c_j$ does the moving average look smoother than the input realization?

Stationarity of the mean of a moving average $\{Y_t\}$ is immediate if $E\,X_t = \mu_x$ and the sum of the weights is finite. For $\{Y_t\}$ to be second-order stationary (assume that the mean of $\{X_t\}$ is zero, implying that the mean of $\{Y_t\}$ is also zero, and that the weights are square summable),
$$\gamma_y(t, t+h) = E\,Y_t Y_{t+h} = E \sum_{i,j} c_i c_j X_{t-i} X_{t+h-j} = \sum_{i,j} c_i c_j\, \gamma_x(t-i, t+h-j).$$
If the input is weakly stationary, then
$$\gamma_y(h) = \sum_{i,j} c_i c_j\, \gamma_x(t-i, t+h-j) = \sum_{i,j} c_i c_j\, \gamma_x(h - j + i).$$
If it also happens that the input is white noise, then the covariance further simplifies to a lagged inner product of the weights used to construct the moving average,
$$\gamma_y(h) = \sigma_x^2 \sum_i c_i c_{i+h}. \tag{1}$$

Remark The expression (1) can be thought of as a factorization of the covariance matrix associated with the stochastic process $\{Y_t\}$. First, imagine writing $\{Y_t\}$ as a matrix product $Y = C\,X$, where the infinite-length vectors $X$ and $Y$ stack the elements of the two processes, $X = (\ldots, X_{-2}, X_{-1}, X_0, X_1, X_2, \ldots)$ and $Y = (\ldots, Y_{-2}, Y_{-1}, Y_0, Y_1, Y_2, \ldots)$, and $C$ is an infinite-dimensional, square matrix with $c_0$ along the diagonal, arranged by stacking and shifting the coefficients as
$$C = \begin{pmatrix}
\cdots & c_{-1} & c_0 & c_1 & c_2 & c_3 & \cdots \\
\cdots & c_{-2} & c_{-1} & c_0 & c_1 & c_2 & \cdots \\
\cdots & c_{-3} & c_{-2} & c_{-1} & c_0 & c_1 & \cdots
\end{pmatrix}.$$
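Expression (1) is easy to check by simulation. The sketch below (the weights are my own illustrative choice; acf computes sample autocovariances when type = "covariance") compares the lagged inner products of the weights with sample autocovariances of a one-sided moving average of white noise:

    set.seed(3)
    cw <- c(0.5, 0.3, 0.2)                   # illustrative weights c_0, c_1, c_2
    x  <- rnorm(1e5)                         # white noise with sigma_x^2 = 1
    y  <- filter(x, cw, sides = 1)           # one-sided moving average
    y  <- y[!is.na(y)]                       # drop the two missing end values
    theory    <- sapply(0:2, function(h) sum(cw[1:(3 - h)] * cw[(1 + h):3]))  # (1) at h = 0, 1, 2
    empirical <- as.vector(acf(y, lag.max = 2, type = "covariance", plot = FALSE)$acf)
    rbind(theory, empirical)                 # the two rows should nearly agree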

If the input is white noise, then (1) represents the covariance matrix $\Gamma_y$ as the product $\Gamma_y = \sigma_x^2\, C C'$, with $\Gamma_y$ holding the covariances $\Gamma_{y,ij} = \gamma_y(i - j)$.

Recursive Processes (Autoregression)

Feedback Allow past values of the process to influence current values:
$$Y_t = \alpha Y_{t-1} + X_t.$$
Usually, the input series in these models would be white noise.

Stationarity To see when/if such a process is stationary, use back-substitution to write such a series as a moving average:
$$Y_t = \alpha(\alpha Y_{t-2} + X_{t-1}) + X_t = \alpha^2(\alpha Y_{t-3} + X_{t-2}) + X_t + \alpha X_{t-1} = X_t + \alpha X_{t-1} + \alpha^2 X_{t-2} + \cdots$$
Stationarity requires that $|\alpha| < 1$. If $\alpha = 1$, then you have a random walk if $\{X_t\}$ consists of independent inputs.

Linear The ability to represent the autoregression (in this case, a first-order autoregression) as a moving average implies that the autoregression is a linear process (albeit one with an infinite sequence of weights).
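A short R sketch of the feedback recursion (the value of α, the series length, and the 30-term truncation are illustrative assumptions): filter with method = "recursive" computes $Y_t = \alpha Y_{t-1} + X_t$ directly, and the truncated moving-average form gives essentially the same path when $|\alpha| < 1$.

    set.seed(4)
    alpha <- 0.8                                  # |alpha| < 1, so the process is stationary
    x <- rnorm(500)                               # white-noise input
    y <- filter(x, alpha, method = "recursive")   # Y_t = alpha * Y_{t-1} + X_t
    y.ma <- filter(x, alpha^(0:30), sides = 1)    # truncated moving average X_t + alpha X_{t-1} + ...
    plot(y, main = "AR(1) by recursive filtering")
    lines(y.ma, col = 2, lty = 2)                 # nearly the same path once 30 terms are kept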

Cyclic Processes

Random phase model Define a stochastic process as follows. Let $U$ denote a random variable that is uniformly distributed on $[-\pi, \pi]$, and define
$$X_t = R \cos(\phi t + U).$$
(Here $R$ is a constant, but we could allow it to be an independent r.v. with mean zero and positive variance.) Each realization of $\{X_t\}$ is a single sinusoidal wave with frequency $\phi$ and amplitude $R$.

Trig sums are always easier in complex notation unless you use them a lot. You just need to recall Euler's form (with $i = \sqrt{-1}$),
$$\exp(i\theta) = \cos(\theta) + i\sin(\theta).$$
Using Euler's result,
$$\cos(\theta + \lambda) + i\sin(\theta + \lambda) = \exp(i(\theta + \lambda)) = \exp(i\theta)\exp(i\lambda) = (\cos\theta + i\sin\theta)(\cos\lambda + i\sin\lambda) = (\cos\theta\cos\lambda - \sin\theta\sin\lambda) + i(\sin\theta\cos\lambda + \cos\theta\sin\lambda).$$
Hence we get
$$\cos(a + b) = \cos a \cos b - \sin a \sin b, \qquad \sin(a + b) = \cos a \sin b + \cos b \sin a,$$
and
$$\cos(2a) = \cos^2 a - \sin^2 a, \qquad \sin(2a) = 2\cos a \sin a.$$

Stationarity of the random phase model now follows by using these identities,
$$E\,X_t = R\, E\cos(\phi t + U) = R\, E\,(\cos(\phi t)\cos(U) - \sin(\phi t)\sin(U)) = 0,$$
since the integral of sine and cosine over a full period is zero,
$$\int_{-\pi}^{\pi} \cos u\, du = \int_{-\pi}^{\pi} \sin u\, du = 0.$$
Similarly, but with a bit more algebra (expand the cosine terms and collect the terms in the product),
$$\operatorname{Cov}(X_t, X_s) = R^2\, E\,(\cos(\phi t + U)\cos(\phi s + U)) = R^2\, E\big[\cos^2 U \cos(\phi t)\cos(\phi s) + \sin^2 U \sin(\phi t)\sin(\phi s) - \cos U \sin U\,(\cos(\phi t)\sin(\phi s) + \cos(\phi s)\sin(\phi t))\big] = \frac{R^2}{2}(\cos\phi t \cos\phi s + \sin\phi t \sin\phi s) = \frac{R^2}{2}\cos(\phi(t - s)),$$
using the results that the squared norms of the sine and cosine are
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \cos^2 x\, dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} \sin^2 x\, dx = 1/2$$
and the orthogonality property
$$\int_{-\pi}^{\pi} \cos x \sin x\, dx = 0.$$
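The mean and covariance of the random phase model can be checked by Monte Carlo over the phase $U$; the values of $R$, $\phi$, and the number of realizations below are my own illustrative choices:

    set.seed(5)
    R <- 2; phi <- 0.6
    U <- runif(10000, -pi, pi)                          # one random phase per realization
    X <- sapply(1:20, function(t) R * cos(phi * t + U)) # 10000 realizations at times t = 1, ..., 20
    round(colMeans(X), 2)                               # sample means near 0 at every t
    c(empirical = cov(X[, 1], X[, 4]),                  # covariance at separation t - s = 3
      theory    = R^2 / 2 * cos(phi * 3))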

Nonlinear Models

Linear process A moving average is a weighted sum of the input series, which we can express as the linear equation $Y = C\,X$. Nonlinear processes describe a time series that does not simply take a weighted average of the input series. For example, we can allow the weights to depend on the value of the input:
$$Y_t = c_1(X_{t-1}) + c_0(X_t) + c_{-1}(X_{t+1}).$$
The conditions that assure stationarity depend on the nature of the input series and the functions $c_j(X_t)$.

Example To form a nonlinear process, simply let prior values of the input sequence determine the weights. For example, consider
$$Y_t = X_t + \alpha X_{t-1} X_{t-2}. \tag{2}$$
Because the expression for $\{Y_t\}$ is not linear in $\{X_t\}$, the process is nonlinear. Is it stationary? (Think about this situation: Suppose $\{X_t\}$ consists of iid r.v.s. What linear process does $\{Y_t\}$ resemble? If we were to model such data as this linear process, we would miss a very useful, improved predictor.)

Recursive More interesting examples of nonlinear processes use some type of feedback in which the current value of the process $Y_t$ is determined by past values $Y_{t-1}, Y_{t-2}, \ldots$ as well as past values of the input series. For example, the following is a bilinear process:
$$Y_t = X_t + \alpha_1 Y_{t-1} + \alpha_2 Y_{t-1} X_{t-1}.$$
Can we recognize the presence of nonlinearity in data?
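To see why such nonlinearity is easy to miss, the following sketch (the coefficients, series length, and seed are illustrative choices) simulates the process in (2) and the bilinear process above; the sample autocorrelations of (2) look like those of white noise even though $X_{t-1} X_{t-2}$ is a useful predictor of $Y_t$:

    set.seed(6)
    alpha <- 0.8
    x <- rnorm(1000); n <- length(x)
    y <- x[3:n] + alpha * x[2:(n - 1)] * x[1:(n - 2)]   # Y_t = X_t + alpha * X_{t-1} * X_{t-2}, eq. (2)
    acf(y, plot = FALSE)$acf[2:4]                       # near 0: the ACF cannot see the structure
    a1 <- 0.3; a2 <- 0.4                                # bilinear: Y_t = X_t + a1*Y_{t-1} + a2*Y_{t-1}*X_{t-1}
    yb <- numeric(n)
    for (t in 2:n) yb[t] <- x[t] + a1 * yb[t - 1] + a2 * yb[t - 1] * x[t - 1]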