APPENDIX 9: Matrices and Polynomials

The Multiplication of Polynomials

Let α(z) = α_0 + α_1 z + α_2 z^2 + ... + α_p z^p and y(z) = y_0 + y_1 z + y_2 z^2 + ... + y_n z^n be two polynomials of degrees p and n respectively. Then their product γ(z) = α(z)y(z) is a polynomial of degree p + n, of which the coefficients comprise combinations of the coefficients of α(z) and y(z).

A simple way of performing the multiplication is via a table of which the margins contain the elements of the two polynomials and in which the cells contain their products. An example of such a table, with p = n = 2, is given below:

(1)
              |  α_0          α_1 z         α_2 z^2
    ----------+-------------------------------------------
      y_0     |  α_0 y_0      α_1 y_0 z     α_2 y_0 z^2
      y_1 z   |  α_0 y_1 z    α_1 y_1 z^2   α_2 y_1 z^3
      y_2 z^2 |  α_0 y_2 z^2  α_1 y_2 z^3   α_2 y_2 z^4

The product is formed by adding all of the elements of the cells. If the elements on each SW-NE diagonal are gathered together, then a power of the argument z can be factored from their sum, and the associated coefficient is a coefficient of the product polynomial. Thus, from the table above:

(2)  γ_0 + γ_1 z + γ_2 z^2 + γ_3 z^3 + γ_4 z^4
       = α_0 y_0 + (α_0 y_1 + α_1 y_0)z + (α_0 y_2 + α_1 y_1 + α_2 y_0)z^2
         + (α_1 y_2 + α_2 y_1)z^3 + α_2 y_2 z^4.

The coefficients of the product polynomial can also be seen as the elements of the convolution of the sequences {α_0, α_1, ..., α_p} and {y_0, y_1, ..., y_n}.
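The tableau construction can be checked numerically. The sketch below uses illustrative coefficient values (the particular α and y are assumptions, not values from the text): the cells of table (1) are the pairwise products, and summing along each SW-NE diagonal yields the coefficients of γ(z).

```python
import numpy as np

alpha = np.array([2.0, 3.0, 1.0])   # alpha(z) = 2 + 3z + z^2 (illustrative)
y = np.array([1.0, 4.0, 2.0])       # y(z) = 1 + 4z + 2z^2 (illustrative)

# Table (1): the cell in row i, column j holds alpha_j * y_i, carrying z^(i+j).
table = np.outer(y, alpha)

# Gathering each SW-NE diagonal collects the terms that share a power of z,
# giving the coefficients of the product gamma(z) = alpha(z) * y(z).
p, n = len(alpha) - 1, len(y) - 1
gamma = np.array([
    sum(table[i, j] for i in range(n + 1) for j in range(p + 1) if i + j == k)
    for k in range(p + n + 1)
])

# The same coefficients arise as the convolution of the two sequences.
assert np.allclose(gamma, np.convolve(alpha, y))
print(gamma)   # gamma_0, ..., gamma_4: 2, 11, 17, 10, 2
```

The equivalence with `np.convolve` is exactly the final remark of the section: polynomial multiplication and the convolution of coefficient sequences are the same operation.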

D.S.G. POLLOCK: ECONOMETRICS

The coefficients of the product polynomial can also be generated by a simple multiplication of a matrix by a vector. Thus, from the example, we should have

(3)
    [γ_0]   [y_0   0    0 ]           [α_0   0    0 ]
    [γ_1]   [y_1  y_0   0 ]  [α_0]    [α_1  α_0   0 ]  [y_0]
    [γ_2] = [y_2  y_1  y_0]  [α_1]  = [α_2  α_1  α_0]  [y_1]
    [γ_3]   [ 0   y_2  y_1]  [α_2]    [ 0   α_2  α_1]  [y_2]
    [γ_4]   [ 0    0   y_2]           [ 0    0   α_2]

To form the elements of the product polynomial γ(z), powers of z may be associated with the elements of the matrices and the vectors in the manner indicated by the subscripts.

The argument z is usually described as an algebraic indeterminate. Its place can be taken by any of a wide variety of operators. Examples are provided by the difference operator and the lag operator, which are defined in respect of doubly-infinite sequences. It is also possible to replace z by matrices. However, the fundamental theorem of algebra indicates that all polynomial equations have solutions that lie in the complex plane. Therefore, it is customary, albeit unnecessary, to regard z as a complex number.

Toeplitz Matrices: Polynomials with Matrix Arguments

There are two matrix arguments of polynomials that are of particular interest in time-series analysis. The first is the matrix lag operator of order T, denoted by

(4)  L_T = [e_1, e_2, ..., e_{T-1}, 0],

which is formed from the identity matrix I_T = [e_0, e_1, ..., e_{T-1}] by deleting the leading vector e_0 and appending a column of zeros to the end of the array. In effect, L_T is the matrix with units on the first subdiagonal band and zeros elsewhere. Likewise, L_T^2 has units on the second subdiagonal band and zeros elsewhere, whereas L_T^{T-1} has a single unit in the bottom-left (i.e. SW) corner, and L_T^{T+r} = 0 for all r >= 0. In addition, L_T^0 = I_T is identified with the identity matrix of order T. The example of L_4 is given below:

(5)  L_4^0 = I_4,
     L_4   = [0 0 0 0; 1 0 0 0; 0 1 0 0; 0 0 1 0],
     L_4^2 = [0 0 0 0; 0 0 0 0; 1 0 0 0; 0 1 0 0],
     L_4^3 = [0 0 0 0; 0 0 0 0; 0 0 0 0; 1 0 0 0].

Putting L_T in place of the argument z of a polynomial in non-negative powers creates a so-called Toeplitz banded lower-triangular matrix of order T. An example is provided by the quadratic polynomials α(z) = α_0 + α_1 z + α_2 z^2 and y(z) = y_0 + y_1 z + y_2 z^2. Then there is α(L_3)y(L_3) = y(L_3)α(L_3), which is written explicitly as

(6)
    [α_0   0    0 ] [y_0   0    0 ]   [y_0   0    0 ] [α_0   0    0 ]
    [α_1  α_0   0 ] [y_1  y_0   0 ] = [y_1  y_0   0 ] [α_1  α_0   0 ]
    [α_2  α_1  α_0] [y_2  y_1  y_0]   [y_2  y_1  y_0] [α_2  α_1  α_0]

The commutativity of the two matrices in multiplication reflects their polynomial nature. Such commutativity is available both for lower-triangular Toeplitz matrices and for upper-triangular Toeplitz matrices, which correspond to polynomials in negative powers of z. The commutativity is not available for mixtures of upper-triangular and lower-triangular matrices; and, in this respect, the matrix algebra differs from the corresponding polynomial algebra. An example is provided by the matrix version of the following polynomial identity:

(7)  (1 - z)(1 - z^{-1}) = 2z^0 - (z + z^{-1}) = (1 - z^{-1})(1 - z).

Putting L_3 in place of z in each of these expressions creates three different matrices. This can be illustrated with the case of L_3. Then (1 - z)(1 - z^{-1}) gives rise to

(8)
    [ 1   0   0] [1  -1   0]   [ 1  -1   0]
    [-1   1   0] [0   1  -1] = [-1   2  -1],
    [ 0  -1   1] [0   0   1]   [ 0  -1   2]

whereas (1 - z^{-1})(1 - z) gives rise to

(9)
    [1  -1   0] [ 1   0   0]   [ 2  -1   0]
    [0   1  -1] [-1   1   0] = [-1   2  -1].
    [0   0   1] [ 0  -1   1]   [ 0  -1   1]
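The matrix-vector form of the product, as in equation (3), can be sketched as follows with illustrative coefficients: the banded matrix is built by placing shifted copies of {y_0, y_1, y_2} in its columns, and the result agrees with the convolution of the two coefficient sequences.

```python
import numpy as np

alpha = np.array([2.0, 3.0, 1.0])   # alpha(z) = 2 + 3z + z^2 (illustrative)
y = np.array([1.0, 4.0, 2.0])       # y(z) = 1 + 4z + 2z^2 (illustrative)

p, n = len(alpha) - 1, len(y) - 1

# Banded (p+n+1) x (p+1) matrix of equation (3): column k holds the
# sequence {y_0, ..., y_n} shifted down by k places.
Y = np.zeros((p + n + 1, p + 1))
for k in range(p + 1):
    Y[k:k + n + 1, k] = y

gamma = Y @ alpha   # coefficients of gamma(z) = alpha(z)y(z)

# The matrix-vector product reproduces the convolution of the sequences.
assert np.allclose(gamma, np.convolve(alpha, y))
```

By symmetry, the same γ is obtained from the banded matrix of the α's applied to the vector of the y's, which is the second equality in (3).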

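The failure of commutativity for mixtures of lower-triangular and upper-triangular Toeplitz matrices can be verified directly. A minimal numpy sketch, evaluating the three expressions of the polynomial identity (7) at z = L_3, with F = L' standing in place of z^{-1}:

```python
import numpy as np

T = 3
I = np.eye(T)
L = np.eye(T, k=-1)   # lag matrix L_3: units on the first subdiagonal
F = L.T               # F = L', standing in place of z^{-1}

P1 = (I - L) @ (I - F)   # (1 - z)(1 - z^{-1}) at z = L_3, cf. (8)
P2 = (I - F) @ (I - L)   # (1 - z^{-1})(1 - z) at z = L_3, cf. (9)
P3 = 2 * I - (L + F)     # the Toeplitz matrix from 2z^0 - (z + z^{-1})

# The three matrices differ, although the scalar polynomials are identical.
assert not np.allclose(P1, P2)
assert not np.allclose(P1, P3) and not np.allclose(P2, P3)

# Each product differs from the Toeplitz matrix in a single corner element.
print(P3 - P1)   # unit in the NW corner
print(P3 - P2)   # unit in the SE corner
```

The corner discrepancies are end effects of the finite order T; they vanish, relative to the whole matrix, as T increases.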
A Toeplitz matrix, in which each band contains only repetitions of the same element, is obtained from the remaining expression by replacing z by L_3 in 2z^0 - (z + z^{-1}), wherein z^0 is replaced by the identity matrix:

(10)
    2I_3 - (L_3 + L_3') = [ 2  -1   0]
                          [-1   2  -1].
                          [ 0  -1   2]

Example. It is straightforward to derive the dispersion matrices that are found within the formulae for the finite-sample estimators from the corresponding autocovariance generating functions. Let γ(z) = γ_0 + γ_1(z + z^{-1}) + γ_2(z^2 + z^{-2}) + ... denote the autocovariance generating function of a stationary stochastic process. Then the corresponding dispersion matrix for a sample of T consecutive elements drawn from the process is

(11)  Γ = γ_0 I_T + Σ_{τ=1}^{T-1} γ_τ (L_T^τ + F_T^τ),

where F_T = L_T' is in place of z^{-1}. Since L_T and F_T are nilpotent of degree T, such that L_T^q = F_T^q = 0 when q >= T, the index of summation has an upper limit of T - 1.

Circulant Matrices

In the second of the matrix representations, which is appropriate to a frequency-domain interpretation of filtering, the argument z is replaced by the full-rank circulant matrix

(12)  K_T = [e_1, e_2, ..., e_{T-1}, e_0],

which is obtained from the identity matrix I_T = [e_0, e_1, ..., e_{T-1}] by displacing the leading column to the end of the array. This is an orthonormal matrix of which the transpose is the inverse, such that K_T' K_T = K_T K_T' = I_T. The powers of the matrix form a T-periodic sequence such that K_T^{T+q} = K_T^q for all q. The periodicity of these powers is analogous to the periodicity of the powers of the argument z = exp{-i2π/T}, which is to be found in the Fourier transform of a sequence of order T.
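The defining properties of K_T just stated, orthonormality and the T-periodicity of its powers, can be confirmed in a few lines of numpy; T = 4 is an illustrative choice:

```python
import numpy as np

T = 4
I = np.eye(T)
# K_T: the identity matrix with its leading column moved to the end.
K = np.roll(I, -1, axis=1)

# Orthonormality: the transpose of K_T is its inverse.
assert np.allclose(K.T @ K, I) and np.allclose(K @ K.T, I)

# T-periodicity of the powers: K_T^T = I_T, hence K_T^{T+q} = K_T^q.
assert np.allclose(np.linalg.matrix_power(K, T), I)
for q in range(1, T):
    assert np.allclose(np.linalg.matrix_power(K, T + q),
                       np.linalg.matrix_power(K, q))
```

In effect, K_T is the lag matrix L_T made circular: premultiplying a vector by K_T shifts its elements down one place and wraps the last element around to the top, instead of discarding it.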

The example of K_4 is given below:

(13)  K_4^0 = I_4,
      K_4   = [0 0 0 1; 1 0 0 0; 0 1 0 0; 0 0 1 0],
      K_4^2 = [0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0],
      K_4^3 = [0 1 0 0; 0 0 1 0; 0 0 0 1; 1 0 0 0].

The matrices K_T^0 = I_T, K_T, ..., K_T^{T-1} form a basis for the set of all circulant matrices of order T, a circulant matrix X = [x_{ij}] of order T being defined as a matrix in which the value of the generic element x_{ij} is determined by the index {(i - j) mod T}. This implies that each column of X is equal to the previous column rotated downwards by one element. It follows that there exists a one-to-one correspondence between the set of all polynomials of degree less than T and the set of all circulant matrices of order T. Therefore, if α(z) is a polynomial of degree less than T, then there exists a corresponding circulant matrix

(14)  A = α(K_T) = α_0 I_T + α_1 K_T + ... + α_{T-1} K_T^{T-1}.

A convergent sequence of indefinite length can also be mapped into a circulant matrix. Thus, if {γ_i} is an absolutely summable sequence obeying the condition that Σ|γ_i| < ∞, then the z-transform of the sequence, which is defined by γ(z) = Σ_j γ_j z^j, is an analytic function on the unit circle. In that case, replacing z by K_T gives rise to a circulant matrix Γ = γ(K_T) with finite-valued elements. In consequence of the periodicity of the powers of K_T, it follows that

(15)  Γ = {Σ_{j=0}^∞ γ_{jT}} I_T + {Σ_{j=0}^∞ γ_{jT+1}} K_T + ... + {Σ_{j=0}^∞ γ_{jT+T-1}} K_T^{T-1}
        = φ_0 I_T + φ_1 K_T + ... + φ_{T-1} K_T^{T-1}.

Given that {γ_i} is a convergent sequence, it follows that the sequence of the matrix coefficients {φ_0, φ_1, ..., φ_{T-1}} converges to {γ_0, γ_1, ..., γ_{T-1}} as T increases. Notice that the matrix φ(K_T) = φ_0 I_T + φ_1 K_T + ... + φ_{T-1} K_T^{T-1}, which is derived from a polynomial φ(z) of degree T - 1, is a synonym for the matrix γ(K_T), which is derived from the z-transform of an infinite convergent sequence.
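The correspondence between polynomials of degree less than T and circulant matrices can be sketched numerically, using illustrative coefficients: evaluating x(K_T) produces a matrix in which each column is the previous column rotated downwards by one element, and two such matrices commute, their product being circulant in turn.

```python
import numpy as np

T = 4
K = np.roll(np.eye(T), -1, axis=1)   # circulant operator K_T

def poly_of_matrix(coeffs, M):
    """Evaluate a_0 I + a_1 M + a_2 M^2 + ... for a square matrix M."""
    A, P = np.zeros_like(M), np.eye(M.shape[0])
    for c in coeffs:
        A += c * P
        P = P @ M
    return A

x = [1.0, 2.0, 0.5, 0.0]     # illustrative coefficients, degree < T
y = [3.0, -1.0, 0.0, 0.25]

X = poly_of_matrix(x, K)
Y = poly_of_matrix(y, K)

# Each column of a circulant is the previous column rotated down one place.
for j in range(1, T):
    assert np.allclose(X[:, j], np.roll(X[:, j - 1], 1))

# Circulant matrices commute, and their product is again circulant.
assert np.allclose(X @ Y, Y @ X)
XY = X @ Y
assert np.allclose(XY[:, 1], np.roll(XY[:, 0], 1))
```

The leading column of x(K_T) is simply the coefficient vector (x_0, x_1, ..., x_{T-1})', which is the one-to-one correspondence in concrete form.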

The polynomial representation is enough to establish that circulant matrices commute in multiplication and that their product is also a polynomial in K_T. That is to say:

(16)  If X = x(K_T) and Y = y(K_T) are circulant matrices, then XY = YX is also a circulant matrix.

The matrix operator K_T has a spectral factorisation that is particularly useful in analysing the properties of the discrete Fourier transform. To demonstrate this factorisation, we must first define the so-called Fourier matrix. This is a symmetric matrix

(17)  U_T = T^{-1/2} [W_T^{jt}; t, j = 0, ..., T-1],

of which the generic element in the jth row and tth column is

(18)  W_T^{jt} = exp(-i2πtj/T) = cos(ω_j t) - i sin(ω_j t),

where ω_j = 2πj/T. The matrix U_T is unitary, which is to say that it fulfils the condition

(19)  Ū_T U_T = U_T Ū_T = I_T,

where Ū_T = T^{-1/2} [W_T^{-jt}; t, j = 0, ..., T-1] denotes the conjugate matrix. The operator K_T can be factorised as

(20)  K_T = Ū_T D_T U_T = U_T D̄_T Ū_T,

where

(21)  D_T = diag{1, W_T, W_T^2, ..., W_T^{T-1}}

is a diagonal matrix whose elements are the T roots of unity, which are found on the circumference of the unit circle in the complex plane. Observe also that D_T is T-periodic, such that D_T^{q+T} = D_T^q, and that K_T^q = Ū_T D_T^q U_T for any integer q. Since the powers of K_T form the basis for the set of circulant matrices, it follows that such matrices are amenable to a spectral factorisation based on (20).

Example. Consider, in particular, the circulant autocovariance matrix that is obtained by replacing the argument z in the autocovariance generating function γ(z) by the matrix K_T. Imagine that the autocovariances form a doubly-infinite sequence, as is the case for an autoregressive or an autoregressive moving-average process:

(22)  Ω = γ(K_T) = γ_0 I_T + Σ_{τ=1}^∞ γ_τ (K_T^τ + K_T^{-τ})
                 = φ_0 I_T + Σ_{τ=1}^{T-1} φ_τ (K_T^τ + K_T^{-τ}).

Here, φ_τ; τ = 0, ..., T-1, are the wrapped coefficients that are obtained from the original coefficients of the autocovariance generating function in the manner indicated by (15). The spectral factorisation gives

(23)  Ω = γ(K_T) = Ū_T γ(D_T) U_T.

The jth element of the diagonal matrix γ(D_T) is

(24)  γ(exp{iω_j}) = γ_0 + 2 Σ_{τ=1}^∞ γ_τ cos(ω_j τ).

This represents the cosine Fourier transform of the sequence of the ordinary autocovariances; and it corresponds to an ordinate (scaled by 2π), sampled at the point ω_j, of the spectral density function of the linear (i.e. non-circular) stationary stochastic process.
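The spectral factorisation and the cosine-transform eigenvalues of (22)-(24) can be illustrated numerically. The sketch below assumes AR(1)-type autocovariances γ_τ = ρ^τ with ρ = 0.5 (an illustrative choice), and it truncates the sequence at τ = T - 1 rather than wrapping it, for simplicity:

```python
import numpy as np

T = 8
idx = np.arange(T)

# Fourier matrix U_T = T^{-1/2}[W^{jt}] with W = exp(-i*2*pi/T), as in (17)-(18).
U = np.exp(-1j * 2 * np.pi * np.outer(idx, idx) / T) / np.sqrt(T)
D = np.diag(np.exp(-1j * 2 * np.pi * idx / T))   # D_T of (21)
I = np.eye(T)
K = np.roll(I, -1, axis=1)                       # circulant operator K_T

assert np.allclose(U.conj() @ U, I)              # U_T is unitary, as in (19)
assert np.allclose(K, U.conj() @ D @ U)          # spectral factorisation (20)

# Circulant autocovariance matrix in the spirit of (22), with the
# illustrative autocovariances gamma_tau = rho**tau (rho is an assumption).
rho = 0.5
Omega = 1.0 * I
for tau in range(1, T):
    Ktau = np.linalg.matrix_power(K, tau)
    Omega += rho**tau * (Ktau + Ktau.T)          # K^{-tau} = (K^tau)'

# Per (23)-(24), the eigenvalues of Omega are the cosine transform of the
# autocovariances, evaluated at the frequencies omega_j = 2*pi*j/T.
eigs = np.linalg.eigvalsh(Omega)                 # real, ascending
spectrum = np.array([
    1.0 + 2 * sum(rho**tau * np.cos(2 * np.pi * j * tau / T)
                  for tau in range(1, T))
    for j in idx
])
assert np.allclose(eigs, np.sort(spectrum))
```

The final assertion is the content of the closing remark above: diagonalising the circulant dispersion matrix recovers ordinates of the (sampled) spectrum of the process.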