This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained by writing to the IEEE. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.
ROOT-EXCHANGE PROPERTY OF CONSTRAINED LINEAR PREDICTIVE MODELS

Tom Bäckström

Helsinki University of Technology (HUT), Laboratory of Acoustics and Audio Signal Processing
P.O. Box 3000, FIN-02015 HUT

ABSTRACT

In recent works, we have studied linear predictive models constrained by time-domain filters. In the present study, we examine the one-dimensional case in more detail. Firstly, we obtain root-exchange properties between the roots of an all-pole model and the corresponding constraints. Secondly, using the root-exchange property we can construct a novel matrix decomposition $A^T R A^{\#} = I$, where $R$ is a real positive definite symmetric Toeplitz matrix, superscript $\#$ signifies reversal of rows, and $I$ is the identity matrix. In addition, there also exists an inverse matrix decomposition $C^T R^{-1} C^{\#} = I$, where $C$ is a Vandermonde matrix. Potential applications are discussed.

1. INTRODUCTION

Let us define the residual $e_n$ of a linear predictive model of order $m$ as
$$e_n = x_n + \sum_{i=0}^{m-1} h_i \tilde{x}_{n-i}, \qquad (1)$$
where $x_n$ is the wide-sense stationary input signal, $h_i$ ($0 \le i \le m-1$) are the model parameters, and $\tilde{x}_n = c_n * x_n$, where $c_n = \sum_{k=0}^{l} c_k \delta_{n-k}$ is the impulse response of a causal FIR filter. In the literature, models of this type are sometimes named generalised linear predictive models. The transfer function of the predictor can be written as
$$A(z) = 1 + \sum_{i=0}^{m-1} \sum_{k=0}^{l} h_i c_k z^{-i-k} = 1 + C(z)H(z), \qquad (2)$$
where $C(z)$ and $H(z)$ are the Z-transforms of $c_n$ and $h_n$, respectively. In contrast to conventional linear prediction (LP) (e.g. [2]), this model is constrained, since it defines a predictor of order $m + l$ with $m$ model parameters. Note that, according to Eq. 2, $x_n$ is predicted from the samples of $\tilde{x}_n$ from the current to the $(m-1)$th delayed sample. One would therefore easily be led to believe that the predictor is non-causal. Fortunately, however, if the FIR filter is non-trivial (i.e. $c_0 \ne 0$ and at least one of the coefficients $c_k$ for $k \ge 1$ is nonzero), then the residual $e_n$ can be determined, since its computation (Eq. 1) contains terms $x_{n-i}$, where $i$
lies in $[0, m+l]$, and the optimisation problem is unambiguous. The transfer function of the predictor obtained will therefore have a coefficient of $z^0$ that is generally not equal to one.

In matrix notation, Eq. 1 becomes
$$e_n = b^T x + h^T C^T x, \qquad (3)$$
where $x = [x_0 \, \cdots \, x_{m+l}]^T$ is the input signal, $h = [h_0 \, \cdots \, h_{m-1}]^T$ is the parameter vector, and $b = [1 \; 0 \, \cdots \, 0]^T$. The convolution matrix $C \in \mathbb{R}^{(m+l) \times m}$ is defined such that its elements are $C_{ij} = c(j-i)$. Straightforward minimisation of the expected value $E$ of the squared residual, $\partial E[e_n^2] / \partial h = 0$, yields $C^T R C h = -C^T R b$, where $R = E[x x^T]$ is the autocorrelation matrix. This solution is unique, since $R$ is positive definite and $C$ is of full rank; the solution is thus the global minimum.

Equivalently, the minimum of the residual can be found by defining $a = b + Ch$, which yields $E[e_n^2] = a^T R a$. Since $a - b = Ch$, and $a - b$ is therefore in the column space of $C$, a suitable constraint is $C_0^T (a - b) = 0$, provided that $b$ is not in the nullspace of $C$. The nullspace $C_0$ of $C$ is defined by $C^T C_0 = 0$, where $C_0 \in \mathbb{R}^{(m+l) \times l}$. The objective function is then
$$\eta(a, g) = a^T R a - g^T C_0^T (a - b), \qquad (4)$$
and the minimum is at $R a = C_0 g$, where the Lagrange multiplier vector $g = [\gamma_1, \ldots, \gamma_l]^T$ can be solved from the equation $C_0^T R^{-1} C_0 g = C_0^T b$ [3].

In our earlier work, we have shown that $A(z)$ is stable if the zeros $\xi_i$ ($1 \le i \le l$) of $C(z)$ are real with $|\xi_i| < 1$ and $\gamma_i > 0$ ($1 \le i \le l$) [4, 5]. In this work, however, we will concentrate on the properties of this model for $l = 1$. The results presented in this work are mostly of theoretical significance and cannot directly be applied to real-world applications. Nevertheless, gaining information on the relation between Toeplitz matrices and the roots of the corresponding polynomials could supply improvements to root-finding algorithms, as well as provide us with the ability to control the stability of arbitrary predictors.
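The one-dimensional case can be illustrated numerically. The following sketch is hypothetical (not from the paper): it uses NumPy, an illustrative AR(1)-type autocorrelation matrix $R_{ij} = 0.8^{|i-j|}$ and $\xi = 0.4$, both chosen arbitrarily, and checks that the minimiser of $a^T R a$ under the single linear constraint of the $l = 1$ case satisfies $R a \propto [1, \xi, \ldots, \xi^m]^T$ and is minimum-phase.

```python
import numpy as np

# Hypothetical illustration (not from the paper): the l = 1 constrained
# problem.  Minimising a^T R a subject to the single linear constraint
# c^T (a - b) = 0, with c = [1, xi, ..., xi^m]^T and b = [1, 0, ..., 0]^T,
# is solved by a proportional to R^{-1} c (the condition R a = gamma c).
m, xi = 8, 0.4
k = np.arange(m + 1)
R = 0.8 ** np.abs(k[:, None] - k[None, :])  # positive definite symmetric Toeplitz
c = xi ** k                                 # constraint vector [1, xi, ..., xi^m]

a = np.linalg.solve(R, c)
a /= c @ a                                  # scale so that c^T a = c^T b = 1

# Any feasible perturbation w (c^T w = 0) can only increase the energy a^T R a:
rng = np.random.default_rng(1)
w = rng.standard_normal(m + 1)
w -= (c @ w) / (c @ c) * c                  # project w onto the constraint plane
assert all((a + t * w) @ R @ (a + t * w) >= a @ R @ a for t in (1e-3, 0.1, 1.0))

# Minimum-phase property: for |xi| < 1 all zeros of A(z) lie inside the unit circle.
assert np.max(np.abs(np.roots(a))) < 1.0
print("constrained optimum is minimum-phase")
```

Projecting the random perturbation onto the constraint plane keeps it feasible; positive definiteness of $R$ then guarantees that the quadratic form can only grow away from the optimum.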
2. MINIMUM-PHASE PROPERTY

Definition 1. In this article, we adopt the following notation:
$d$ — a column vector $d = [d_0, \ldots, d_m]^T$;
$d^{\#}$ — vector $d$ with its rows reversed;
$d^{\pm}$ — the symmetric and antisymmetric parts of vector $d$, that is, $d^{\pm} = d \pm d^{\#}$;
$D(z)$ — the polynomial corresponding to the Z-transform of vector $d$;
$\mathcal{T}$ — the set of all real positive definite symmetric Toeplitz matrices.

Lemma 1 (Nullspace). Let the polynomial $C(z)$ be defined as in Eq. 2 and the corresponding convolution matrix $C$ as defined in Eq. 3. If $C(\xi_i) = 0$, then it follows that the vector $c_i = [1, \xi_i, \ldots, \xi_i^{m+l-1}]^T$ is in the nullspace of $C$. Consequently, the complete nullspace $C_0$ of $C$ is the set of vectors $c_i$, where the $\xi_i$ ($1 \le i \le l$) are the zeros of $C(z)$, provided that the $\xi_i$ are distinct.

Matrices of the form $C_0$ (in Lemma 1) are known as Vandermonde matrices [6]. In the following, we will drop the subscript 0 and denote by $C$ all Vandermonde matrices that appear.

Lemma 2 (Minimum-phase constraint). Let $R \in \mathcal{T}$ and let predictor $a$ be the solution to ($l = 1$)
$$R a_i = [1, \xi_i, \xi_i^2, \ldots, \xi_i^m]^T. \qquad (5)$$
Predictor $a$ is minimum-phase if $\xi_i \in \mathbb{C}$ with $|\xi_i| < 1$. Further, for $|\xi_i| = 1$, predictor $a$ will have its roots on the unit circle.

Proofs of these lemmata were presented in [4, 5].

3. THE ROOT-EXCHANGE PROPERTY

Lemma 3 (Root exchange). Let $a_0$ be a solution to Eq. 5 with $\xi_0 \in \mathbb{C}$, $|\xi_0| < 1$, and let $\xi_i$ ($1 \le i \le m$) be the zeros of polynomial $A_0(z)$ (with coefficients $a_0$). Then the polynomials $A_i(z)$ ($1 \le i \le m$), with coefficients $a_i$ solving Eq. 5 using $\xi_i$, have zeros $\xi_k$ for $i \ne k$. In other words, exchanging a root $\xi_k$ from $A_i(z)$ for $\xi_i$ corresponds in Eq. 5 to replacing $\xi_i$ with $\xi_k$. Then
$$A_i(z)(1 - \xi_i z^{-1}) = \zeta_{ik} A_k(z)(1 - \xi_k z^{-1}),$$
where $\zeta_{ik}$ is some scalar that depends on $i$ and $k$. The root-exchange property of one root is illustrated in Fig. 1.

Proof. Polynomial $A_i(z)$ can be factored as $A_i(z) = (1 - \xi_k z^{-1}) B(z)$, or equivalently,
$$a_i = B \begin{bmatrix} 1 \\ -\xi_k \end{bmatrix}, \quad B = \begin{bmatrix} b_0 & 0 \\ b_1 & b_0 \\ \vdots & \vdots \\ 0 & b_{m-1} \end{bmatrix}. \qquad (6)$$

[Figure 1 shows the zeros of the models $A_1$ and $A_2$ in the complex plane (axes: Real Part, Imaginary Part).]

Fig. 1. An example of the root-exchange property of constrained LP models $A_i$ ($m = 8$). Coefficient vectors $a_i$ of $A_i$ solve $R a_i = c_i$, where $c_i = (\xi_i^k)$,
$k = 0, \ldots, m$, and $\xi_i$ is a root of $A_j$ for $i \ne j$.

From $R a_i = c_i$ we obtain
$$c_i = R B \begin{bmatrix} 1 \\ -\xi_k \end{bmatrix} = \begin{bmatrix} d & d' \end{bmatrix} \begin{bmatrix} 1 \\ -\xi_k \end{bmatrix}, \qquad (7)$$
which defines the coefficient vectors $d$ and $d'$ (the columns of $RB$) uniquely. From Eq. 7 we can readily obtain the relation
$$\begin{bmatrix} d & d' \end{bmatrix} \begin{bmatrix} 1 \\ -\xi_i \end{bmatrix} = \eta\, c_k, \qquad (8)$$
where $\eta$ is a positive constant. Recall that matrix $B$ corresponds to $a_i$ deconvolved by $(1 - \xi_k z^{-1})$, and Eq. 8 corresponds to convolution by another term, $(1 - \xi_i z^{-1})$. Since the result on the right-hand side of Eq. 8 is $\eta c_k$, it must be equal to $\eta c_k = \eta R a_k$. The corresponding polynomials $A_i(z)$ and $A_k(z)$ are thus equal except for a scaling coefficient and an exchanged zero, $\xi_k$ for $\xi_i$. This concludes the proof.

Lemma 4 (Symmetric root exchange). Let $a_0$ be a solution to Eq. 5 with $\xi_0 \in \mathbb{R}$. Then for the symmetric or antisymmetric part $a_0^{\pm}$ we have $R a_0^{\pm} = c_0^{\pm}$. Let $\xi_i$ with $1 \le i \le m$ be the roots of $A_0^{\pm}(z)$. Then
$$R a_i^{\pm} = c_i^{\pm} \qquad (9)$$
for $0 \le i \le m$. In other words, $A_i^{\pm}(z)$ has zeros $\xi_k$ with $i \ne k$, and $|\xi_i| = 1$ for $i \ne 0$.
Lemma 5 (Reverse decomposition). For a real, positive definite and symmetric Toeplitz $(m+1) \times (m+1)$ matrix $R$, there exists a decomposition matrix $A \in \mathbb{C}^{(m+1) \times (m+1)}$ such that $A^T R A^{\#} = I$, where $I$ is the identity matrix.

Note that $A$ is not unique, and that we indeed use the transposition operator $T$ and not the complex conjugate transpose $H$, also known as the Hermitian transpose. Further, the equation $A^T R A^{\#} = I$ is equivalent to $(A^{\#})^T R A = I$, since $R$ is symmetric.

[Figure 2 shows the zeros of the models $A_1^{+}$ and $A_2^{+}$ on the unit circle (axes: Real Part, Imaginary Part).]

Fig. 2. An example of the symmetric root-exchange property of constrained LP models $A_i^{+}$ ($m = 8$). Coefficient vectors $a_i^{+}$ of $A_i^{+}$ solve $R a_i^{+} = c_i^{+}$, where $c_i^{+}$ is the symmetric part of $c_i = (\xi_i^k)$, $k = 0, \ldots, m$, and $\xi_i$ is a root of $A_j^{+}$ for $i \ne j$.

The root-exchange property of conjugate root pairs on the unit circle is illustrated in Fig. 2.

Proof (of Lemma 4). Since predictor $a_0$ is minimum-phase due to Lemma 2, its symmetric and antisymmetric parts $a_0^{\pm}$ will have their roots on the unit circle [7, 9]. By factoring a conjugate root pair $\xi_i$ and $\xi_i^{*}$ ($1 \le i \le m$) from $a_0^{\pm}$, that is, $A_0^{\pm}(z) = (1 - (\xi_i + \xi_i^{*}) z^{-1} + z^{-2}) B(z)$, we can readily prove the root-exchange property similarly as in the proof of Lemma 3.

Note that in the proof above, $\xi_0$ in $c_0^{\pm}$ will appear in $a_i^{\pm}$ ($1 \le i \le m$) as a root pair $\xi_0$ and $\xi_0^{-1}$. Especially, when $\xi_0 \to 0$, the first and last coefficients of $a_i^{\pm}$ will tend to zero. Further, note that by proper scaling we can make $a_i^{\pm}$ and $c_i^{\pm}$ real. This is a significant advantage in the reduction of complexity (between the root-exchange and symmetric root-exchange properties) if we are to apply root exchange in a computational algorithm.

An interesting consequence of Lemma 4 is that $a_i^{\pm}$ always has distinct roots. To see this, let $\xi_k$ be a double root of $A_i^{\pm}(z)$. Then $A_k^{\pm}(z)$ will also have the root $\xi_k$ by the root-exchange rule. It follows that $0 = A_k^{\pm}(\xi_k) \propto a_k^{\pm T} c_k^{\pm} = a_k^{\pm T} R a_k^{\pm}$. This is a contradiction, since $R$ is positive definite, and the assumption that $\xi_k$ is a double root is therefore defective. Moreover, if we consider the
family of equations Eq. 9 with $\xi_0 \in (-1, +1)$, none of these models will have overlapping roots. In other words, the roots follow monotonic paths on the unit circle as a function of $\xi_0$.

Proof (of Lemma 5). Let the vectors $a_i$, $\xi_i$ and $c_i$ be defined as in Lemma 3. Further, let matrices $A$ and $C$ consist of the column vectors $a_i$ and $c_i$, respectively, such that $RA = C$. We know that the roots of $A_i(z)$ are $\xi_k$ with $i \ne k$ and $0 \le i, k \le m$. Consequently, $A_i(\xi_k) = a_i^T c_k^{\#} = 0$ for $i \ne k$, and thus $A^T C^{\#} = D$, where $D$ is a diagonal matrix with positive elements. It follows that $D = A^T C^{\#} = A^T R A^{\#}$. Since the coefficients of $D$ are nonzero, we can, with suitable scaling of the columns of $A$, find an $\hat{A}$ such that $\hat{A}^T R \hat{A}^{\#} = I$.

As a corollary to Lemma 5, we can readily see that there exists a matrix decomposition for the inverse of $R$ such that $\hat{C}^T R^{-1} \hat{C}^{\#} = I$. The inverse $R^{-1}$ exists, since $R$ is strictly positive definite.

While Lemma 5 uses Lemma 3, a similar decomposition can be constructed using the result of Lemma 4. Then both the symmetric and the antisymmetric parts $a_i^{\pm}$ and $c_i^{\pm}$ have to be included in the matrices $A$ and $C$, respectively. The advantage of this approach is that the matrices $A$ and $C$ can then be scaled to be real, while the construction above produces matrices $A$ and $C$ that are generally complex.

4. DISCUSSION AND SUMMARY

We have presented root-exchange properties between the all-pole form of the constrained linear predictive model and the corresponding constraint. Further, as a corollary, we obtained novel matrix decompositions for real positive definite symmetric Toeplitz matrices and their inverses, the latter with a Vandermonde matrix. Intuitively, the root-exchange property can be explained by the fact that the constraint $c_i$ is equivalent to requiring that $A_i(z)$ is strictly positive at $\xi_i$, that is, $A_i(\xi_i) = a_i^T c_i = a_i^T R a_i > 0$. Therefore, $A_i(z)$ cannot have a zero at $\xi_i$. When a zero $\xi_k$ is exchanged for $\xi_i$, then $A_k(\xi_k) > 0$, and it is not a surprise that $A_k(\xi_i) = 0$. The conventional LP model is a special case
of constrained linear predictive models, whereby Eq. 5 with $\xi_0 = 0$ becomes $R a_0 = [1, 0, \ldots, 0]^T$. If the vectors $a_i$ are defined with the root-exchange property as in Lemmata 3 and 5, then the corresponding $A_i(z)$ with $1 \le i \le m$ will have
$A_i(0) = 0$, and the $m$th component $a_m^{(i)}$ of $a_i$ is thus always $a_m^{(i)} = 0$. In other words, using the root-exchange property, we have the means to reduce the $(m+1) \times (m+1)$ matrix problem to an $m \times m$ problem. This property could potentially be used for order reduction of polynomial root-finding problems.

Unfortunately, in general, the matrix reverse decompositions are non-unique, and we can find several matrices $A$ and $C$ that satisfy $\hat{A}^T R \hat{A}^{\#} = I$ and $\hat{C}^T R^{-1} \hat{C}^{\#} = I$. However, if we constrain one root, e.g. the zero root in the case of conventional LP, then the decomposition becomes unique. (We have then assumed that $C$ is constrained to complex Vandermonde matrices and $RA = C$.) Moreover, the question of finding the decomposition matrices without first solving for the roots of the polynomial $A_0(z)$ remains open. On the other hand, assuming that we can, with some convenient criterion, constrain $C$ to Vandermonde matrices, it could be possible to use this decomposition for iterative root-finding. Such a root-finding algorithm could be applied to any polynomial with distinct roots inside the unit circle, since $R a_0 = [1, 0, \ldots, 0]^T$ uniquely defines $R$.

The presented theory can be partially generalised to the infinite-dimensional matrices of functional analysis. Then the Toeplitz and Vandermonde matrices $R$ and $C$ become Toeplitz (bi-infinite) and Vandermonde (infinite or bi-infinite) operators, and matrix $A$ is zero-extended into infinity (infinite or bi-infinite). This extension of the Toeplitz matrix implies (when $\xi = 0$) that we introduce new values of $R_{ij}$ corresponding to a zero reflection-coefficient value. Consequently, the extension is quite elegant, since it does not introduce any new information to the problem. In addition, this is a sufficient criterion to make the decomposition unique. However, the row-reversal operation is not as easily transferred to the infinite-dimensional case, since it requires taking elements starting from infinity and making them the first elements [8]. Note that the $(n+k) \times n$ zero extension $\tilde{A}$ of
an $n \times n$ matrix $A$ is not equivalent to $A$ in the reflection-decomposition sense. In other words, since $A^T R A^{\#} = I$, we obtain
$$\tilde{A}^T \tilde{R} \tilde{A}^{\#} = \tilde{A}^T \tilde{C}^{\#} = D^k A^T C^{\#} = D^k,$$
where $\tilde{C}$ is the extension of $C$ (which is also a Vandermonde matrix), $D$ is a diagonal matrix with the exponents of $C$, and we have assumed that $R$ has been extended with zero reflection coefficients. This implies that we have a description of the autocorrelation matrix very much like the Krylov subspace [6]. The author believes that there is some potential in this property that could lead to new root-finding algorithms.

In our earlier work [4, 5], we have shown that the polynomials $A_i(z)$ whose coefficients solve Eq. 5 with real $\xi_i$, $|\xi_i| < 1$, form a convex space of polynomials with the minimum-phase property. The root-exchange property presented in this paper is compatible with the convex-space formulation only as long as the exchanged roots are real. The convexity property can be used, for example, in interpolation between polynomials when optimising the model by a frequency-domain criterion. Currently, however, the author has not found simple exchange rules for complex conjugate root pairs, other than those on the unit circle.

Roots of the presented models which lie on the unit circle must always be distinct. This can easily be seen using the root-exchange rule, by exchanging one of the multiple zeros with a distinct zero. The obtained model would thus have a zero on the diagonal of matrix $D$ (in Lemma 5), and $R$ could not be positive definite. This property could be used for the creation of polynomial pairs with interlacing zeros on the unit circle. Recall that such a property is useful in the creation of stable predictors from symmetric/antisymmetric polynomial pairs [9].

While the results of this paper do not present any straightforward applications, at least as far as the author is aware, they still offer theoretical insight into the properties of constraints of linear predictive models, as well as the properties of Toeplitz matrices and their relation to
Vandermonde matrices.

5. REFERENCES

[1] A. Härmä, "Linear predictive coding with modified filter structures," IEEE Trans. Speech Audio Proc., vol. 9, no. 8, Nov.
[2] J. Makhoul, "Linear prediction: A tutorial review," Proc. IEEE, vol. 63, no. 5, Apr.
[3] T. Söderström and P. Stoica, System Identification, Prentice Hall, New York.
[4] T. Bäckström and P. Alku, "On the stability of constrained linear predictive models," in Proc. IEEE Acoustics, Speech, and Signal Processing (ICASSP '03), Hong Kong, Apr. 2003, vol. 6.
[5] T. Bäckström and P. Alku, "A constrained linear predictive model with the minimum-phase property," accepted to Signal Processing.
[6] G. Golub and C. van Loan, Matrix Computations, The Johns Hopkins University Press, London.
[7] H. W. Schüssler, "A stability theorem for discrete systems," IEEE Trans. Acoust. Speech Signal Proc., vol. ASSP-24, pp. 87–89, Feb.
[8] E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley & Sons, Inc., New York.
[9] F. K. Soong and B.-H. Juang, "Line spectrum pair (LSP) and speech data compression," in Proc. IEEE Acoustics, Speech, and Signal Processing (ICASSP '84), San Diego, CA, Mar. 1984.
More information1 Review of Least Squares Solutions to Overdetermined Systems
cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares
More informationInner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 34 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
More information1 Introduction. 2 Matrices: Definition. Matrix Algebra. Hervé Abdi Lynne J. Williams
In Neil Salkind (Ed.), Encyclopedia of Research Design. Thousand Oaks, CA: Sage. 00 Matrix Algebra Hervé Abdi Lynne J. Williams Introduction Sylvester developed the modern concept of matrices in the 9th
More information2.1: MATRIX OPERATIONS
.: MATRIX OPERATIONS What are diagonal entries and the main diagonal of a matrix? What is a diagonal matrix? When are matrices equal? Scalar Multiplication 45 Matrix Addition Theorem (pg 0) Let A, B, and
More informationISOMETRIES OF R n KEITH CONRAD
ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x
More informationChapter 8. Matrices II: inverses. 8.1 What is an inverse?
Chapter 8 Matrices II: inverses We have learnt how to add subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More informationApplied Linear Algebra I Review page 1
Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties
More informationThese axioms must hold for all vectors ū, v, and w in V and all scalars c and d.
DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms
More informationThe determinant of a skewsymmetric matrix is a square. This can be seen in small cases by direct calculation: 0 a. 12 a. a 13 a 24 a 14 a 23 a 14
4 Symplectic groups In this and the next two sections, we begin the study of the groups preserving reflexive sesquilinear forms or quadratic forms. We begin with the symplectic groups, associated with
More informationThe Solution of Linear Simultaneous Equations
Appendix A The Solution of Linear Simultaneous Equations Circuit analysis frequently involves the solution of linear simultaneous equations. Our purpose here is to review the use of determinants to solve
More information6. Cholesky factorization
6. Cholesky factorization EE103 (Fall 201112) triangular matrices forward and backward substitution the Cholesky factorization solving Ax = b with A positive definite inverse of a positive definite matrix
More informationUsing the Singular Value Decomposition
Using the Singular Value Decomposition Emmett J. Ientilucci Chester F. Carlson Center for Imaging Science Rochester Institute of Technology emmett@cis.rit.edu May 9, 003 Abstract This report introduces
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More information10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES
58 CHAPTER NUMERICAL METHODS. POWER METHOD FOR APPROXIMATING EIGENVALUES In Chapter 7 you saw that the eigenvalues of an n n matrix A are obtained by solving its characteristic equation n c nn c nn...
More informationWe shall turn our attention to solving linear systems of equations. Ax = b
59 Linear Algebra We shall turn our attention to solving linear systems of equations Ax = b where A R m n, x R n, and b R m. We already saw examples of methods that required the solution of a linear system
More informationAdvanced Signal Processing and Digital Noise Reduction
Advanced Signal Processing and Digital Noise Reduction Saeed V. Vaseghi Queen's University of Belfast UK WILEY HTEUBNER A Partnership between John Wiley & Sons and B. G. Teubner Publishers Chichester New
More informationTorgerson s Classical MDS derivation: 1: Determining Coordinates from Euclidean Distances
Torgerson s Classical MDS derivation: 1: Determining Coordinates from Euclidean Distances It is possible to construct a matrix X of Cartesian coordinates of points in Euclidean space when we know the Euclidean
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationPolynomials and the Fast Fourier Transform (FFT) Battle Plan
Polynomials and the Fast Fourier Transform (FFT) Algorithm Design and Analysis (Wee 7) 1 Polynomials Battle Plan Algorithms to add, multiply and evaluate polynomials Coefficient and pointvalue representation
More informationPresentation 3: Eigenvalues and Eigenvectors of a Matrix
Colleen Kirksey, Beth Van Schoyck, Dennis Bowers MATH 280: Problem Solving November 18, 2011 Presentation 3: Eigenvalues and Eigenvectors of a Matrix Order of Presentation: 1. Definitions of Eigenvalues
More informationRegression III: Advanced Methods
Lecture 5: Linear leastsquares Regression III: Advanced Methods William G. Jacoby Department of Political Science Michigan State University http://polisci.msu.edu/jacoby/icpsr/regress3 Simple Linear Regression
More informationIdeal Class Group and Units
Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals
More informationMAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =
MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the
More informationNotes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 201112 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
More informationMore than you wanted to know about quadratic forms
CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences More than you wanted to know about quadratic forms KC Border Contents 1 Quadratic forms 1 1.1 Quadratic forms on the unit
More informationLECTURE 1 I. Inverse matrices We return now to the problem of solving linear equations. Recall that we are trying to find x such that IA = A
LECTURE I. Inverse matrices We return now to the problem of solving linear equations. Recall that we are trying to find such that A = y. Recall: there is a matri I such that for all R n. It follows that
More informationElementary GradientBased Parameter Estimation
Elementary GradientBased Parameter Estimation Julius O. Smith III Center for Computer Research in Music and Acoustics (CCRMA Department of Music, Stanford University, Stanford, California 94305 USA Abstract
More information8. Linear leastsquares
8. Linear leastsquares EE13 (Fall 21112) definition examples and applications solution of a leastsquares problem, normal equations 81 Definition overdetermined linear equations if b range(a), cannot
More informationAdaptive Online Gradient Descent
Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650
More informationRESULTANT AND DISCRIMINANT OF POLYNOMIALS
RESULTANT AND DISCRIMINANT OF POLYNOMIALS SVANTE JANSON Abstract. This is a collection of classical results about resultants and discriminants for polynomials, compiled mainly for my own use. All results
More informationBackground: State Estimation
State Estimation Cyber Security of the Smart Grid Dr. Deepa Kundur Background: State Estimation University of Toronto Dr. Deepa Kundur (University of Toronto) Cyber Security of the Smart Grid 1 / 81 Dr.
More informationInverses. Stephen Boyd. EE103 Stanford University. October 27, 2015
Inverses Stephen Boyd EE103 Stanford University October 27, 2015 Outline Left and right inverses Inverse Solving linear equations Examples Pseudoinverse Left and right inverses 2 Left inverses a number
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More informationSubsets of Euclidean domains possessing a unique division algorithm
Subsets of Euclidean domains possessing a unique division algorithm Andrew D. Lewis 2009/03/16 Abstract Subsets of a Euclidean domain are characterised with the following objectives: (1) ensuring uniqueness
More informationMatrices and Polynomials
APPENDIX 9 Matrices and Polynomials he Multiplication of Polynomials Let α(z) =α 0 +α 1 z+α 2 z 2 + α p z p and y(z) =y 0 +y 1 z+y 2 z 2 + y n z n be two polynomials of degrees p and n respectively. hen,
More informationEnhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm
1 Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm Hani Mehrpouyan, Student Member, IEEE, Department of Electrical and Computer Engineering Queen s University, Kingston, Ontario,
More information