This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

ROOT-EXCHANGE PROPERTY OF CONSTRAINED LINEAR PREDICTIVE MODELS

Tom Bäckström

Helsinki University of Technology (HUT), Laboratory of Acoustics and Audio Signal Processing, P.O. Box 3000, FIN-02015 HUT, Finland

ABSTRACT

In recent works, we have studied linear predictive models constrained by time-domain filters. In the present study, we examine the one-dimensional case in more detail. Firstly, we obtain root-exchange properties between the roots of an all-pole model and the corresponding constraints. Secondly, using the root-exchange property we can construct a novel matrix decomposition $A^T R A^\# = I$, where $R$ is a real positive definite symmetric Toeplitz matrix, superscript $\#$ signifies reversal of rows and $I$ is the identity matrix. In addition, there exists an inverse matrix decomposition $C^T R^{-1} C^\# = I$, where $C$ is a Vandermonde matrix. Potential applications are discussed.

1. INTRODUCTION

Let us define the residual $e_n$ of a linear predictive model of order $m$ as

$$e_n = x_n + \sum_{i=0}^{m-1} h_i \tilde{x}_{n-i}, \qquad (1)$$

where $x_n$ is the wide-sense stationary input signal, $h_i$ ($0 \le i \le m-1$) are the model parameters and $\tilde{x}_n = c_n * x_n$, where $c_n = \sum_{k=0}^{l} c_k \delta_{n-k}$ is the impulse response of a causal FIR filter. In the literature, models of this type are sometimes named generalised linear predictive models [1]. The transfer function of the predictor can be written as

$$A(z) = 1 + \sum_{i=0}^{m-1} \sum_{k=0}^{l} h_i c_k z^{-i-k} = 1 + C(z)H(z), \qquad (2)$$

where $C(z)$ and $H(z)$ are the Z-transforms of $c_n$ and $h_n$, respectively. In contrast to conventional linear prediction (LP) (e.g. [2]), this model is constrained, since it defines a predictor of order $m+l-1$ with only $m$ model parameters.

Note that according to Eq. 2, $x_n$ is predicted from the samples of $\tilde{x}_n$ from the current to the $(m-1)$'th delayed sample. One would therefore easily be led to believe that the predictor is non-causal. Fortunately, however, if the FIR filter is non-trivial (i.e. $c_0 \ne 0$ and at least one of the coefficients $c_k$ for $k \ge 1$ is non-zero), then the residual $e_n$ can be determined, since its computation (Eq. 1) contains only terms $x_{n-i}$ with $0 \le i \le m+l-1$, and the optimisation problem is unambiguous. The transfer function of the predictor obtained will therefore have a coefficient of $z^0$ that is generally not equal to one.

In matrix notation, Eq. 1 becomes

$$e_n = b^T x + h^T C^T x, \qquad (3)$$

where $x = [x_0, \dots, x_{m+l-1}]^T$ is the input signal, $h = [h_0, \dots, h_{m-1}]^T$ the parameter vector and $b = [1, 0, \dots, 0]^T$. The convolution matrix $C \in \mathbb{R}^{(m+l) \times m}$ is defined such that its elements are $C_{ij} = c_{i-j}$. Straightforward minimisation of the expected value $E[\cdot]$ of the squared residual, $\partial E[e_n^2] / \partial h = 0$, yields

$$C^T R C h = -C^T R b,$$

where $R = E[x x^T]$ is the autocorrelation matrix. This solution is unique, since $R$ is positive definite and $C$ of full rank; the solution is thus the global minimum.

Equivalently, the minimum of the residual can be found by defining $a = b + Ch$, which yields $E[e_n^2] = a^T R a$. Since $a - b = Ch$, and $a - b$ is therefore in the column space of $C$, a suitable constraint is $C_0^T (a - b) = 0$, provided that $b$ is not in the null space of $C$. The null space $C_0$ of $C$ is defined by $C^T C_0 = 0$, where $C_0 \in \mathbb{R}^{(m+l) \times l}$. The objective function is then

$$\eta(a, g) = a^T R a - g^T C_0^T (a - b), \qquad (4)$$

and the minimum is at $R a = C_0 g$, where the Lagrange multiplier vector $g = [\gamma_1, \dots, \gamma_l]^T$ can be solved from the equation $C_0^T R^{-1} C_0 g = C_0^T b$ [3].

In our earlier work, we have shown that $A(z)$ is stable if the zeros $\xi_i$ ($1 \le i \le l$) of $C(z)$ are real, $|\xi_i| > 1$, and $\gamma_i > 0$ ($1 \le i \le l$) [4, 5]. In this work, however, we will concentrate on the properties of this model for $l = 1$.
The results presented in this work are mostly of theoretical significance and cannot be directly applied to real-world problems. Nevertheless, insight into the relation between Toeplitz matrices and the roots of the corresponding polynomials could improve root-finding algorithms, as well as provide the means to control the stability of arbitrary predictors.
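To make the construction above concrete, the following minimal NumPy sketch solves the normal equations $C^T R C h = -C^T R b$ and forms $a = b + Ch$. It is an illustration only; the order $m$, the constraint length $l$, the FIR coefficients $c$ and the test signal are all arbitrary assumptions, not values or code from the paper.

```python
# Minimal sketch of the constrained predictor, Eqs. (1)-(3): all parameter
# values below (m, l, the FIR constraint c, the test signal) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, l = 4, 1
c = np.array([1.0, -0.9])                  # causal FIR constraint, c_0 != 0

# Biased autocorrelation estimate of a random test signal gives a
# positive definite symmetric Toeplitz matrix R = E[x x^T].
x = rng.standard_normal(4096)
n = m + l
r = np.array([x[:x.size - k] @ x[k:] for k in range(n)]) / x.size
R = r[np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])]

# Convolution matrix C in R^{(m+l) x m}: column j is c delayed by j samples.
C = np.zeros((n, m))
for j in range(m):
    C[j:j + l + 1, j] = c

b = np.zeros(n)
b[0] = 1.0

# Normal equations C^T R C h = -C^T R b, then a = b + C h.
h = np.linalg.solve(C.T @ R @ C, -(C.T @ R @ b))
a = b + C @ h
print("a =", a)                            # a[0] is generally not 1
print("error power a^T R a =", a @ R @ a)
```

As the first printed line illustrates, the coefficient of $z^0$ in the resulting predictor is generally not equal to one, in line with the remark above.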

2. MINIMUM-PHASE PROPERTY

Definition 1. In this article we adopt the following notation:

- $d$: a column vector $d = [d_0, \dots, d_m]^T$
- $d^\#$: vector $d$ with its rows reversed
- $d^\pm$: the symmetric and antisymmetric parts of vector $d$, that is, $d^\pm = d \pm d^\#$
- $D(z)$: the polynomial corresponding to the Z-transform of vector $d$
- $\mathcal{T}$: the set of all real positive definite symmetric Toeplitz matrices

Lemma 1 (Null space). Let the polynomial $C(z)$ be defined as in Eq. 2 and the corresponding convolution matrix $C$ as in Eq. 3. If $C(\xi_i) = 0$, then the vector $c_i = [1, \xi_i, \dots, \xi_i^{m+l-1}]^T$ is in the null space of $C$. Consequently, the complete null space $C_0$ of $C$ is the set of vectors $c_i$, where the $\xi_i$ ($1 \le i \le l$) are the zeros of $C(z)$, provided that the $\xi_i$ are distinct.

Matrices of the form $C_0$ (in Lemma 1) are known as Vandermonde matrices [6]. In the following, we will drop the subscript 0 and by $C$ denote all Vandermonde matrices that appear.

Lemma 2 (Minimum-phase constraint). Let $R \in \mathcal{T}$ and let the predictor $a$ be the solution to ($l = 1$)

$$R a = [1, \xi_i, \xi_i^2, \dots, \xi_i^m]^T. \qquad (5)$$

Predictor $a$ is minimum-phase if $\xi_i \in \mathbb{C}$, $|\xi_i| < 1$. Further, for $|\xi_i| = 1$ predictor $a$ will have its roots on the unit circle.

Proofs of these lemmata were presented in [4, 5].

3. THE ROOT-EXCHANGE PROPERTY

Lemma 3 (Root exchange). Let $a_0$ be a solution to Eq. 5 with $\xi_0 \in \mathbb{C}$, $|\xi_0| < 1$, and let $\xi_i$ ($1 \le i \le m$) be the zeros of the polynomial $A_0(z)$ (with coefficients $a_0$). Then the polynomials $A_i(z)$ ($1 \le i \le m$), with coefficients $a_i$ solving Eq. 5 using $\xi_i$, have the zeros $\xi_k$ for $i \ne k$. In other words, exchanging a root $\xi_k$ of $A_i(z)$ for $\xi_i$ corresponds, in Eq. 5, to replacing $\xi_i$ with $\xi_k$. Then

$$A_i(z)\,(1 - \xi_i z^{-1}) = \zeta_{ik}\, A_k(z)\,(1 - \xi_k z^{-1}),$$

where $\zeta_{ik}$ is some scalar that depends on $i$ and $k$. The root exchange of one root is illustrated in Fig. 1.

[Fig. 1. An example of the root-exchange property of constrained LP models $A_i(z)$ ($m = 8$): the roots of $A_1$ and $A_2$ plotted in the complex plane (real vs. imaginary part). Coefficient vectors $a_i$ of $A_i(z)$ solve $R a_i = c_i$, where $c_i = (\xi_i^k)_{k=0}^{m}$ and $\xi_i$ is a root of $A_j(z)$ for $i \ne j$.]

Proof. The polynomial $A_i(z)$ can be factored as $A_i(z) = (1 - \xi_k z^{-1}) B(z)$, or equivalently,

$$a_i = B \begin{bmatrix} 1 \\ -\xi_k \end{bmatrix}, \qquad B = \begin{bmatrix} b_0 & 0 \\ b_1 & b_0 \\ \vdots & \vdots \\ 0 & b_{m-1} \end{bmatrix}. \qquad (6)$$

From $R a_i = c_i$ we obtain

$$c_i = R B \begin{bmatrix} 1 \\ -\xi_k \end{bmatrix} = \begin{bmatrix} d_0 & d_{-1} \\ d_1 & d_0 \\ \vdots & \vdots \\ d_m & d_{m-1} \end{bmatrix} \begin{bmatrix} 1 \\ -\xi_k \end{bmatrix}, \qquad (7)$$

which defines the coefficients $d_i$ uniquely. From Eq. 7 we readily obtain the relation

$$\begin{bmatrix} d_0 & d_{-1} \\ d_1 & d_0 \\ \vdots & \vdots \\ d_m & d_{m-1} \end{bmatrix} \begin{bmatrix} 1 \\ -\xi_i \end{bmatrix} = \eta\, c_k, \qquad (8)$$

where $\eta$ is a positive constant. Recall that matrix $B$ corresponds to $a_i$ deconvolved by $(1 - \xi_k z^{-1})$, and Eq. 8 corresponds to convolution with the other term $(1 - \xi_i z^{-1})$. Since the result on the right-hand side of Eq. 8 is $\eta c_k$, it must be equal to $\eta c_k = \eta R a_k$. The corresponding polynomials $A_i(z)$ and $A_k(z)$ are thus equal except for a scaling coefficient and an exchanged zero, $\xi_k$ for $\xi_i$. This concludes the proof.

Lemma 4 (Symmetric root exchange). Let $a_0$ be a solution to Eq. 5 with $\xi_0 \in \mathbb{R}$. Then for the symmetric and antisymmetric parts $a_0^\pm$ we have $R a_0^\pm = c_0^\pm$. Let $\xi_i$ with $1 \le i \le m$ be the roots of $A_0(z)$. Then

$$R a_i^\pm = c_i^\pm \qquad (9)$$

for $0 \le i \le m$. In other words, $A_i^\pm(z)$ has zeros $\xi_k$ with $i \ne k$, and $|\xi_i| = 1$ for $i \ne 0$.
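Lemmata 2 and 3 are easy to probe numerically. The sketch below is a hedged illustration under assumed test data (a random autocorrelation and $\xi_0 = 0.3$, both arbitrary choices): it solves Eq. 5, checks that $A_0(z)$ is minimum-phase, and verifies that exchanging $\xi_0$ for a root $\xi_k$ of $A_0(z)$ yields a polynomial with $\xi_0$ among its roots and the remaining roots unchanged.

```python
# Numerical probe of Lemmata 2-3: an illustration under assumed test data
# (random autocorrelation, xi_0 = 0.3); not code from the paper.
import numpy as np

rng = np.random.default_rng(1)
m = 6
x = rng.standard_normal(8192)
r = np.array([x[:x.size - k] @ x[k:] for k in range(m + 1)]) / x.size
R = r[np.abs(np.arange(m + 1)[:, None] - np.arange(m + 1)[None, :])]

def solve_eq5(xi):
    """Solve Eq. (5): R a = [1, xi, xi^2, ..., xi^m]^T."""
    return np.linalg.solve(R, xi ** np.arange(m + 1))

xi0 = 0.3
a0 = solve_eq5(xi0)
roots0 = np.roots(a0)
print("max |root| of A_0:", np.abs(roots0).max())    # < 1 by Lemma 2

# Exchange xi_0 for one root xi_k of A_0: A_k should gain the root xi_0
# and keep the remaining roots of A_0 (Lemma 3).
ak = solve_eq5(roots0[0])
print(np.sort_complex(np.roots(ak)))
print(np.sort_complex(np.append(np.delete(roots0, 0), xi0)))
```

The two printed root sets should agree to numerical precision, which is exactly the exchange described in Lemma 3.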

Lemma 5 (Reverse decomposition). For a real, positive definite and symmetric Toeplitz $(m+1) \times (m+1)$ matrix $R$, there exists a decomposition matrix $A \in \mathbb{C}^{(m+1) \times (m+1)}$ such that $A^T R A^\# = I$, where $I$ is the identity matrix.

Note that $A$ is not unique, and that we indeed use the transpose operator $T$ and not the complex conjugate transpose $H$, also known as the Hermitian transpose. Further, the equation $A^T R A^\# = I$ is equivalent to $(A^\#)^T R A = I$, since $R$ is symmetric.

[Fig. 2. An example of the symmetric root-exchange property of constrained LP models $A_i^+(z)$ ($m = 8$): the roots of $A_1^+$ and $A_2^+$ plotted in the complex plane (real vs. imaginary part). Coefficient vectors $a_i^+$ of $A_i^+(z)$ solve $R a_i^+ = c_i^+$, where $c_i^+$ is the symmetric part of $c_i = (\xi_i^k)_{k=0}^{m}$ and $\xi_i$ is a root of $A_j(z)$ for $i \ne j$.]

The root exchange of a conjugate pair of roots on the unit circle is illustrated in Fig. 2.

Proof (of Lemma 4). Since the predictor $a_0$ is minimum-phase due to Lemma 2, its symmetric and antisymmetric parts $a_0^\pm$ will have their roots on the unit circle [7, 9]. By factoring a conjugate root pair $\xi_i$ and $\xi_i^*$ ($1 \le i \le m$) from $a_0^\pm$, that is, $A_0^\pm(z) = (1 - (\xi_i + \xi_i^*) z^{-1} + z^{-2}) B(z)$, we can readily prove the root-exchange property similarly as in the proof of Lemma 3.

Note that in the proof above, $\xi_0$ in $c_0^\pm$ will appear in $a_i^\pm$ ($1 \le i \le m$) as a root pair $\xi_0$ and $\xi_0^{-1}$. Especially, when $\xi_0 \to 0$, the first and last coefficients of $a_i^\pm$ will tend to zero. Further, note that by proper scaling we can make $a_i^\pm$ and $c_i^\pm$ real. This is a significant advantage in the reduction of complexity (between the root-exchange and symmetric root-exchange properties) if we are to apply root exchange in a computational algorithm.

An interesting consequence of Lemma 4 is that $a_i^\pm$ always has distinct roots. To see this, let $\xi_k$ be a double root of $A_i^\pm(z)$. Then $A_k^\pm(z)$ will also have the root $\xi_k$ by the root-exchange rule. It follows that $0 = A_k^\pm(\xi_k) = c_k^{\pm T} a_k^\pm = a_k^{\pm T} R a_k^\pm$. This is a contradiction, since $R$ is positive definite, and the assumption that $\xi_k$ is a double root is therefore defective. Moreover, if we consider the family of equations Eq. 9 with $\xi_0 \in (-1, +1)$, none of these models will have overlapping roots. In other words, the roots follow monotonic paths on the unit circle as a function of $\xi_0$.

Proof (of Lemma 5). Let the vectors $a_i$, $\xi_i$ and $c_i$ be defined as in Lemma 3. Further, let the matrices $A$ and $C$ consist of the column vectors $a_i$ and $c_i$, respectively, such that $RA = C$. We know that the roots of $A_i(z)$ are $\xi_k$ with $i \ne k$ and $0 \le i, k \le m$. Consequently, $A_i(\xi_k) = a_i^T c_k^\# = 0$ for $i \ne k$, and thus $A^T C^\# = D$, where $D$ is a diagonal matrix with positive elements. It follows that $D = A^T C^\# = A^T R A^\#$. Since the coefficients of $D$ are non-zero, we can, with suitable scaling of the columns of $A$, find an $\hat{A}$ such that $\hat{A}^T R \hat{A}^\# = I$.

As a corollary to Lemma 5, we can readily see that there exists a matrix decomposition for the inverse of $R$ such that $\hat{C}^T R^{-1} \hat{C}^\# = I$. The inverse $R^{-1}$ exists since $R$ is strictly positive definite.

While Lemma 5 uses Lemma 3, a similar decomposition can be constructed using the result of Lemma 4. Then both the symmetric and antisymmetric parts $a_i^\pm$ and $c_i^\pm$ have to be included in the matrices $A$ and $C$, respectively. The advantage of this approach is that the matrices $A$ and $C$ can then be scaled to be real, while the construction above produces matrices $A$ and $C$ that are generally complex.
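A direct numerical rendering of the proof of Lemma 5 (again a sketch under assumed test data, with $\xi_0 = 0.2$ chosen arbitrarily): build $C$ from $\xi_0$ and the roots of $A_0(z)$, solve $RA = C$, check that $A^T C^\#$ is diagonal, and rescale the columns to obtain $\hat{A}^T R \hat{A}^\# = I$.

```python
# Sketch of the reverse decomposition of Lemma 5, following its proof;
# the test matrix R and xi_0 = 0.2 are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(2)
m = 5
x = rng.standard_normal(8192)
r = np.array([x[:x.size - k] @ x[k:] for k in range(m + 1)]) / x.size
R = r[np.abs(np.arange(m + 1)[:, None] - np.arange(m + 1)[None, :])]

xi0 = 0.2
a0 = np.linalg.solve(R, xi0 ** np.arange(m + 1))
xis = np.append(np.roots(a0), xi0)       # xi_0 and the roots of A_0 (distinct, non-zero)

C = np.vander(xis, m + 1, increasing=True).T   # columns c_i = (xi_i^k)_k, Vandermonde
A = np.linalg.solve(R, C)                      # R A = C

D = A.T @ C[::-1, :]                     # A^T C^# = A^T R A^#: diagonal per Lemma 5
offdiag = D - np.diag(np.diag(D))
print("max off-diagonal magnitude:", np.abs(offdiag).max())

Ahat = A / np.sqrt(np.diag(D))           # scale columns so that Ahat^T R Ahat^# = I
print(np.allclose(Ahat.T @ R @ Ahat[::-1, :], np.eye(m + 1), atol=1e-6))
```

Note that $A$ and $C$ come out complex here, which is the motivation given above for the real-scalable variant built from the symmetric and antisymmetric parts.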
4. DISCUSSION AND SUMMARY

We have presented root-exchange properties between the all-pole form of the constrained linear predictive model and the corresponding constraint. Further, as a corollary, we obtained novel matrix decompositions for a real positive definite symmetric Toeplitz matrix and its inverse, the latter with a Vandermonde matrix.

Intuitively, the root-exchange property can be explained by the fact that the constraint $c_i$ is equivalent to requiring that $A_i(z)$ is strictly positive at $\xi_i$, that is, $A_i(\xi_i) = a_i^T c_i = a_i^T R a_i > 0$. Therefore, $A_i(z)$ cannot have a zero at $\xi_i$. When a zero $\xi_k$ is exchanged for $\xi_i$, then $A_k(\xi_k) > 0$ and it is no surprise that $A_k(\xi_i) = 0$.

The conventional LP model is a special case of constrained linear predictive models, whereby Eq. 5 with $\xi_0 = 0$ becomes $R a_0 = [1, 0, \dots, 0]^T$. If the vectors $a_i$ are defined through the root-exchange property as in Lemmata 3 and 5, then the corresponding $A_i(z)$ with $1 \le i \le m$ will have $A_i(0) = 0$, and the $m$'th component $a_m^{(i)}$ of $a_i$ is thus always $a_m^{(i)} = 0$. In other words, using the root-exchange property, we have the means to reduce the $(m+1) \times (m+1)$ matrix problem to an $m \times m$ problem. This property could potentially be used for order reduction in polynomial root-finding.
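This special case is simple to demonstrate. In the sketch below (using the same kind of assumed toy autocorrelation as in the earlier sketches), $a_0$ is the conventional LP solution for $\xi_0 = 0$, and each exchanged model $a_i$ indeed has a vanishing last coefficient, i.e. a root at the origin.

```python
# Conventional LP as the xi_0 = 0 special case of Eq. (5); the test data
# are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
m = 5
x = rng.standard_normal(8192)
r = np.array([x[:x.size - k] @ x[k:] for k in range(m + 1)]) / x.size
R = r[np.abs(np.arange(m + 1)[:, None] - np.arange(m + 1)[None, :])]

e1 = np.zeros(m + 1)
e1[0] = 1.0
a0 = np.linalg.solve(R, e1)              # conventional linear prediction

# Exchanging the zero root into any other model A_i forces A_i(0) = 0,
# i.e. a vanishing last coefficient: an m x m problem in disguise.
for xi in np.roots(a0):
    ai = np.linalg.solve(R, xi ** np.arange(m + 1))
    print(abs(ai[-1]))                   # ~ 0 for every exchanged model
```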

Unfortunately, in general, the matrix reverse decompositions are non-unique, and we can find several matrices $A$ and $C$ that satisfy $\hat{A}^T R \hat{A}^\# = I$ and $\hat{C}^T R^{-1} \hat{C}^\# = I$. However, if we constrain one root, e.g. the zero root in the case of conventional LP, then the decomposition becomes unique. (We have then assumed that $C$ is constrained to complex Vandermonde matrices and $RA = C$.) Moreover, the question of finding the decomposition matrices without first solving the roots of the polynomial $A_0(z)$ remains open. On the other hand, assuming that we can, with some convenient criterion, constrain $C$ to Vandermonde matrices, it could be possible to use this decomposition for iterative root-finding. Such a root-finding algorithm could be applied to any polynomial with distinct roots inside the unit circle, since $R a_0 = [1, 0, \dots, 0]^T$ uniquely defines $R$ (a sketch of this correspondence is given at the end of this section).

The presented theory can be partially generalised to the infinite-dimensional matrices of functional analysis. Then the Toeplitz and Vandermonde matrices $R$ and $C$ become Toeplitz (bi-infinite) and Vandermonde (infinite or bi-infinite) operators, and the matrix $A$ is zero-extended into infinity (infinite or bi-infinite). This extension of the Toeplitz matrix implies (when $\xi = 0$) that we introduce new values of $R_{ij}$ corresponding to zero-valued reflection coefficients. Consequently, the extension is quite elegant, since it does not introduce any new information to the problem. In addition, this is a sufficient criterion to make the decomposition unique. However, the row-reversal operation is not as easily transferred to the infinite-dimensional case, since it requires taking elements starting from infinity and making them the first elements [8].

Note that the $(n+k) \times n$ zero extension $\tilde{A}$ of the $n \times n$ matrix $A$ is not equivalent to $A$ in the reflection-decomposition sense. In other words, since $A^T R A^\# = I$, we obtain $\tilde{A}^T \tilde{R} \tilde{A}^\# = \tilde{A}^T \tilde{C}^\# = D_k \ne A^T C^\# = D$, where $\tilde{C}$ is the extension of $C$ (which is also a Vandermonde matrix), $D_k$ is a diagonal matrix containing the $k$'th exponents of the roots of $C$, and we have assumed that $R$ has been extended with zero reflection coefficients. This implies that we have a description of the autocorrelation matrix very much like a Krylov subspace [6]. The author believes that there is some potential in this property that could lead to new root-finding algorithms.

In our earlier work [4, 5], we have shown that the polynomials $A_i(z)$ whose coefficients solve Eq. 5 with real $|\xi_i| < 1$ form a convex space of polynomials with the minimum-phase property. The root-exchange property presented in this paper is compatible with the convex-space formulation only as long as the exchanged roots are real. The convexity property can be used, for example, in interpolation between polynomials when optimising the model by a frequency-domain criterion. Currently, however, the author has not found simple exchange rules for complex conjugate root pairs, other than those on the unit circle.

Roots of the presented models which lie on the unit circle must always be distinct. This can easily be seen using the root-exchange rule, by exchanging one of the multiple zeros with a distinct zero. The obtained model would thus have a zero on the diagonal of the matrix $D$ (in Lemma 5), and $R$ could not then be positive definite. This property could be used for the creation of polynomial pairs with interlacing zeros on the unit circle. Recall that such a property is useful in the creation of stable predictors from symmetric/antisymmetric polynomial pairs [9].
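The remark that $R a_0 = [1, 0, \dots, 0]^T$ uniquely defines $R$ for a given minimum-phase $A_0(z)$ can be illustrated by recovering $R$ as the autocorrelation matrix of the AR process $1/A_0(z)$. The FFT-based recovery below is one standard route and an assumption of this sketch, not a construction given in the paper.

```python
# Sketch: R a_0 = [1, 0, ..., 0]^T uniquely defines R for a minimum-phase
# A_0, since R is then the autocorrelation matrix of the AR process
# 1/A_0(z) driven by unit-variance white noise. The FFT-based recovery of
# the autocorrelation is an assumption of this sketch.
import numpy as np

a0 = np.array([1.0, -0.6, 0.3])          # minimum-phase by inspection
nfft = 1 << 14                           # dense spectral sampling
spectrum = 1.0 / np.abs(np.fft.rfft(a0, nfft)) ** 2
r = np.fft.irfft(spectrum)[:a0.size]     # autocorrelation of the AR process

R = r[np.abs(np.arange(a0.size)[:, None] - np.arange(a0.size)[None, :])]
print(R @ a0)                            # ~ [1, 0, 0]
```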
While the results of this paper do not present any straightforward applications, at least as far as the author is aware, they still offer theoretical insight into the properties of constraints of linear predictive models, as well as into the properties of Toeplitz matrices and their relation to Vandermonde matrices.

5. REFERENCES

[1] A. Härmä, "Linear predictive coding with modified filter structures," IEEE Trans. Speech Audio Proc., vol. 9, no. 8, November 2001.
[2] J. Makhoul, "Linear prediction: A tutorial review," Proc. IEEE, vol. 63, no. 5, April 1975.
[3] T. Söderström and P. Stoica, System Identification, Prentice Hall, New York, 1989.
[4] T. Bäckström and P. Alku, "On the stability of constrained linear predictive models," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, April 2003, vol. 6.
[5] T. Bäckström and P. Alku, "A constrained linear predictive model with the minimum-phase property," accepted to Signal Processing.
[6] G. Golub and C. van Loan, Matrix Computations, The Johns Hopkins University Press, London.
[7] H. W. Schüssler, "A stability theorem for discrete systems," IEEE Trans. Acoust. Speech Signal Proc., vol. ASSP-24, no. 1, pp. 87-89, February 1976.
[8] E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley & Sons, Inc., New York.
[9] F. K. Soong and B.-H. Juang, "Line spectrum pair (LSP) and speech data compression," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP'84), San Diego, CA, March 1984, vol. 1.
