A note on companion matrices




Linear Algebra and its Applications 372 (2003) 325-331
www.elsevier.com/locate/laa
doi:10.1016/S0024-3795(03)00548-2

A note on companion matrices

Miroslav Fiedler
Academy of Sciences of the Czech Republic, Institute of Computer Science, Pod vodárenskou věží 2, 182 07 Praha 8, Czech Republic
E-mail address: fiedler@math.cas.cz (M. Fiedler). Research supported by grant A030003.

Received 8 October 2002; accepted 2 April 2003
Submitted by H. Schneider

Abstract

We show that the usual companion matrix of a polynomial of degree n can be factored into a product of n matrices, n - 1 of them being the identity matrix in which a 2 x 2 identity submatrix in two consecutive rows (and columns) is replaced by an appropriate 2 x 2 matrix, the remaining one being the identity matrix with the last entry replaced by a possibly different entry. By a certain similarity transformation we obtain a simple new companion matrix in a pentadiagonal form. Some generalizations are also possible.
© 2003 Elsevier Inc. All rights reserved.

AMS classification: 15A23; 15A57; 65F15

Keywords: Companion matrix; Characteristic polynomial; Pentadiagonal matrix; Zeros of polynomials

1. Introduction

Let

    p(x) = x^n + a_1 x^{n-1} + ... + a_{n-1} x + a_n                    (1)

be a polynomial with coefficients over an arbitrary field. As is well known, the matrix

        [ -a_1  -a_2  ...  -a_{n-1}  -a_n ]
        [   1     0   ...     0        0  ]
    A = [   0     1   ...     0        0  ]                             (2)
        [   :     :           :        :  ]
        [   0     0   ...     1        0  ]

has the property that det(xI - A) = p(x). The matrix A, or some of its modifications, is called the companion matrix of the polynomial p(x), since its characteristic polynomial is p(x). In the sequel we find another simple matrix Â which is similar to A and thus has the same property with respect to the polynomial p(x).

2. Results

We start with a simple lemma.

Lemma 2.1. For k = 1, ..., n - 1, denote by A_k the matrix

          [ I_{k-1}                ]
    A_k = [           C_k          ]                                    (3)
          [                I_{n-k-1} ]

where C_k is the 2 x 2 matrix

    C_k = [ -a_k  1 ]                                                   (4)
          [   1   0 ]

and by A_n the matrix

    A_n = diag{1, ..., 1, -a_n}.                                        (5)

Then A = A_1 A_2 ... A_{n-1} A_n.

Proof. Follows easily by induction: the product A_1 A_2 ... A_k has first row (-a_1, ..., -a_k, 1, 0, ..., 0), its rows 2, ..., k + 1 are the first k rows of I_n, and its remaining rows are the last n - k - 1 rows of I_n. Multiplying such a matrix by A_{k+1} replaces, in the first row, the entry 1 in position k + 1 by the pair -a_{k+1}, 1 in positions k + 1 and k + 2, and brings the remaining rows into the corresponding form with k replaced by k + 1. The final multiplication by A_n from (5) replaces the entry 1 in the last position of the first row by -a_n, which yields the matrix (2).

Lemma 2.2. All the matrices obtained as products A_{i_1} ... A_{i_n}, for some permutation (i_1, ..., i_n) of (1, ..., n), have the same spectrum, including multiplicities. They are even all similar.
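Both lemmas are easy to check numerically. The following is a minimal sketch using numpy; the coefficient values and the helper names `companion` and `A_k` are ours, chosen for illustration only.

```python
import numpy as np

# Numerical sanity check of Lemma 2.1 and Lemma 2.2 (arbitrary test coefficients).
a = [2.0, -3.0, 1.0, 4.0, -2.0]          # a_1, ..., a_n of p(x), here n = 5
n = len(a)

def companion(a):
    """The usual companion matrix (2): first row -a_1, ..., -a_n, subdiagonal ones."""
    n = len(a)
    A = np.zeros((n, n))
    A[0, :] = [-c for c in a]
    A[1:, :-1] = np.eye(n - 1)
    return A

def A_k(k, a):
    """The factor A_k of (3)-(5): identity with C_k = [[-a_k, 1], [1, 0]] in
    rows/columns k, k+1 (1-based) for k < n, and A_n = diag(1, ..., 1, -a_n)."""
    n = len(a)
    M = np.eye(n)
    if k < n:
        M[k-1:k+1, k-1:k+1] = [[-a[k-1], 1.0], [1.0, 0.0]]
    else:
        M[n-1, n-1] = -a[n-1]
    return M

A = companion(a)
prod = np.eye(n)
for k in range(1, n + 1):                # A = A_1 A_2 ... A_{n-1} A_n   (Lemma 2.1)
    prod = prod @ A_k(k, a)
assert np.allclose(A, prod)

reordered = np.eye(n)                    # any ordering of the factors keeps the
for k in [3, 1, 5, 2, 4]:                # characteristic polynomial     (Lemma 2.2)
    reordered = reordered @ A_k(k, a)
assert np.allclose(np.poly(reordered), np.poly(A))
```

Here `np.poly` returns the characteristic polynomial coefficients of a square matrix, so the last assertion compares the spectra of the two products including multiplicities.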

Proof. Clearly A_i A_k = A_k A_i if |i - k| > 1. This allows us to bring every such product A_{i_1} ... A_{i_n} to the form

    (A_{j_1} A_{j_1 - 1} ... A_1)(A_{j_2} A_{j_2 - 1} ... A_{j_1 + 1}) ... (A_n A_{n-1} ... A_{j_s})

with the property that in both permutations (i_1, ..., i_n) and (j_1, j_1 - 1, ..., 1, j_2, j_2 - 1, ..., j_1 + 1, ..., n, n - 1, ..., j_s) each pair (k, k + 1) is either ordered positively, i.e. k precedes k + 1, or negatively, i.e. k + 1 precedes k. Moreover, by a well known theorem [4, Theorem 1.3.20], if A and B are square matrices, then AB and BA have the same spectrum, including multiplicities; these matrices are even similar if one of the matrices A, B is non-singular. This allows us to rotate the permutations which determine the product, without changing the spectrum, in the sense that the permutation (i_1, ..., i_k, i_{k+1}, ..., i_n) can be replaced by (i_{k+1}, ..., i_n, i_1, ..., i_k). It thus suffices to prove that the matrix corresponding to any permutation can be obtained by these operations from the matrix A(1, 3, 5, ...; 2, 4, 6, ...)_n; here we denote by A(i_1, ..., i_n)_n the matrix A_{i_1} A_{i_2} ... A_{i_n}, and the subscript n means the number of elements in the permutation. Observe that all resulting matrices are similar, since at most the matrix A_n from (5) can be singular.

We prove the assertion by induction with respect to n. It is immediate that for n = 2 and n = 3 the assertion is true. Let thus n > 2 and suppose the assertion is true for n - 1. Let (i_1, ..., i_n) be a permutation. We distinguish two cases.

Case 1. The pair (1, 2) is positive. Then (1, 3, 5, ...; 2, 4, ...)_n ~ (3, 5, ...; 2, 4, ..., 1)_n (we use here the symbol ~ to express that the spectrum of A(1, 3, 5, ...; 2, 4, ...)_n is the same as the spectrum of A(3, 5, ...; 2, 4, ..., 1)_n). We now take the pair (1, 2) as one element and diminish the indices of all the remaining indices by one: thus (3, 5, ...; 2, 4, ..., 1)_n becomes (2, 4, ...; 1, 3, ...)_{n-1}, which is again equivalent to (1, 3, ...; 2, 4, ...)_{n-1} by the rotational operation. By the induction hypothesis, this permutation is equivalent to the permutation obtained from (i_1, ..., i_n) by putting together the pair (1, 2) (by moving 1 to the right until it hits 2) and diminishing all the remaining indices by one. Going back, we reconstruct the chain of operations which brings the permutation (1, 3, ...; 2, 4, ...)_n into (i_1, ..., i_n).

Case 2. The pair (1, 2) is negative. By rotation we can arrange that 1 will be the first element in the permutation. Of course, the pair (1, 2) will then be positive, and by Case 1 the assertion is correct.

Theorem 2.3. All matrices A_{i_1} ... A_{i_n}, for any permutation (i_1, ..., i_n), are companion matrices of p(x) and are similar to the matrix (2). In particular, this holds for the matrix Â = BC, where B is the matrix A_1 A_3 ... and C is the matrix A_2 A_4 ..., the A_i being the matrices from (3). The matrix B is the direct sum of the matrices C_1, C_3, etc.; the matrix C is the direct sum of the 1 x 1 identity matrix and the matrices C_2, C_4, etc. from (4). For n even, the matrix C ends with the block (-a_n); for n odd, B ends with the block (-a_n). The matrix Â is pentadiagonal and contains the same entries as the usual companion matrix (2).

Proof. The first part follows from Lemma 2.2, since by Lemma 2.1 the matrix A_1 A_2 ... A_n is the usual companion matrix. The last assertion follows from the fact that both matrices B and C are tridiagonal and from the comment in the following example.

Example 2.4. Let us present explicitly the matrices B, C and BC for n = 5 and n = 6 (the even and odd cases slightly differ). For n = 5,

    B = [ -a_1   1    0    0    0  ]     C = [ 1   0    0    0    0 ]
        [  1     0    0    0    0  ]         [ 0  -a_2  1    0    0 ]
        [  0     0   -a_3  1    0  ]         [ 0   1    0    0    0 ]
        [  0     0    1    0    0  ]         [ 0   0    0   -a_4  1 ]
        [  0     0    0    0  -a_5 ]         [ 0   0    0    1    0 ]

    BC = [ -a_1  -a_2   1    0    0 ]
         [  1     0     0    0    0 ]
         [  0    -a_3   0   -a_4  1 ]
         [  0     1     0    0    0 ]
         [  0     0     0   -a_5  0 ]

For n = 6,

    B = [ -a_1   1    0    0    0    0 ]     C = [ 1   0    0    0    0    0  ]
        [  1     0    0    0    0    0 ]         [ 0  -a_2  1    0    0    0  ]
        [  0     0   -a_3  1    0    0 ]         [ 0   1    0    0    0    0  ]
        [  0     0    1    0    0    0 ]         [ 0   0    0   -a_4  1    0  ]
        [  0     0    0    0   -a_5  1 ]         [ 0   0    0    1    0    0  ]
        [  0     0    0    0    1    0 ]         [ 0   0    0    0    0  -a_6 ]

    BC = [ -a_1  -a_2   1    0    0    0  ]
         [  1     0     0    0    0    0  ]
         [  0    -a_3   0   -a_4   1    0  ]
         [  0     1     0    0    0    0  ]
         [  0     0     0   -a_5   0  -a_6 ]
         [  0     0     0    1     0    0  ]

We see that (and this is true in general) the first two rows of BC contain non-zero entries in the first three columns only:

    [ -a_1  -a_2  1 ]
    [  1     0    0 ]

the following pairs of rows 2j - 1 and 2j contain non-zero entries only in the four columns with indices 2j - 2, ..., 2j + 1, and the submatrices

    [ -a_{2j-1}  0  -a_{2j}  1 ]
    [  1         0   0       0 ]

in these rows and columns contain two entries -a_i and two ones; the remaining entries are zero. The last two rows in the even case contain non-zeros only in the last three columns:

    [ -a_{n-1}  0  -a_n ]
    [  1        0   0   ]

the last row in the odd case is as in the example above: (0, ..., 0, -a_n, 0).

Remark 2.5. The matrix Â from Theorem 2.3 can be transformed, by a permutational similarity starting with the odd rows and columns and continuing with the even rows and columns, to the block form

    [ Z_11  Z_12 ]
    [ Z_21  Z_22 ]

where

    Z_11 = [ -a_1  1  0  ...  0 ]     Z_12 = [ -a_2   0     0    ...  0 ]
           [  0    0  1  ...  0 ]            [ -a_3  -a_4   0    ...  0 ]
           [  :          .    : ]            [  0    -a_5  -a_6  ...  0 ]
           [  0    0  0  ...  1 ]            [  :                 .   : ]
           [  0    0  0  ...  0 ]

    Z_21 = [ 1  0  ...  0 ]               Z_22 = [ 0  0  ...  0  0 ]
           [ 0  0  ...  0 ]                      [ 1  0  ...  0  0 ]
           [ :          : ]                      [ 0  1  ...  0  0 ]
           [ 0  0  ...  0 ]                      [ :      .      : ]
                                                 [ 0  0  ...  1  0 ]

If n is even, Z_12 and Z_21 are square. If n is odd, Z_12 is of size (n + 1)/2 x (n - 1)/2.
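The pentadiagonal shape of Â = BC, its characteristic polynomial, and the block form of Remark 2.5 can all be confirmed numerically. A sketch with numpy follows; the test coefficients are arbitrary and `fiedler_BC` is our name for the construction, not the paper's.

```python
import numpy as np
from functools import reduce

# Numerical check of Theorem 2.3, Example 2.4 and Remark 2.5.

def fiedler_BC(a):
    """Return B = A_1 A_3 ... and C = A_2 A_4 ... for a = [a_1, ..., a_n], n >= 4."""
    n = len(a)
    def A_k(k):
        M = np.eye(n)
        if k < n:
            M[k-1:k+1, k-1:k+1] = [[-a[k-1], 1.0], [1.0, 0.0]]
        else:
            M[n-1, n-1] = -a[n-1]
        return M
    B = reduce(np.matmul, [A_k(k) for k in range(1, n + 1, 2)])
    C = reduce(np.matmul, [A_k(k) for k in range(2, n + 1, 2)])
    return B, C

for n in (5, 6):
    a = [float(k + 1) * (-1) ** k for k in range(n)]   # e.g. [1, -2, 3, -4, 5]
    B, C = fiedler_BC(a)
    BC = B @ C
    i, j = np.indices(BC.shape)
    assert np.allclose(BC[abs(i - j) > 2], 0.0)        # BC is pentadiagonal
    assert np.allclose(np.poly(BC), [1.0] + a)         # char. polynomial is p(x)

# Remark 2.5 (for the n = 6 matrix left over from the loop): odd rows/columns
# first, then even ones, yields the stated 2 x 2 block form.
perm = [0, 2, 4, 1, 3, 5]
Z = BC[np.ix_(perm, perm)]
assert np.allclose(Z[:3, :3], [[-a[0], 1, 0], [0, 0, 1], [0, 0, 0]])                  # Z11
assert np.allclose(Z[:3, 3:], [[-a[1], 0, 0], [-a[2], -a[3], 0], [0, -a[4], -a[5]]])  # Z12
assert np.allclose(Z[3:, :3], [[1, 0, 0], [0, 0, 0], [0, 0, 0]])                      # Z21
assert np.allclose(Z[3:, 3:], [[0, 0, 0], [1, 0, 0], [0, 1, 0]])                      # Z22
```

The permutation `perm` lists the odd positions 1, 3, 5 first and the even positions 2, 4, 6 second (0-based), exactly the permutational similarity of Remark 2.5.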

3. Comments

Note that the presented companion matrix cannot be obtained (for n > 2) by permutational similarity from the usual companion matrix. Similarly to the results in [2,5], the new companion matrix can be used in estimates of the roots, possibly also in computations of the roots using algorithms for the eigenvalues.

Since det(xI - BC) = det C det(xC^{-1} - B) for C non-singular, and the matrix C in Theorem 2.3 can easily be inverted, we obtain

Theorem 3.1. The roots of p(x) = 0 coincide with the roots of the equation in determinantal form, in which the n x n matrix is symmetric and tridiagonal (blank entries are zero):

    det [ a_1 + x   -1                                      ]
        [  -1        0        x                             ]
        [            x    a_3 + a_2 x   -1                  ]  =  0.
        [                     -1         0         x        ]
        [                                x    a_5 + a_4 x   ]
        [                                                 . ]

We have now the following lemma, the proof of which, by checking, is immediate.

Lemma 3.2. The matrix

    G = [ 2α  1 ]
        [  1  0 ]

has the spectral decomposition G = QDQ^T, where, for w = sqrt(α² + 1),

    Q = (1/sqrt(2w)) [ sqrt(w + α)  -sqrt(w - α) ]
                     [ sqrt(w - α)   sqrt(w + α) ]

is orthogonal and D = diag{α + w, α - w} is diagonal. In addition, the modulus |G|, i.e. the positive semidefinite matrix P for which P² = GG^T, is

    |G| = (1/w) [ 2α² + 1   α ]
                [  α        1 ];

the positive semidefinite square root of |G| is

    R = (1/(2w)) [ (w + α)^{3/2} + (w - α)^{3/2}    (w + α)^{1/2} - (w - α)^{1/2} ]
                 [ (w + α)^{1/2} - (w - α)^{1/2}    (w + α)^{1/2} + (w - α)^{1/2} ];

the spectral decomposition of R is QD_0Q^T, where D_0 = diag{sqrt(w + α), sqrt(w - α)}.
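Both statements are easy to probe numerically. Below is a sketch with numpy for n = 5; the coefficient values and the value of alpha are arbitrary test data, and the matrix of Theorem 3.1 is written out entry by entry as displayed above.

```python
import numpy as np

# Numerical check of Theorem 3.1 (n = 5) and of the modulus formula in Lemma 3.2.
a1, a2, a3, a4, a5 = 2.0, -1.0, 3.0, 0.5, -2.0

def T(x):
    """The symmetric tridiagonal matrix of Theorem 3.1, written out for n = 5."""
    diag = [a1 + x, 0.0, a3 + a2 * x, 0.0, a5 + a4 * x]
    off = [-1.0, x, -1.0, x]
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

for r in np.roots([1.0, a1, a2, a3, a4, a5]):
    assert abs(np.linalg.det(T(r))) < 1e-6   # det T(x) vanishes at every root of p

alpha = 0.7
w = np.hypot(alpha, 1.0)                     # w = sqrt(alpha**2 + 1)
G = np.array([[2 * alpha, 1.0], [1.0, 0.0]])
modG = np.array([[2 * alpha**2 + 1, alpha], [alpha, 1.0]]) / w
assert np.allclose(modG @ modG, G @ G.T)     # modG squares to G G^T, so modG = |G|
```

Note that complex roots of p pose no problem: `T(r)` simply becomes a complex symmetric matrix and its determinant still vanishes.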

Using this lemma, one can find explicitly the moduli of the matrices B and C in Theorem 2.3. The singular values of BC are the eigenvalues of |B| |C|; these can thus be obtained as eigenvalues of the symmetric heptadiagonal matrix |B|^{1/2} |C| |B|^{1/2} and used for estimation of the roots.

In our opinion, a particularly interesting feature of the matrices B and C from Theorem 2.3 is the following: if g(x) and h(x) are polynomials for which p(x) = g(x²) + xh(x²), then for n odd B depends on the coefficients of g only, whereas C depends on the coefficients of h. For n even, it is the other way around.

It is also easy to find explicitly the QR-, RQ-, etc. decompositions of both matrices B and C from Theorem 2.3 and use them for manipulation. As a sample, if B = Q_1 R_1 and C = R_2 Q_2, then Q_2 Q_1 R_1 R_2 is the QR-decomposition of another companion matrix, which is a banded matrix with a small number of bands. For n = 5 one gets

    Q = [ -a_1/w_1   1/w_1     0          0         0  ]
        [  0         0        -a_3/w_3    1/w_3     0  ]
        [  1/w_1     a_1/w_1   0          0         0  ]
        [  0         0         0          0        -1  ]
        [  0         0         1/w_3      a_3/w_3   0  ]

    R = [ w_1   -a_1/w_1   a_1 a_2/w_1    0         0           ]
        [ 0      1/w_1    -a_2/w_1        0         0           ]
        [ 0      0         w_3           -a_3/w_3   a_3 a_4/w_3 ]
        [ 0      0         0              1/w_3    -a_4/w_3     ]
        [ 0      0         0              0         a_5         ]

where w_k = sqrt(a_k² + 1) for k = 1, 3.

Observe also that some of the matrices considered above have superdiagonal rank (sometimes subdiagonal rank) one. Here, as in [3], we call the subdiagonal rank (respectively, superdiagonal rank) of a square matrix the order of the maximal nonsingular submatrix all of whose entries are in the subdiagonal (respectively, superdiagonal) part. In fact, in the matrix R the superdiagonal rank is one even if we add to the superdiagonal part all the even diagonal positions (in the sense of [3]). In some of the mentioned cases a similar property holds for the subdiagonal or superdiagonal rank, too.

References

[1] S. Barnett, Congenial matrices, Linear Algebra Appl. 41 (1981) 277-298.
[2] L. Brand, Applications of the companion matrix, Amer. Math. Monthly 75 (1968) 146-152.
[3] M. Fiedler, Structure ranks of matrices, Linear Algebra Appl. 179 (1993) 119-128.
[4] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[5] H. Linden, Bounds for the zeros of polynomials from eigenvalues and singular values of some companion matrices, Linear Algebra Appl. 271 (1998) 41-82.