Special Classes of Positive and Completely Positive Maps

Chi-Kwong Li and Hugo J. Woerdeman†
Department of Mathematics, The College of William and Mary, Williamsburg, Virginia

ABSTRACT

Characterizations are given for the positive and completely positive maps on $n \times n$ complex matrices that leave invariant the diagonal entries or the $k$th elementary symmetric function of the diagonal entries, $1 < k \le n$. In addition, it is shown that such a positive map is always decomposable if $n \le 3$, and that this fails to hold if $n > 3$. The real case is also considered.

1. INTRODUCTION

Let $M_n$ denote the $*$-algebra of $n \times n$ complex matrices, where $*$ is hermitian conjugation. A (complex) linear map $\phi : M_n \to M_n$ is positive if $\phi$ maps the set of positive definite matrices into itself; $\phi$ is $m$-positive if $\phi \otimes I_m$ is positive on $M_n \otimes M_m$, where $I_m$ is the identity operator on $M_m$; and $\phi$ is completely positive if $\phi$ is $m$-positive for every $m$. The structure of completely positive maps on $M_n$ is quite well understood (e.g., see [C, PH, BHH, LoS]). Recently, there has been interest in studying completely positive maps satisfying some special properties, such as leaving invariant the trace function or the identity matrix (e.g., see [BPS, LS] and their references). In particular, it has been shown that the structure of such maps is quite complicated.

For $1 \le k \le n$, let $S_k(x_1, \ldots, x_n)$ denote the $k$th elementary symmetric function of the numbers $x_1, \ldots, x_n$. The purpose of this note is to characterize those positive and completely positive maps $\phi : M_n \to M_n$ that leave invariant the diagonal entries of matrices, i.e.,
$$\phi(X)_{ii} = X_{ii} \quad \text{for all } X \text{ and all } 1 \le i \le n,$$
and those positive and completely positive maps $\phi : M_n \to M_m$, $m \le n$, that leave invariant the $k$th elementary symmetric function $S_k$ of the diagonal entries, i.e.,
$$S_k(\phi(X)_{11}, \phi(X)_{22}, \ldots, \phi(X)_{mm}) = S_k(X_{11}, \ldots, X_{nn}) \quad \text{for all } X,$$
where $1 < k \le m$. It turns out that all the trouble is restricted to the case $k = 1$. For $k > 1$, the structure of those maps is easy to describe. We also obtain similar results for the real case.

After the first version of this paper was submitted, we learned that our Theorems 1 and 3 had been obtained independently by Kye, and would appear in [K].

† Partially supported by NSF grant DMS 9594.
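The elementary symmetric functions of the diagonal entries, which are the invariants studied throughout the paper, are easy to compute directly from the definition. The following minimal NumPy sketch is not part of the paper; the helper name s_k_diag is ours. It illustrates $S_k(X)$ and the facts that $S_1$ of the diagonal is the trace and $S_n$ is the product of the diagonal entries.

```python
import numpy as np
from itertools import combinations

def s_k_diag(X, k):
    """k-th elementary symmetric function of the diagonal entries of X."""
    d = np.diag(X)
    return sum(np.prod([d[i] for i in idx]) for idx in combinations(range(len(d)), k))

# S_1 of the diagonal entries is the trace; S_n is their product.
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
assert np.isclose(s_k_diag(X, 1), np.trace(X))
assert np.isclose(s_k_diag(X, 2), 2*3 + 2*4 + 3*4)
assert np.isclose(s_k_diag(X, 3), 2.0 * 3.0 * 4.0)
```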

A sizable part of [K] was devoted to showing that a positive linear map on $M_n$ that leaves invariant the diagonal entries must be decomposable if $n = 3$. In [KK], it was shown that this conclusion fails if $n \ge 4$. We found a different characterization for the decomposability of such positive maps (cf. Theorem 5(c)) that led to shorter proofs of the results in [K] and [KK]. We include these results in Section 4. Since the proofs of our Theorems 1 and 3 are rather short, we include them in our discussion for the sake of completeness, even though they have been proved in [K].

In our discussion, we shall use $\{E_{11}, E_{12}, \ldots, E_{nn}\}$ to denote the standard basis of $M_n$. For a matrix $A \in M_n$, we write $A \ge 0$ ($A > 0$) if $A$ is positive semidefinite (positive definite).

2. RESULTS ON COMPLETELY POSITIVE MAPS

Recall that $A \in M_n$ is a correlation matrix if $A \ge 0$ with all diagonal entries equal to one. It is well known that if $A \in M_n$ is a correlation matrix, then the map $\phi_A : M_n \to M_n$ of the form $X \mapsto A \circ X$ is completely positive; here $A \circ X$ denotes the Schur or Hadamard product. Clearly, for such a completely positive map, $\phi_A(X)$ and $X$ have the same diagonal entries. In Theorem 1 we show that every completely positive map that leaves invariant the diagonal entries is of this form. As a result, the set of such completely positive maps is linearly isomorphic to the convex set of correlation matrices. For the structure of the set of correlation matrices, one may see [CV, GPW, Lo, LiT].

Theorem 1. Suppose $\phi : M_n \to M_n$ is a completely positive map. The following conditions are equivalent.

(a) $\phi$ satisfies $\phi(X)_{ii} = X_{ii}$ for all $X$ and all $1 \le i \le n$.
(b) $\phi(E_{ii}) = E_{ii}$ for all $1 \le i \le n$.
(c) There is a correlation matrix $A \in M_n$ such that $\phi$ is of the form $X \mapsto A \circ X$.

Proof. (a) $\Rightarrow$ (b): If $\phi : M_n \to M_n$ is completely positive and leaves the diagonal entries invariant, then $\phi(E_{ii}) \ge 0$ with diagonal entries equal to those of $E_{ii}$. It follows that $\phi(E_{ii}) = E_{ii}$.

(b) $\Rightarrow$ (c): It is known (e.g., see [C] and [PH]) that $\phi : M_n \to M_n$ is a completely positive map if and only if the $n \times n$ block matrix $(\phi(E_{ij}))_{1 \le i,j \le n}$ is positive semidefinite. It follows that for any $i \ne j$, $\phi(E_{ij}) = a_{ij}E_{ij}$ for some $a_{ij} \in \mathbb{C}$ with $a_{ij} = \overline{a_{ji}}$. Set $a_{ii} = 1$ for $i = 1, \ldots, n$, and let $A = (a_{ij})$. Then $\phi$ is of the form $X \mapsto A \circ X$. Since $J = \sum_{1 \le i,j \le n} E_{ij}$, we have $A = A \circ J = \phi(J) \ge 0$.

(c) $\Rightarrow$ (a): Clear.

The set of completely positive maps that leave the trace function invariant has been studied by several authors (see [BPS, LS] and their references).
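Theorem 1 and its proof rest on the block-matrix criterion for complete positivity cited from [C] and [PH]. The sketch below is our own illustration (assuming NumPy; the helper names are hypothetical): it builds the block matrix $(\phi(E_{ij}))_{i,j}$ for the Schur-multiplier map $X \mapsto A \circ X$ with a randomly generated correlation matrix $A$, and checks numerically that this block matrix is positive semidefinite and that the map preserves diagonal entries.

```python
import numpy as np

def random_correlation_matrix(n, rng):
    """A >= 0 with unit diagonal, obtained by normalizing a random Gram matrix."""
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    S = G @ G.conj().T
    d = np.sqrt(np.diag(S).real)
    return S / np.outer(d, d)

def block_matrix(phi, n):
    """The n x n block matrix (phi(E_ij))_{i,j}; it is positive semidefinite
    iff phi is completely positive, by the criterion cited from [C] and [PH]."""
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = 1.0
            C[i * n:(i + 1) * n, j * n:(j + 1) * n] = phi(E)
    return C

rng = np.random.default_rng(0)
n = 4
A = random_correlation_matrix(n, rng)
phi = lambda X: A * X                     # the Schur (Hadamard) product map X -> A o X

C = block_matrix(phi, n)
assert np.min(np.linalg.eigvalsh(C)) > -1e-10               # phi is completely positive
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert np.allclose(np.diag(phi(X)), np.diag(X))             # diagonal entries preserved
```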

It was shown that the structure of this set is rather complicated. Notice that the trace function is just the first elementary symmetric function of the diagonal entries. In Theorem 2 we show that for $k > 1$, the structure of those completely positive maps that leave invariant the $k$th elementary symmetric function of the diagonal entries is much simpler. To simplify our notation, we shall use $S_k(X)$ to denote the $k$th elementary symmetric function of the diagonal entries of the square matrix $X$.

Theorem 2. Let $m, n, k$ be positive integers such that $1 < k \le m \le n$. A completely positive map $\phi : M_n \to M_m$ satisfies $S_k(\phi(X)) = S_k(X)$ for all $X \in M_n$ if and only if $m = n$ and $\phi$ is of the form $X \mapsto P^t(A \circ X)P$, where $A \in M_n$ is a correlation matrix, and

(i) if $1 < k < n$ then $P \in M_n$ is a permutation matrix;
(ii) if $k = n$ then $P \in M_n$ is a product of a permutation matrix and a diagonal matrix $\sum_{j=1}^n a_jE_{jj}$ for some positive numbers $a_1, \ldots, a_n$ satisfying $\prod_{j=1}^n a_j = 1$.

Proof. Suppose $\phi : M_n \to M_m$ is a completely positive map leaving the $k$th elementary symmetric function of the diagonal entries invariant. Then $S_k(t\phi(E_{jj}) + \phi(D)) = S_k(\phi(tE_{jj} + D)) = S_k(tE_{jj} + D)$ is a polynomial in $t$ of degree one for any positive diagonal matrix $D$. It follows that $\phi(E_{jj}) = a_jE_{r_jr_j}$ for some $1 \le r_j \le m$. Suppose the function $j \mapsto r_j$ is not injective, say both $\phi(E_{pp})$ and $\phi(E_{qq})$ are multiples of $E_{ss}$. Then there exists $K \subseteq \{1, \ldots, n\}$ with $k$ elements including $p$ and $q$ such that $1 = S_k(\sum_{j \in K} E_{jj})$, but $\phi(\sum_{j \in K} E_{jj})$ has fewer than $k$ nonzero diagonal entries and hence $S_k(\phi(\sum_{j \in K} E_{jj})) = 0$, a contradiction. As a result, we have $m = n$. Furthermore, there is a matrix $P \in M_n$ such that $P(\sum_{j=1}^n a_j^{1/2}E_{jj})$ is a permutation matrix and the linear operator $\tilde\phi$ defined by $\tilde\phi(X) = P^t\phi(X)P$ is a completely positive map satisfying $\tilde\phi(E_{jj}) = E_{jj}$ for all $j = 1, \ldots, n$. Applying Theorem 1 to $\tilde\phi$, we conclude that $\tilde\phi$ is of the form $X \mapsto A \circ X$ for some correlation matrix $A$. For any $j_1 < \cdots < j_k \le n$, we have $1 = S_k(\sum_{t=1}^k E_{j_tj_t}) = S_k(\phi(\sum_{t=1}^k E_{j_tj_t})) = \prod_{t=1}^k a_{j_t}$. If $k = n$, we have $\prod_{j=1}^n a_j = 1$; if $k < n$, we have $a_j = 1$ for all $j = 1, \ldots, n$. Thus $\phi$ is of the asserted form. The converse of the theorem is clear.

3. RESULTS ON POSITIVE MAPS

In this section, we characterize those positive maps that leave invariant the diagonal entries or the $k$th elementary symmetric function of the diagonal entries for a fixed $k > 1$. As can be seen, the structures of these maps are just slightly more complicated than those in the previous section. We shall denote by $\phi_{A,B}$ the map on $M_n$ defined by $\phi_{A,B}(X) = I \circ X + A \circ X + B \circ X^t$ for given hermitian matrices $A$ and $B$ with zero diagonals.
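The form $X \mapsto P^t(A \circ X)P$ in Theorem 2(i) preserves every elementary symmetric function of the diagonal, since $A \circ X$ has the same diagonal as $X$ and conjugation by a permutation matrix only permutes the diagonal. A quick numerical check of this (ours, assuming NumPy; the test data are random):

```python
import numpy as np
from itertools import combinations

def s_k_diag(X, k):
    d = np.diag(X)
    return sum(np.prod([d[i] for i in idx]) for idx in combinations(range(len(d)), k))

rng = np.random.default_rng(1)
n, k = 4, 2

# A correlation matrix A and a permutation matrix P.
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = G @ G.conj().T
d = np.sqrt(np.diag(S).real)
A = S / np.outer(d, d)
P = np.eye(n)[rng.permutation(n)]

phi = lambda X: P.T @ (A * X) @ P          # the form in Theorem 2(i)

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
# A o X has the same diagonal as X, and conjugation by P merely permutes the
# diagonal, so every elementary symmetric function of the diagonal is preserved.
assert np.isclose(s_k_diag(phi(X), k), s_k_diag(X, k))
```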

Theorem 3. Suppose $\phi : M_n \to M_n$ is a positive map. The following conditions are equivalent.

(a) $\phi$ satisfies $\phi(X)_{ii} = X_{ii}$ for all $X$ and all $1 \le i \le n$.
(b) $\phi(E_{ii}) = E_{ii}$ for all $1 \le i \le n$.
(c) $\phi = \phi_{A,B}$ for some hermitian matrices $A, B \in M_n$ with zero diagonals and such that $I + D^*AD + DBD^* \ge 0$ for all diagonal unitary matrices $D$.

Proof. (a) $\Rightarrow$ (b): Use the same arguments as in the proof of Theorem 1.

(b) $\Rightarrow$ (c): For any $1 \le r < s \le n$, $\phi(E_{rr} + E_{rs} + E_{sr} + E_{ss}) \ge 0$ with diagonal equal to that of $E_{rr} + E_{ss}$. Thus $\phi(E_{rs} + E_{sr}) = x_{rs}E_{rs} + x_{sr}E_{sr}$ with $x_{sr} = \overline{x_{rs}}$. Similarly, we have $\phi(iE_{rs} - iE_{sr}) = i(y_{rs}E_{rs} - y_{sr}E_{sr})$ with $y_{sr} = \overline{y_{rs}}$. Set $x_{rr} = y_{rr} = 0$ for $1 \le r \le n$, and set $X = (x_{rs}), Y = (y_{rs}) \in M_n$. Then for any hermitian matrix $H = R + iS$, where $R$ is real symmetric and $S$ is real skew-symmetric, we have $\phi(H) = (I + X) \circ R + i(Y \circ S)$. Let $Z = (H_1 + iH_2) + i(G_1 + iG_2) \in M_n$, where $H_1, G_1$ are real symmetric and $H_2$ and $G_2$ are real skew-symmetric. We have
$$\phi(Z) = (I + X) \circ (H_1 + iG_1) + Y \circ (iH_2 - G_2) = I \circ Z + A \circ Z + B \circ Z^t,$$
where $A = (X + Y)/2$ and $B = (X - Y)/2$. Since $\phi$ is a positive map, if $Z = vv^*$ with $v = (v_1, \ldots, v_n)^t$ where $|v_1| = \cdots = |v_n| = 1$, and if $D = \sum_{j=1}^n \bar v_jE_{jj}$, then $0 \le \phi(Z) = I + D^*AD + DBD^*$.

(c) $\Rightarrow$ (a): Let $\phi = \phi_{A,B}$ be defined as in (c). Clearly, $\phi(X)_{ii} = X_{ii}$ for all $X$ and all $1 \le i \le n$. It remains to show that $\phi$ is a positive map. Now, for any $x = (x_1, \ldots, x_n)^t \in \mathbb{C}^n$ with $x_j \ne 0$ for all $1 \le j \le n$, we have $\phi(xx^*) = X(I + D^*AD + DBD^*)X$ for $X = \sum_{j=1}^n |x_j|E_{jj}$ and some diagonal unitary matrix $D$. Hence $\phi(xx^*) \ge 0$. By continuity, we have $\phi(xx^*) \ge 0$ for all $x \in \mathbb{C}^n$, and hence $\phi$ maps the cone of positive semidefinite matrices into itself. If $X$ is positive definite, then $X = tI + \tilde X$ for some $t > 0$ and some $\tilde X \ge 0$. It follows that $\phi(X) = \phi(tI + \tilde X) = tI + \phi(\tilde X) > 0$.

Remark 1. Suppose $A$ and $B$ are hermitian matrices with zero diagonals. One easily checks that the following conditions are equivalent.

(i) $I + D^*AD + DBD^* \ge 0$ for all diagonal unitary matrices $D$.
(ii) $I + A + DBD^* \ge 0$ for all diagonal unitary matrices $D$.
(iii) $I + D^*AD + B \ge 0$ for all diagonal unitary matrices $D$.

We chose to use condition (i) in Theorem 3(c) because of its symmetry with respect to $A$ and $B$. It would be nice to have an explicit condition on hermitian matrices $A$ and $B$ with zero diagonals such that conditions (i)-(iii) hold. When $n = 2$, this happens if and only if $|a_{12}| + |b_{12}| \le 1$. The problem seems rather difficult for $n \ge 3$. As pointed out by Dr. D. Farenick, one can deduce Theorem 1 from Theorem 3.
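The key computation behind Theorem 3 is the identity $\phi_{A,B}(vv^*) = I + D^*AD + DBD^*$ for vectors $v$ with unimodular entries and $D = \operatorname{diag}(\bar v_1, \ldots, \bar v_n)$, which links the map $\phi_{A,B}$ to the matrix condition in (c). The following is a small sketch of ours verifying this identity on random data (assuming NumPy; the scale 0.1 and the helper name are arbitrary test choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def zero_diag_hermitian(n, rng, scale):
    """A random hermitian matrix with zero diagonal (test data only)."""
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (H + H.conj().T) / 2
    np.fill_diagonal(H, 0)
    return scale * H

A = zero_diag_hermitian(n, rng, 0.1)
B = zero_diag_hermitian(n, rng, 0.1)

def phi_AB(X):
    """phi_{A,B}(X) = I o X + A o X + B o X^t (entrywise products)."""
    return np.diag(np.diag(X)) + A * X + B * X.T

# For Z = v v* with unimodular entries and D = diag(conj(v)),
# phi_{A,B}(Z) = I + D* A D + D B D*, the matrix appearing in Theorem 3(c).
v = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
Z = np.outer(v, v.conj())
D = np.diag(v.conj())
lhs = phi_AB(Z)
rhs = np.eye(n) + D.conj().T @ A @ D + D @ B @ D.conj().T
assert np.allclose(lhs, rhs)
```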

By Theorem 3 and arguments similar to those in the proof of Theorem 2, we have the following result characterizing those positive maps that leave invariant the $k$th elementary symmetric function of the diagonal entries for $k > 1$.

Theorem 4. Let $m, n, k$ be positive integers such that $1 < k \le m \le n$. A positive map $\phi : M_n \to M_m$ satisfies $S_k(\phi(X)) = S_k(X)$ for all $X \in M_n$ if and only if $m = n$ and $\phi$ is of the form $X \mapsto P^t(\phi_{A,B}(X))P$ for some hermitian matrices $A, B \in M_n$ with zero diagonals, and some $P$ satisfying Theorem 2(i) or (ii).

4. DECOMPOSABILITY

A positive linear map $\phi : M_n \to M_n$ is said to be decomposable if it is the sum of a completely positive map and a completely co-positive map. Recall that $\phi$ is completely co-positive if the mapping $A \mapsto \phi(A^t)$ is completely positive. For positive maps that leave invariant the diagonal entries, we have the following equivalent conditions for decomposability.

Theorem 5. Let $\phi_{A,B} : M_n \to M_n$ be positive, i.e., satisfy the condition in Theorem 3(c). The following are equivalent.

(a) $\phi_{A,B}$ is decomposable.
(b) There exist diagonal matrices $D_A$ and $D_B$ such that
$$I = D_A + D_B, \quad D_A + A \ge 0, \quad \text{and} \quad D_B + B \ge 0. \qquad (1)$$
(c) For all $X, Y \ge 0$ with $I \circ X = I \circ Y$ we have
$$\operatorname{tr}(X + AX + BY) \ge 0. \qquad (2)$$

Proof. For the equivalence of (a) and (b) see [K]. For (b) $\Rightarrow$ (c) note that $\operatorname{tr}((D_A + A)X) \ge 0$ and $\operatorname{tr}((D_B + B)Y) \ge 0$ imply (2), where we use the identities $I = D_A + D_B$ and $I \circ X = I \circ Y$.

(c) $\Rightarrow$ (b): First note that it suffices to find diagonal matrices $D = \sum_{j=1}^n d_jE_{jj}$ and $E = \sum_{j=1}^n e_jE_{jj}$ so that
$$I - D - E \ge 0, \quad D + A \ge 0, \quad E + B \ge 0. \qquad (3)$$
Indeed, if (3) holds we may put $D_A = D$ and $D_B = I - D$ to obtain (1).
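Condition (b) of Theorem 5 makes the decomposition explicit: if (1) holds, then $\phi_{A,B}(X) = (D_A + A)\circ X + (D_B + B)\circ X^t$, where the first summand is completely positive (a Schur multiplier by a positive semidefinite matrix) and the second is completely co-positive. The following sketch is ours (assuming NumPy) and verifies this for a $2 \times 2$ example, using the choice $D_A = |a_{12}|I$ that also appears in the $n = 2$ case of the proof of Theorem 6 below.

```python
import numpy as np

def block_matrix(phi, n):
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = 1.0
            C[i * n:(i + 1) * n, j * n:(j + 1) * n] = phi(E)
    return C

def is_psd(M, tol=1e-10):
    return np.min(np.linalg.eigvalsh((M + M.conj().T) / 2)) > -tol

# A 2 x 2 example with |a_12| + |b_12| <= 1, the positivity condition of Remark 1.
a, b = 0.5, 0.4 * np.exp(0.7j)
A = np.array([[0, a], [np.conj(a), 0]])
B = np.array([[0, b], [np.conj(b), 0]])
D_A = abs(a) * np.eye(2)            # the choice used for n = 2 in the proof of Theorem 6
D_B = np.eye(2) - D_A

# With (1) satisfied, phi_{A,B} = phi_1 + phi_2, where phi_1(X) = (D_A + A) o X
# is completely positive and phi_2(X) = (D_B + B) o X^t is completely co-positive.
phi1 = lambda X: (D_A + A) * X
phi2 = lambda X: (D_B + B) * X.T
assert is_psd(D_A + A) and is_psd(D_B + B)                  # condition (1)
assert is_psd(block_matrix(phi1, 2))                        # phi_1 is completely positive
assert is_psd(block_matrix(lambda X: phi2(X.T), 2))         # phi_2 is completely co-positive

phi = lambda X: np.diag(np.diag(X)) + A * X + B * X.T       # phi_{A,B}
X = np.array([[1.0, 2.0 + 1.0j], [0.5j, 3.0]])
assert np.allclose(phi(X), phi1(X) + phi2(X))
```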

Let
$$A_0 = I \oplus A \oplus B, \qquad A_i = (-E_{ii}) \oplus E_{ii} \oplus 0, \qquad A_{n+i} = (-E_{ii}) \oplus 0 \oplus E_{ii} \in M_{3n}, \qquad i = 1, \ldots, n.$$
Then (3) is equivalent to
$$A_0 + \sum_{i=1}^n d_iA_i + \sum_{i=1}^n e_iA_{n+i} \ge 0. \qquad (4)$$
By a standard result on linear matrix inequalities (see, e.g., [BEFB]), there exist $d_i$ and $e_i$ such that (4) holds if and only if for every positive semidefinite $Z \in M_{3n}$,
$$\operatorname{tr}(ZA_i) = 0, \ i = 1, \ldots, 2n, \quad \text{implies} \quad \operatorname{tr}(ZA_0) \ge 0. \qquad (5)$$
Now, suppose $X, Y \ge 0$ with $I \circ X = I \circ Y$. Then $Z := X \oplus X \oplus Y$ is positive semidefinite with $\operatorname{tr}(ZA_i) = 0$, $i = 1, \ldots, 2n$, and $\operatorname{tr}(ZA_0) = \operatorname{tr}(X + AX + BY)$. Conversely, if $Z \ge 0$ satisfies $\operatorname{tr}(ZA_i) = 0$, $i = 1, \ldots, 2n$, then its diagonal $n \times n$ blocks $X', X, Y$ satisfy $I \circ X' = I \circ X = I \circ Y$ and $\operatorname{tr}(ZA_0) = \operatorname{tr}(X') + \operatorname{tr}(AX) + \operatorname{tr}(BY) = \operatorname{tr}(X + AX + BY)$. Thus the condition in (5) is precisely condition (2), and it follows from (5) that (3) must hold.

By Remark 1, the positivity requirement on $\phi_{A,B}$ may be reformulated as
$$\operatorname{tr}(X + AX + BD^*XD) \ge 0 \qquad (6)$$
for all $X \ge 0$ and all diagonal unitary matrices $D$. Since $I \circ (D^*XD) = I \circ X$ for any diagonal unitary matrix $D$, we see that (6) is a particular case of (2). This, of course, is no surprise, as decomposability implies positivity. It is natural to ask to what extent the converse is also true, i.e., when does positivity of $\phi_{A,B}$ imply decomposability? We next show that this converse holds for dimensions 3 and lower, but in general fails to hold for dimensions 4 and higher.

Theorem 6. Every positive map on $M_n$ of the form $\phi_{A,B}$, where $A, B \in M_n$ are hermitian matrices with zero diagonals, is decomposable if and only if $n \le 3$.

Proof. For $n = 1$ the statement is trivial. For $n = 2$ one can choose $D_A = |a_{12}|I$ and $D_B = I - D_A$ to obtain the decomposition.

Let $n = 3$. Suppose (6) holds for all $X \ge 0$ and all diagonal unitary matrices $D$. We need to show that (2) holds for all $X, Y \ge 0$ with $I \circ X = I \circ Y$. It suffices to consider the case when $X$ and $Y$ have positive diagonals; the general case will then follow from continuity. For such $X$ and $Y$, let $E = (I \circ X)^{1/2}$. Then $E^{-1}XE^{-1}$ and $E^{-1}YE^{-1}$ are correlation matrices. Since (e.g., see [LiT]) all $3 \times 3$ correlation matrices are convex combinations of rank one correlation matrices, we have $E^{-1}XE^{-1} = \sum_{i=1}^r a_iX_i$ and $E^{-1}YE^{-1} = \sum_{i=1}^s b_iY_i$, where $a_1 \ge \cdots \ge a_r > 0$ and $b_1 \ge \cdots \ge b_s > 0$ are two sets of convex coefficients, and the $X_i$, $Y_i$ are rank one correlation matrices.

Suppose $a_r \ge b_s$. Write $E^{-1}XE^{-1}$ as
$$E^{-1}XE^{-1} = \sum_{i=1}^{r-1} a_iX_i + (a_r - b_s)X_r + b_sX_r.$$
Apply these arguments to the matrices $\sum_{i=1}^{r-1} a_iX_i + (a_r - b_s)X_r$ and $\sum_{i=1}^{s-1} b_iY_i$ to remove $Y_{s-1}$ if $a_r - b_s \ge b_{s-1}$, or $X_r$ if $a_r - b_s < b_{s-1}$. After finitely many repetitions of this process, we can write
$$E^{-1}XE^{-1} = \sum_i c_i\tilde X_i \quad \text{and} \quad E^{-1}YE^{-1} = \sum_i c_i\tilde Y_i,$$
where the $c_i$ form a set of convex coefficients, and the $\tilde X_i$, $\tilde Y_i$ are rank one correlation matrices. Since rank one correlation matrices are similar via diagonal unitary matrices, we have $\tilde Y_i = D_i\tilde X_iD_i^*$ for some diagonal unitary $D_i$. It then follows that
$$X = E\Big(\sum_i c_i\tilde X_i\Big)E = \sum_i c_i(E\tilde X_iE) \quad \text{and} \quad Y = E\Big(\sum_i c_iD_i\tilde X_iD_i^*\Big)E = \sum_i c_iD_i(E\tilde X_iE)D_i^*.$$
Hence
$$\operatorname{tr}(X + AX + BY) = \sum_i c_i\operatorname{tr}\big((E\tilde X_iE) + A(E\tilde X_iE) + BD_i(E\tilde X_iE)D_i^*\big) \ge 0.$$

It remains to provide for each $n \ge 4$ an example of a positive map $\phi_{A,B}$ that is not decomposable. It suffices to provide such a $\phi_{A,B}$ for $n = 4$, as one may embed this example in higher dimensions by taking $A \oplus 0$ and $B \oplus 0$. Introduce hermitian matrices $M, N \in M_4$ with zero diagonals, constructed so that $I + M \ge 0$, $I + N \ge 0$, and
$$\operatorname{Ker}(I + M) = \operatorname{span}\{(-1, -1, \sqrt 2, 0)^t,\ (-1, i, 0, \sqrt 2)^t\}, \qquad \operatorname{Ker}(I + N) = \operatorname{span}\{(1, -1, -1, 1)^t\}.$$

We claim that there exists an $\varepsilon > 0$ such that
$$2I + M + DND^* \ge \varepsilon I > 0 \qquad (7)$$
for all diagonal unitary matrices $D \in M_4$. Subsequently we shall show that for $(A, B) = (M, N)/(2 - \varepsilon)$, the map $\phi_{A,B}$ is positive but not decomposable.

To prove our claim, suppose that no $\varepsilon > 0$ exists such that (7) holds. Then, since $I + M \ge 0$ and $I + DND^* \ge 0$, by the compactness of the set of $4 \times 4$ diagonal unitary matrices there exists a diagonal unitary matrix $D$ such that
$$\operatorname{Ker}(I + M) \cap \operatorname{Ker}(I + DND^*) \ne \{0\}.$$
In particular, there exist $\alpha, \beta \in \mathbb{C}$ such that
$$\{\alpha(-1, -1, \sqrt 2, 0)^t + \beta(-1, i, 0, \sqrt 2)^t\}/\sqrt 2 = D(1, -1, -1, 1)^t.$$
It follows that
$$|\alpha| = 1, \qquad |\beta| = 1, \qquad |\alpha + \beta| = \sqrt 2, \qquad |\alpha - i\beta| = \sqrt 2.$$
Hence
$$0 = |\alpha + \beta|^2 - |\alpha|^2 - |\beta|^2 = 2\operatorname{Re}(\bar\alpha\beta), \qquad 0 = |\alpha - i\beta|^2 - |\alpha|^2 - |\beta|^2 = 2\operatorname{Re}(i\bar\alpha\beta),$$
and thus
$$1 = |\bar\alpha\beta|^2 = \{\operatorname{Re}(\bar\alpha\beta)\}^2 + \{\operatorname{Re}(i\bar\alpha\beta)\}^2 = 0,$$
which is a contradiction. In conclusion, there exists an $\varepsilon > 0$ such that (7) holds.

Now, let $(A, B) = (M, N)/(2 - \varepsilon)$. By Theorem 3(c) and (7) we obtain that $\phi_{A,B}$ is positive.

On the other hand, one can exhibit matrices $X, Y \in M_4$ with $X \ge 0$, $Y \ge 0$, $I \circ X = I \circ Y$, and $\operatorname{tr}(X + AX + BY) < 0$. By Theorem 5(c) we see that $\phi_{A,B}$ is not decomposable.

Theorem 6 was also obtained in [K] and [KK] using different methods. By Theorems 4 and 6, we have the following corollary.

Corollary 7. Let $k$ be a given integer with $1 < k \le n$. Every positive map on $M_n$ that preserves the $k$th elementary symmetric function of the diagonal entries is decomposable if and only if $n \le 3$.

5. RESULTS ON REAL MATRICES

There has also been interest in studying completely positive maps and positive maps on $M_n(\mathbb{R})$, the algebra of $n \times n$ real matrices. As in the complex case, we say that a (real) linear transformation $\phi : M_n(\mathbb{R}) \to M_m(\mathbb{R})$ is positive if $\phi$ maps (real symmetric) positive definite matrices to positive definite matrices, and is completely positive if the block matrix $(\phi(E_{ij}))_{1 \le i,j \le n}$ is positive semidefinite. Using these definitions, one easily obtains the real analogs of Theorem 1 and Theorem 2. However, the results on positive maps are slightly different, due to the fact that a positive map imposes no condition on the space of skew-symmetric matrices.

Theorem 8. Suppose $\phi : M_n(\mathbb{R}) \to M_n(\mathbb{R})$ is a positive map. The following conditions are equivalent.

(a) $\phi$ satisfies $\phi(X)_{ii} = X_{ii}$ for all $X$ and all $1 \le i \le n$.
(b) $\phi(E_{ii}) = E_{ii}$ for all $1 \le i \le n$.
(c) There is a correlation matrix $A \in M_n(\mathbb{R})$ and a linear transformation $\tilde\phi : \{X \in M_n(\mathbb{R}) : X = -X^t\} \to \{X \in M_n(\mathbb{R}) : X_{ii} = 0 \text{ for } i = 1, \ldots, n\}$ such that $\phi$ is of the form $X \mapsto A \circ X + \tilde\phi(X - X^t)$.

Theorem 9. Let $m, n, k$ be positive integers such that $1 < k \le m \le n$. A positive map $\phi : M_n(\mathbb{R}) \to M_m(\mathbb{R})$ satisfies $S_k(\phi(X)) = S_k(X)$ for all $X \in M_n(\mathbb{R})$ if and only if $m = n$ and $\phi$ is of the form $X \mapsto P^t(\psi(X))P$, where $\psi$ satisfies condition (c) of Theorem 8, and $P$ satisfies Theorem 2(i) or (ii).
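In Theorem 8(c) the skew-symmetric part of the argument can be sent to an arbitrary zero-diagonal matrix without affecting either positivity or the diagonal entries, since symmetric inputs have vanishing skew part. A small real-case sketch of ours (assuming NumPy; the map phi_tilde below is an arbitrary hypothetical choice, not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

# A real correlation matrix (positive semidefinite with unit diagonal).
G = rng.standard_normal((n, n))
S = G @ G.T
d = np.sqrt(np.diag(S))
A = S / np.outer(d, d)

# An arbitrary linear map from skew-symmetric matrices to zero-diagonal
# matrices; the specific choice here is only for illustration.
phi_tilde = lambda K: 7.0 * K.T

phi = lambda X: A * X + phi_tilde(X - X.T)     # the form in Theorem 8(c)

X = rng.standard_normal((n, n))
assert np.allclose(np.diag(phi(X)), np.diag(X))        # diagonal entries preserved

# On symmetric inputs the skew part vanishes, so phi acts as the Schur
# multiplier X -> A o X, which maps positive definite to positive definite.
Xpd = G @ G.T + 0.1 * np.eye(n)                        # a positive definite test matrix
assert np.min(np.linalg.eigvalsh(phi(Xpd))) > 0
```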

ACKNOWLEDGEMENT

The first author would like to thank Dr. D. Farenick for some helpful discussions and useful references. This research was motivated by a question of Dr. R. Mathias raised at the Fourth International Linear Algebra Society Conference. Thanks are also due to Dr. S.-H. Kye for supplying preprints of [K] and [KK].

REFERENCES

[BHH] G.P. Barker, R.D. Hill and R.D. Haertel, On the completely positive and positive-semidefinite-preserving cones, Linear Algebra Appl. 56 (1984).
[BPS] R. Bhat, V. Pati and V.S. Sunder, On some convex sets and their extreme points, Math. Ann. 296 (1993).
[BEFB] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.
[C] M.D. Choi, Completely positive linear maps on complex matrices, Linear Algebra Appl. 10:285-290 (1975).
[CV] J.P.R. Christensen and J. Vesterstrom, A note on extreme positive definite matrices, Math. Ann. 244:65-68 (1979).
[GPW] R. Grone, S. Pierce and W. Watkins, Extremal correlation matrices, Linear Algebra Appl. 134:63-79 (1990).
[KK] H.-J. Kim and S.-H. Kye, Indecomposable extreme positive linear maps in matrix algebras, Bull. London Math. Soc. 26 (1994).
[K] S.-H. Kye, Positive linear maps between matrix algebras which fix diagonals, Linear Algebra Appl. 216:239-256 (1995).
[LS] L.J. Landau and R.F. Streater, On Birkhoff's theorem for doubly stochastic completely positive maps of matrix algebras, Linear Algebra Appl. 193:107-127 (1993).
[LiT] C.K. Li and B.S. Tam, A note on extreme correlation matrices, SIAM J. Matrix Anal. Appl. 15:903-908 (1994).
[Lo] R. Loewy, Extreme points of a convex subset of the cone of positive semidefinite matrices, Math. Ann. 253 (1980).
[LoS] R. Loewy and H. Schneider, Positive operators on the n-dimensional ice cream cone, J. Math. Anal. Appl. 49 (1975).
[PH] J.A. Poluikis and R.D. Hill, Completely positive and Hermitian-preserving linear transformations, Linear Algebra Appl. 35:1-10 (1981).
