Bull. Korean Math. Soc. 43 (2006), No. 3

A NOTE ON PARTIAL SIGN-SOLVABILITY

SukGeun Hwang and JinWoo Park

Abstract. In this paper we prove that if Ax = b is a partial sign-solvable linear system with A a sign nonsingular matrix and if α = {j : x_j is sign-determined by Ax = b}, then A_α x_α = b_α is a sign-solvable linear system, where A_α denotes the submatrix of A occupying the rows and columns in α, and x_α and b_α are the subvectors of x and b whose components lie in α. For a sign nonsingular matrix A, let A_1, ..., A_k be the fully indecomposable components of A and let α_r denote the set of row numbers of A_r, r = 1, ..., k. We also show that if Ax = b is a partial sign-solvable linear system then, for r = 1, ..., k, if one of the components of x_{α_r} is a fixed zero solution of Ax = b, then so are all the components of x_{α_r}.

Received February 25. Mathematics Subject Classification: 15A99. Key words and phrases: sign-solvable linear system, partial sign-solvable linear system. This work was supported by the SRC/ERC program of MOST/KOSEF (grant # R ).

1. Introduction

For a real number a, the sign of a, sign(a), is defined by

    sign(a) = 1 if a > 0,  −1 if a < 0,  0 if a = 0.

For a real matrix A, let Q(A) denote the set of all real matrices B = [b_ij] of the same size as A such that sign(b_ij) = sign(a_ij) for all i, j. For a matrix A, the matrix obtained from A by replacing each of its entries by its sign is a (1, −1, 0)-matrix. In that sense, a (1, −1, 0)-matrix is called a sign-pattern matrix.

Consider a linear system Ax = b, where x = (x_1, ..., x_n)^T. A component x_j of x is said to be sign-determined by Ax = b if, for every Ã ∈ Q(A) and every b̃ ∈ Q(b), x_j is determined by Ãx = b̃ and all the elements of the set {x_j : Ãx = b̃, Ã ∈ Q(A), b̃ ∈ Q(b)} have the same sign. Such a common sign will be denoted by sign(x_j : Ax = b) in the sequel. In particular, if sign(x_j : Ax = b) = 0, then we call x_j a fixed zero solution. Ax = b is called partial sign-solvable if at least one of the components of x is sign-determined by Ax = b [3]. The system is sign-solvable if all of the components of x are sign-determined [5].

A matrix A is called an L-matrix if every Ã ∈ Q(A) has linearly independent rows. A square L-matrix is called a sign nonsingular matrix (abbr. SNS-matrix) [2]. It is well known that A is an SNS-matrix if and only if at least one of the n! terms in the expansion of det A is nonzero and all the nonzero terms have the same sign [1, 2, 5]. It is noted in [3] that, for an m × n matrix A having no p × q zero submatrix with p + q ≥ n, if Ax = b is partial sign-solvable then A^T is an L-matrix. Thus A is an SNS-matrix if m = n. An n × n matrix is called fully indecomposable if it contains no p × q zero submatrix with p + q = n.

Suppose that Ax = b is a partial sign-solvable linear system, where A is a sign nonsingular matrix. Let α = {j : x_j is sign-determined by Ax = b}. Let A_α denote the principal submatrix of A occupying the rows and columns in α, and let x_α, b_α denote the subvectors of x and b, respectively, whose components lie in α. In this paper we prove that A_α x_α = b_α is a sign-solvable linear system and that, for j ∈ α, sign(x_j : A_α x_α = b_α) = sign(x_j : Ax = b) unless sign(x_j : A_α x_α = b_α) = 0 while sign(x_j : Ax = b) ≠ 0.

By a theorem of Frobenius [4], A can be transformed, by permutations of rows and columns if necessary, into the form

(1.1)
    [ A_1         O           ...  O          O    ]
    [ A_{21}      A_2         ...  O          O    ]
    [ ...         ...         ...  ...        ...  ]
    [ A_{k−1,1}   A_{k−1,2}   ...  A_{k−1}    O    ]
    [ A_{k1}      A_{k2}      ...  A_{k,k−1}  A_k  ],

where A_r, r = 1, ..., k, are fully indecomposable matrices. The matrices A_1, ..., A_k are called the fully indecomposable components of A. For each r = 1, ..., k, let α_r denote the set of row numbers of A_r.
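For small matrices, the n!-term characterization of SNS-matrices above can be checked directly. The sketch below (the function names are ours, not from the paper) enumerates the terms of the standard determinant expansion of the sign pattern of A and reports whether at least one term is nonzero and all nonzero terms agree in sign. At O(n · n!) cost this is an illustration only, not a practical test.

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation p given as a 0-indexed tuple,
    computed from its cycle structure."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if seen[i]:
            continue
        j, length = i, 0
        while not seen[j]:
            seen[j] = True
            j = p[j]
            length += 1
        if length % 2 == 0:  # an even-length cycle flips the sign
            sign = -sign
    return sign

def is_sns(A):
    """Brute-force SNS test: collect the signs of all nonzero terms
    sgn(p) * a_{1,p(1)} * ... * a_{n,p(n)} of the determinant expansion;
    A is an SNS-matrix iff exactly one sign occurs."""
    n = len(A)
    term_signs = set()
    for p in permutations(range(n)):
        t = perm_sign(p)
        for i in range(n):
            a = A[i][p[i]]
            t *= (a > 0) - (a < 0)  # sign of the entry
        if t != 0:
            term_signs.add(t)
    return len(term_signs) == 1
```

For example, `is_sns([[-1, 0], [1, -1]])` returns `True` (a triangular pattern with negative main diagonal has exactly one nonzero term), while `is_sns([[1, 1], [1, 1]])` returns `False`, since its two nonzero terms have opposite signs.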
Let x^{(r)}, b^{(r)} denote the vectors x_{α_r} and b_{α_r} for brevity. In this paper we show that if Ax = b is partial sign-solvable then, for each r = 1, ..., k, either none of the components of x^{(r)} is a fixed
zero solution of Ax = b or all of the components of x^{(r)} are fixed zero solutions.

2. Preliminaries

For an n × n matrix A = [a_ij], the digraph D(A) associated with A is defined as the digraph whose vertices are 1, 2, ..., n and in which there is an arc i → j if and only if a_ij ≠ 0. A 1-factor of D(A) is a set of cycles or loops γ_1, ..., γ_k such that each of the vertices 1, 2, ..., n belongs to exactly one of the γ_i's. A 1-factor of D(A) gives rise to a nonzero term in the expansion of det A. We call a 1-factor positive (resp. negative) if the product of the signs of the arcs in the 1-factor is positive (resp. negative), where the sign of an arc i → j is defined to be sign(a_ij).

A square matrix A is called irreducible if there does not exist a permutation matrix P such that

    P A P^T = [ X  O ]
              [ Z  Y ],

where X and Y are nonvacuous square matrices. Clearly a fully indecomposable matrix is an irreducible matrix. A digraph D is called strongly connected if for any two vertices u, v of D there is a path from u to v. It is well known that a square matrix A is irreducible if and only if D(A) is strongly connected [4].

Given a linear system Ax = b, let Bx = c be the linear system obtained from Ax = b by applying one or more of the following operations:
- permuting rows of (A, b);
- simultaneously permuting columns of A and components of x;
- multiplying a row of (A, b) by −1;
- multiplying a column of A and the corresponding component of x by −1.
Then Bx = c has the same solution as Ax = b. If A is fully indecomposable, then there are permutation matrices P, Q, R and diagonal matrices D, E with diagonal entries 1 or −1 such that RDPAQER^T has the form (1.1) and has −1's on its main diagonal.

Assume that A is an n × n SNS-matrix and suppose that Ax = b is a partial sign-solvable linear system. Since, by Cramer's rule,

    x_j = det A(j → b) / det A,   (j = 1, 2, ..., n),
where, here and in the sequel, A(j → b) denotes the matrix obtained from A by replacing column j by the vector b, it is clear that x_j is sign-determined if and only if either A(j → b) has identically zero determinant or A(j → b) is an SNS-matrix.

In the sequel, for a submatrix B of A, we assume that B uses the same row numbers and column numbers as in A. For example, if

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ] ,    B = [ a_22  a_23 ]
        [ a_31  a_32  a_33 ]          [ a_32  a_33 ] ,

then row 2 and row 3 of B are (a_22, a_23) and (a_32, a_33), and column 2 and column 3 of B are (a_22, a_32)^T and (a_23, a_33)^T. The indices of the components of subvectors of x and b are used in the same way. We close this section with the following lemma.

Lemma 2.1. Let A be an n × n SNS-matrix with negative main diagonal entries and let α ⊆ {1, 2, ..., n}. Then the following hold.
(1) A_α is an SNS-matrix.
(2) If, for some j ∈ α, x_j is sign-determined by Ax = b, then x_j is sign-determined by A_α x_α = b_α.

Proof. (1) Since A_α is a principal submatrix of A and the main diagonal entries of A are nonzero, D(A_α) has a 1-factor. Since every 1-factor of D(A_α), completed by the loops at the vertices outside α, gives rise to a 1-factor of D(A), the sign nonsingularity of A_α follows from that of A.
(2) Suppose that j ∈ α and that x_j is not sign-determined by A_α x_α = b_α. Then A_α(j → b_α) has both a positive 1-factor and a negative 1-factor, which yield a positive 1-factor and a negative 1-factor of A(j → b) since A has negative main diagonal entries, contradicting the fact that x_j is sign-determined by Ax = b. □

3. Main results

We first show that a partial sign-solvable linear system has a sign-solvable subsystem.

Theorem 3.1. Let A be an n × n SNS-matrix with negative main diagonal entries. Let Ax = b be a partial sign-solvable linear system and let α = {j : x_j is sign-determined by Ax = b}. Then the following hold.
(1) A_α is an SNS-matrix and A_α x_α = b_α is sign-solvable.
(2) For j ∈ α,
(i) if x_j = 0 is a fixed zero solution of Ax = b, then x_j = 0 is a fixed zero solution of A_α x_α = b_α, and
(ii) if x_j is not a fixed zero solution of Ax = b, then sign(x_j : A_α x_α = b_α) = sign(x_j : Ax = b) unless sign(x_j : A_α x_α = b_α) = 0.

Proof. (1) follows directly from Lemma 2.1.
(2) (i) Every 1-factor of D(A_α(j → b_α)) yields a 1-factor of D(A(j → b)). Since x_j = 0 is a fixed zero solution of Ax = b, D(A(j → b)) has no 1-factor. Therefore D(A_α(j → b_α)) has no 1-factor, telling us that A_α(j → b_α) has identically zero determinant.
(ii) Suppose that x_j is not a fixed zero solution of Ax = b. Let |α| = m, where |α| stands for the number of elements of α. Then

    sign(det A(j → b)) = (−1)^{n−m} sign(det A_α(j → b_α))

unless A_α(j → b_α) has identically zero determinant. Since sign(det A) = (−1)^{n−m} sign(det A_α), we see that det A_α(j → b_α)/det A_α and det A(j → b)/det A have the same sign unless det A_α(j → b_α) = 0. □

It may well be possible that sign(x_j : A_α x_α = b_α) = 0 even though sign(x_j : Ax = b) ≠ 0, as we see in the following example.

Example 3.2. Let A = , b = 1 0 , and let x = (x_1, x_2, x_3, x_4)^T be a solution of Ax = b. Then sign(x_3 : Ax = b) = sign(x_4 : Ax = b) = 1, but x_3 and x_4 are fixed zero solutions in the sign-solvable subsystem

    [ 1  0 ] [ x_3 ]   [ 0 ]
    [ 1  1 ] [ x_4 ] = [ 0 ] .

In the rest of this section we investigate the distribution of fixed zero solutions of a partial sign-solvable linear system.

Lemma 3.3. Let A be a fully indecomposable SNS-matrix and let Ax = b be a partial sign-solvable linear system. Then the following are equivalent.
(1) Ax = b has a fixed zero solution.
(2) b = 0.
(3) Every component of x is a fixed zero solution of Ax = b.

Proof. (1) ⇒ (2): Suppose that b = (b_1, ..., b_n)^T ≠ 0. Then b_i ≠ 0 for some i. Since x_j = 0 is a fixed zero solution of Ax = b for some j, it follows that A(j → b) has identically zero determinant. Since the (i, j) entry of A(j → b), which is b_i, is not zero, it follows that A(i | j), the (n−1) × (n−1) matrix obtained from A by deleting row i and column j, has identically zero determinant, so that there exists a p × q zero submatrix of A(i | j) with p + q = (n−1) + 1, contradicting the full indecomposability of A.
(2) ⇒ (3) and (3) ⇒ (1) are clear. □

Lemma 3.4. Let A be a square matrix of the form

    A = [ A_1  O    O    ...  B_1 ]
        [ B_2  A_2  O    ...  O   ]
        [ O    B_3  A_3  ...  O   ]
        [ ...  ...  ...  ...  ... ]
        [ O    ...  O    B_k  A_k ],

where A_r is fully indecomposable and B_r ≠ O for each r = 1, 2, ..., k. Then D(A) has a 1-factor.

Proof. If, for some distinct p, q ∈ {1, ..., k}, there is an arc in D(A) from a vertex of D(A_p) to a vertex of D(A_q), we simply say that there is an arc from D(A_p) to D(A_q). By the structure of A, we see that there is an arc e_r from D(A_r) to D(A_{r−1}) for each r = 1, 2, ..., k, where we assume that A_0 equals A_k. Since D(A_r) is strongly connected for each r = 1, ..., k, there occurs a cycle in D(A) containing all the e_r's as its part, and the lemma is proved. □

Now we are ready to prove our last main theorem.

Theorem 3.5. Let A be an SNS-matrix of the form (1.1) with negative main diagonal entries. Then, for each r = 1, ..., k, either none of the components of x_{α_r} is a fixed zero solution of Ax = b or all of the components of x_{α_r} are fixed zero solutions of Ax = b. Moreover, in the latter case, b_{α_r} = 0.

Proof. We write x^{(r)} and b^{(r)} instead of x_{α_r} and b_{α_r} for the sake of simplicity. We prove the theorem by induction on k. The theorem for k = 1 follows directly from Lemma 3.3.
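The digraph notions used in Lemma 3.4, strong connectivity of D(A) and the existence of a 1-factor, can be verified by brute force on small sign patterns. The sketch below (the function names are ours) builds D(A) from the nonzero entries, tests strong connectivity by a search from every vertex, and tests for a 1-factor by enumerating permutations; it is meant only as an illustration for tiny matrices.

```python
from itertools import permutations

def digraph(A):
    """Adjacency lists of D(A): an arc i -> j for each nonzero a_ij."""
    n = len(A)
    return {i: [j for j in range(n) if A[i][j] != 0] for i in range(n)}

def is_strongly_connected(A):
    """D(A) is strongly connected iff A is irreducible:
    every vertex must reach every other vertex."""
    n, adj = len(A), digraph(A)
    for s in range(n):
        seen, stack = {s}, [s]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(seen) < n:
            return False
    return True

def has_one_factor(A):
    """D(A) has a 1-factor iff some permutation p picks a nonzero
    entry a_{i,p(i)} in every row, i.e. a cycle cover exists."""
    n = len(A)
    return any(all(A[i][p[i]] != 0 for i in range(n))
               for p in permutations(range(n)))
```

For instance, the 3 × 3 pattern [[1, 0, 1], [1, 1, 0], [0, 1, 1]], an instance of Lemma 3.4 with k = 3 and 1 × 1 blocks (nonzero subdiagonal blocks plus a nonzero top-right corner), is strongly connected and has a 1-factor.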
Let k ≥ 2. In the proof of the theorem, we shall call a diagonal block A_r an isolated block if all the blocks in block row r are O except for A_r and if b^{(r)} = 0. Let x_j = 0 be a fixed zero solution of Ax = b, where j ∈ α_p. Suppose that none of A_1, ..., A_p is an isolated block. Then, first of all, b^{(1)} ≠ 0 because A_1 is not an isolated block. If b^{(p)} ≠ 0, then there is an i ∈ α_p such that b_i ≠ 0. Note that A(j → b)_{α_p} = A_p(j → b^{(p)}) and that b_i ≠ 0 is the (i, j)-entry of A_p(j → b^{(p)}), so that there is an arc e = i → j in D(A_p(j → b^{(p)})). (Remember that A_p(j → b^{(p)}) uses the same row numbers and column numbers as A.) Since D(A_p) is strongly connected, there is a path γ from j to i in D(A_p), and hence in D(A_p(j → b^{(p)})). Now the path γ, followed by the arc e, gives rise to a 1-factor of A_p(j → b^{(p)}), which yields a 1-factor of A(j → b), contradicting the fact that x_j = 0 is a fixed zero solution of Ax = b. Thus b^{(p)} = 0. Since b^{(1)} ≠ 0, it must be that p ≥ 2.

We claim that there exists a sequence r_1, ..., r_m such that p = r_1 > r_2 > ... > r_m ≥ 1, A_{r_1 r_2} ≠ O, A_{r_2 r_3} ≠ O, ..., A_{r_{m−1} r_m} ≠ O and b^{(r_m)} ≠ 0. Since A_p is not isolated and b^{(p)} = 0, we have [A_{p1}, ..., A_{p,p−1}] ≠ O, and hence there is an r_2 < p such that A_{p r_2} ≠ O. If b^{(r_2)} ≠ 0, then we end up with the sequence p, r_2. If b^{(r_2)} = 0, then, since [A_{r_2 1}, ..., A_{r_2, r_2−1}] ≠ O, there is an r_3 < r_2 such that A_{r_2 r_3} ≠ O. If b^{(r_3)} ≠ 0, then we are done; otherwise we repeat this process. Eventually we will end up with a sequence with the required property because b^{(1)} ≠ 0. Thus there occurs a 1-factor of A(j → b) by Lemma 3.4, contradicting the fact that x_j = 0 is a fixed zero solution of Ax = b.

Therefore it is shown that A_q is an isolated block for some q ∈ {1, ..., p}. Now the system Ax = b can be written as

    A_q x^{(q)} = 0,
    A_β x_β = b_β,

where β = {1, ..., n} \ α_q, and it follows, by induction, that every component of x^{(p)} is a fixed zero solution of Ax = b. □

References

[1] L. Bassett, The scope of qualitative economics, Rev. Econ. Studies 29 (1962).
[2] L.
Bassett, J. Maybee, and J. Quirk, Qualitative economics and the scope of the correspondence principle, Econometrica 36 (1968).
[3] R. A. Brualdi and B. L. Shader, Matrices of sign-solvable linear systems, Cambridge University Press, New York.
[4] H. Minc, Nonnegative matrices, Wiley, New York, 1988.
[5] P. A. Samuelson, Foundations of Economic Analysis, Harvard University Press, Cambridge, 1947; Atheneum, New York.

Department of Mathematics Education, Kyungpook National University, Taegu, Korea
Helpsheet Giblin Eunson Library ATRIX ALGEBRA Use this sheet to help you: Understand the basic concepts and definitions of matrix algebra Express a set of linear equations in matrix notation Evaluate determinants
More informationB such that AB = I and BA = I. (We say B is an inverse of A.) Definition A square matrix A is invertible (or nonsingular) if matrix
Matrix inverses Recall... Definition A square matrix A is invertible (or nonsingular) if matrix B such that AB = and BA =. (We say B is an inverse of A.) Remark Not all square matrices are invertible.
More informationSplit Nonthreshold Laplacian Integral Graphs
Split Nonthreshold Laplacian Integral Graphs Stephen Kirkland University of Regina, Canada kirkland@math.uregina.ca Maria Aguieiras Alvarez de Freitas Federal University of Rio de Janeiro, Brazil maguieiras@im.ufrj.br
More information6.2 Permutations continued
6.2 Permutations continued Theorem A permutation on a finite set A is either a cycle or can be expressed as a product (composition of disjoint cycles. Proof is by (strong induction on the number, r, of
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 4. Gaussian Elimination In this part, our focus will be on the most basic method for solving linear algebraic systems, known as Gaussian Elimination in honor
More information2.5 Gaussian Elimination
page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3
More information1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form
1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and
More information1.5 Elementary Matrices and a Method for Finding the Inverse
.5 Elementary Matrices and a Method for Finding the Inverse Definition A n n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation Reminder:
More informationDirect Methods for Solving Linear Systems. Matrix Factorization
Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011
More informationChapter 7. Matrices. Definition. An m n matrix is an array of numbers set out in m rows and n columns. Examples. ( 1 1 5 2 0 6
Chapter 7 Matrices Definition An m n matrix is an array of numbers set out in m rows and n columns Examples (i ( 1 1 5 2 0 6 has 2 rows and 3 columns and so it is a 2 3 matrix (ii 1 0 7 1 2 3 3 1 is a
More informationAbstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).
MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 205/6 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More information8 Square matrices continued: Determinants
8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You
More informationIRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction
IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible
More informationAN ITERATIVE METHOD FOR THE HERMITIANGENERALIZED HAMILTONIAN SOLUTIONS TO THE INVERSE PROBLEM AX = B WITH A SUBMATRIX CONSTRAINT
Bulletin of the Iranian Mathematical Society Vol. 39 No. 6 (2013), pp 1291260. AN ITERATIVE METHOD FOR THE HERMITIANGENERALIZED HAMILTONIAN SOLUTIONS TO THE INVERSE PROBLEM AX = B WITH A SUBMATRIX CONSTRAINT
More informationSHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH
31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,
More informationMincost flow problems and network simplex algorithm
Mincost flow problems and network simplex algorithm The particular structure of some LP problems can be sometimes used for the design of solution techniques more efficient than the simplex algorithm.
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a nonempty
More informationClassification of Cartan matrices
Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms
More informationWeek 5 Integral Polyhedra
Week 5 Integral Polyhedra We have seen some examples 1 of linear programming formulation that are integral, meaning that every basic feasible solution is an integral vector. This week we develop a theory
More informationMarkov Chains, Stochastic Processes, and Advanced Matrix Decomposition
Markov Chains, Stochastic Processes, and Advanced Matrix Decomposition Jack Gilbert Copyright (c) 2014 Jack Gilbert. Permission is granted to copy, distribute and/or modify this document under the terms
More informationLecture Notes 1: Matrix Algebra Part B: Determinants and Inverses
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 57 Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses Peter J. Hammond email: p.j.hammond@warwick.ac.uk Autumn 2012,
More information4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION
4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:
More information2.3 Convex Constrained Optimization Problems
42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions
More informationLecture 21: The Inverse of a Matrix
Lecture 21: The Inverse of a Matrix Winfried Just, Ohio University October 16, 2015 Review: Our chemical reaction system Recall our chemical reaction system A + 2B 2C A + B D A + 2C 2D B + D 2C If we write
More informationT ( a i x i ) = a i T (x i ).
Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)
More informationIntroduction to Flocking {Stochastic Matrices}
Supelec EECI Graduate School in Control Introduction to Flocking {Stochastic Matrices} A. S. Morse Yale University Gif sur  Yvette May 21, 2012 CRAIG REYNOLDS  1987 BOIDS The Lion King CRAIG REYNOLDS
More information