# A Note on Partial Sign-Solvability

Suk-Geun Hwang and Jin-Woo Park


Bull. Korean Math. Soc. 43 (2006), No. 3, pp. 471–478

A NOTE ON PARTIAL SIGN-SOLVABILITY

Suk-Geun Hwang and Jin-Woo Park

Abstract. In this paper we prove that if $Ax = b$ is a partial sign-solvable linear system with $A$ a sign non-singular matrix and if $\alpha = \{j : x_j \text{ is sign-determined by } Ax = b\}$, then $A_\alpha x_\alpha = b_\alpha$ is a sign-solvable linear system, where $A_\alpha$ denotes the submatrix of $A$ occupying the rows and columns in $\alpha$, and $x_\alpha$ and $b_\alpha$ are the subvectors of $x$ and $b$ whose components lie in $\alpha$. For a sign non-singular matrix $A$, let $A_1, \ldots, A_k$ be the fully indecomposable components of $A$ and let $\alpha_r$ denote the set of row numbers of $A_r$, $r = 1, \ldots, k$. We also show that if $Ax = b$ is a partial sign-solvable linear system then, for $r = 1, \ldots, k$, if one of the components of $x_{\alpha_r}$ is a fixed zero solution of $Ax = b$, then so are all the components of $x_{\alpha_r}$.

## 1. Introduction

For a real number $a$, the sign of $a$ is defined by
$$\operatorname{sign}(a) = \begin{cases} 1, & \text{if } a > 0, \\ -1, & \text{if } a < 0, \\ 0, & \text{if } a = 0. \end{cases}$$
For a real matrix $A$, let $Q(A)$ denote the set of all real matrices $B = [b_{ij}]$ of the same size as $A$ such that $\operatorname{sign}(b_{ij}) = \operatorname{sign}(a_{ij})$ for all $i, j$. For a matrix $A$, the matrix obtained from $A$ by replacing each entry by its sign is a $(1, -1, 0)$-matrix; in that sense, a $(1, -1, 0)$-matrix is called a sign-pattern matrix.

Consider a linear system $Ax = b$, where $x = (x_1, \ldots, x_n)^T$. A component $x_j$ of $x$ is said to be sign-determined by $Ax = b$ if, for every $\tilde{A} \in Q(A)$ and every $\tilde{b} \in Q(b)$, $x_j$ is determined by $\tilde{A}x = \tilde{b}$ and all

Received February 25. Mathematics Subject Classification: 15A99. Key words and phrases: sign-solvable linear system, partial sign-solvable linear system. This work was supported by the SRC/ERC program of MOST/KOSEF (grant # R ).

the elements of the set $\{\tilde{x}_j : \tilde{A}\tilde{x} = \tilde{b},\ \tilde{A} \in Q(A),\ \tilde{b} \in Q(b)\}$ have the same sign. Such a common sign will be denoted by $\operatorname{sign}(x_j : Ax = b)$ in the sequel. In particular, if $\operatorname{sign}(x_j : Ax = b) = 0$, then we call $x_j$ a fixed zero solution. The system $Ax = b$ is called partial sign-solvable if at least one of the components of $x$ is sign-determined by $Ax = b$ [3], and sign-solvable if all of the components of $x$ are sign-determined [5].

A matrix $A$ is called an L-matrix if every $\tilde{A} \in Q(A)$ has linearly independent rows. A square L-matrix is called a sign non-singular matrix (abbr. SNS-matrix) [2]. It is well known that $A$ is an SNS-matrix if and only if at least one of the $n!$ terms in the expansion of $\det A$ is nonzero and all the nonzero terms have the same sign [1, 2, 5]. It is noted in [3] that, for an $m \times n$ matrix $A$ having no $p \times q$ zero submatrix with $p + q \geq n$, if $Ax = b$ is partial sign-solvable then $A^T$ is an L-matrix; thus $A$ is an SNS-matrix if $m = n$. An $n \times n$ matrix is called fully indecomposable if it contains no $p \times q$ zero submatrix with $p + q = n$.

Suppose that $Ax = b$ is a partial sign-solvable linear system, where $A$ is a sign non-singular matrix. Let $\alpha = \{j : x_j \text{ is sign-determined by } Ax = b\}$, let $A_\alpha$ denote the principal submatrix of $A$ occupying the rows and columns in $\alpha$, and let $x_\alpha$, $b_\alpha$ denote the subvectors of $x$ and $b$, respectively, whose components lie in $\alpha$. In this paper we prove that $A_\alpha x_\alpha = b_\alpha$ is a sign-solvable linear system and that, for $j \in \alpha$, $\operatorname{sign}(x_j : A_\alpha x_\alpha = b_\alpha) = \operatorname{sign}(x_j : Ax = b)$ unless $\operatorname{sign}(x_j : A_\alpha x_\alpha = b_\alpha) = 0$ while $\operatorname{sign}(x_j : Ax = b) \neq 0$.

By a theorem of Frobenius [4], $A$ can be transformed, by permutations of rows and columns if necessary, into the form
$$\begin{bmatrix} A_1 & O & \cdots & O & O \\ A_{21} & A_2 & \cdots & O & O \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ A_{k-1,1} & A_{k-1,2} & \cdots & A_{k-1} & O \\ A_{k1} & A_{k2} & \cdots & A_{k,k-1} & A_k \end{bmatrix} \tag{1.1}$$
where $A_r$, $r = 1, \ldots, k$, are fully indecomposable matrices. The matrices $A_1, \ldots, A_k$ are called the fully indecomposable components of $A$.
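For small matrices, the determinant characterization of SNS-matrices quoted above can be tested directly by enumerating all $n!$ terms of the expansion of $\det A$. The following is a minimal sketch in plain Python, not from the paper; the helper names `perm_sign` and `is_sns` are ours:

```python
from itertools import permutations

def perm_sign(p):
    # sign of the permutation p, computed from its inversion count
    inv = sum(p[i] > p[k] for i in range(len(p)) for k in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def is_sns(A):
    # A square matrix A is an SNS-matrix iff at least one of the n! terms
    # in the expansion of det A is nonzero and all nonzero terms have the
    # same sign; only the sign pattern of A matters here.
    n = len(A)
    signs = set()
    for p in permutations(range(n)):
        s = perm_sign(p)
        for i in range(n):
            entry = A[i][p[i]]
            if entry == 0:
                s = 0
                break
            if entry < 0:
                s = -s
        if s != 0:
            signs.add(s)
    return len(signs) == 1
```

Since only sign patterns matter, `is_sns` returns the same answer for every matrix in $Q(A)$; the enumeration of all $n!$ permutations restricts this check to small $n$.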
For each $r = 1, \ldots, k$, let $\alpha_r$ denote the set of row numbers of $A_r$, and let $x^{(r)}$, $b^{(r)}$ denote the vectors $x_{\alpha_r}$ and $b_{\alpha_r}$ for brevity. In this paper we show that if $Ax = b$ is partial sign-solvable, then, for each $r = 1, \ldots, k$, either none of the components of $x^{(r)}$ is a fixed zero solution of $Ax = b$ or all of the components of $x^{(r)}$ are fixed zero solutions.

## 2. Preliminaries

For an $n \times n$ matrix $A = [a_{ij}]$, the digraph $D(A)$ associated with $A$ is the digraph whose vertices are $1, 2, \ldots, n$ and which has an arc $i \to j$ if and only if $a_{ij} \neq 0$. A 1-factor of $D(A)$ is a set of cycles or loops $\gamma_1, \ldots, \gamma_k$ such that each of the vertices $1, 2, \ldots, n$ belongs to exactly one of the $\gamma_i$'s. A 1-factor of $D(A)$ gives rise to a nonzero term in the expansion of $\det A$. We call a 1-factor positive (resp. negative) if the product of the signs of its arcs is positive (resp. negative), where the sign of an arc $i \to j$ is defined to be $\operatorname{sign}(a_{ij})$.

A square matrix $A$ is called irreducible if there does not exist a permutation matrix $P$ such that
$$PAP^T = \begin{bmatrix} X & O \\ Z & Y \end{bmatrix},$$
where $X$ and $Y$ are nonvacuous square matrices. Clearly a fully indecomposable matrix is irreducible. A digraph $D$ is called strongly connected if for any two vertices $u, v$ of $D$ there is a path from $u$ to $v$. It is well known that a square matrix $A$ is irreducible if and only if $D(A)$ is strongly connected [4].

Given a linear system $Ax = b$, let $Bx = c$ be a linear system obtained from $Ax = b$ by applying one or more of the following operations:

- permuting rows of $(A, b)$;
- simultaneously permuting columns of $A$ and components of $x$;
- multiplying a row of $(A, b)$ by $-1$;
- multiplying a column of $A$ and the corresponding component of $x$ by $-1$.

Then $Bx = c$ has the same solution as $Ax = b$. If $A$ is fully indecomposable, then there are permutation matrices $P, Q, R$ and diagonal matrices $D, E$ with diagonal entries $1$ or $-1$ such that $RDPAQER^T$ has the form (1.1) and has $-1$'s on its main diagonal.

Assume that $A$ is an $n \times n$ SNS-matrix and suppose that $Ax = b$ is a partial sign-solvable linear system. Since, by Cramer's rule,
$$x_j = \frac{\det A(j \to b)}{\det A}, \qquad j = 1, 2, \ldots, n,$$

where, here and in the sequel, $A(j \to b)$ denotes the matrix obtained from $A$ by replacing column $j$ by the vector $b$, it is clear that $x_j$ is sign-determined if and only if either $A(j \to b)$ has identically zero determinant or $A(j \to b)$ is an SNS-matrix.

In the sequel, for a submatrix $B$ of $A$, we assume that $B$ uses the same row numbers and column numbers as in $A$. For example, if
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}, \qquad B = \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix},$$
then rows 2 and 3 of $B$ are $(a_{22}, a_{23})$ and $(a_{32}, a_{33})$, and columns 2 and 3 of $B$ are $(a_{22}, a_{32})^T$ and $(a_{23}, a_{33})^T$. The indices of the components of subvectors of $x$ and $b$ are used in the same way.

We close this section with the following lemma.

Lemma 2.1. Let $A$ be an $n \times n$ SNS-matrix with negative main diagonal entries and let $\alpha \subseteq \{1, 2, \ldots, n\}$. Then the following hold.

(1) $A_\alpha$ is an SNS-matrix.
(2) If, for some $j \in \alpha$, $x_j$ is sign-determined by $Ax = b$, then $x_j$ is sign-determined by $A_\alpha x_\alpha = b_\alpha$.

Proof. (1) Since $A_\alpha$ is a principal submatrix of $A$, $D(A_\alpha)$ has a 1-factor. Since every 1-factor of $D(A_\alpha)$ gives rise to a 1-factor of $D(A)$, the sign non-singularity of $A_\alpha$ follows from that of $A$.

(2) Suppose that $j \in \alpha$ and that $x_j$ is not sign-determined by $A_\alpha x_\alpha = b_\alpha$. Then $A_\alpha(j \to b_\alpha)$ has both a positive 1-factor and a negative 1-factor, which yield a positive 1-factor and a negative 1-factor of $A(j \to b)$ since $A$ has negative main diagonal entries, contradicting that $x_j$ is sign-determined by $Ax = b$.

## 3. Main results

We first show that a partial sign-solvable linear system has a sign-solvable subsystem.

Theorem 3.1. Let $A$ be an $n \times n$ SNS-matrix with negative main diagonal entries, let $Ax = b$ be a partial sign-solvable linear system, and let $\alpha = \{j : x_j \text{ is sign-determined by } Ax = b\}$. Then the following hold.

(1) $A_\alpha$ is an SNS-matrix and $A_\alpha x_\alpha = b_\alpha$ is sign-solvable.
(2) For $j \in \alpha$,
(i) if $x_j = 0$ is a fixed zero solution of $Ax = b$, then $x_j = 0$ is a fixed zero solution of $A_\alpha x_\alpha = b_\alpha$, and
(ii) if $x_j$ is not a fixed zero solution of $Ax = b$, then $\operatorname{sign}(x_j : A_\alpha x_\alpha = b_\alpha) = \operatorname{sign}(x_j : Ax = b)$ unless $\operatorname{sign}(x_j : A_\alpha x_\alpha = b_\alpha) = 0$.

Proof. (1) follows directly from Lemma 2.1.

(2) (i) Every 1-factor of $D(A_\alpha(j \to b_\alpha))$ yields a 1-factor of $D(A(j \to b))$. Since $x_j = 0$ is a fixed zero solution of $Ax = b$, $D(A(j \to b))$ has no 1-factor. Therefore $D(A_\alpha(j \to b_\alpha))$ has no 1-factor, telling us that $A_\alpha(j \to b_\alpha)$ has identically zero determinant.

(ii) Suppose that $x_j$ is not a fixed zero solution of $Ax = b$, and let $|\alpha| = m$, where $|\alpha|$ stands for the number of elements of $\alpha$. Then
$$\operatorname{sign}(\det A(j \to b)) = (-1)^{n-m} \operatorname{sign}(\det A_\alpha(j \to b_\alpha))$$
unless $A_\alpha(j \to b_\alpha)$ has identically zero determinant. Since $\operatorname{sign}(\det A) = (-1)^{n-m} \operatorname{sign}(\det A_\alpha)$, we see that
$$\frac{\det A_\alpha(j \to b_\alpha)}{\det A_\alpha} \quad \text{and} \quad \frac{\det A(j \to b)}{\det A}$$
have the same sign unless $\det A_\alpha(j \to b_\alpha) = 0$.

It may well be that $\operatorname{sign}(x_j : A_\alpha x_\alpha = b_\alpha) = 0$ even though $\operatorname{sign}(x_j : Ax = b) \neq 0$, as the following example shows.

Example 3.2. Let A = , b = 1 0 , and let $x = (x_1, x_2, x_3, x_4)^T$ be a solution of $Ax = b$. Then $\operatorname{sign}(x_3 : Ax = b) = \operatorname{sign}(x_4 : Ax = b) = 1$, but $x_3$ and $x_4$ are fixed zero solutions of the sign-solvable subsystem
$$\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

In the rest of this section we investigate the distribution of the fixed zero solutions of a partial sign-solvable linear system.

Lemma 3.3. Let $A$ be a fully indecomposable SNS-matrix and let $Ax = b$ be a partial sign-solvable linear system. Then the following are equivalent.

(1) $Ax = b$ has a fixed zero solution.

(2) $b = 0$.
(3) Every component of $x$ is a fixed zero solution of $Ax = b$.

Proof. (1) $\Rightarrow$ (2): Suppose, on the contrary, that $b = (b_1, \ldots, b_n)^T \neq 0$, so that $b_i \neq 0$ for some $i$, and let $x_j = 0$ be a fixed zero solution of $Ax = b$. Then $A(j \to b)$ has identically zero determinant. Since the $(i, j)$-entry of $A(j \to b)$, which is $b_i$, is not zero, it follows that $A(i \mid j)$, the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting row $i$ and column $j$, has identically zero determinant, so that there exists a $p \times q$ zero submatrix of $A(i \mid j)$ with $p + q = (n-1) + 1$, contradicting the full indecomposability of $A$.

(2) $\Rightarrow$ (3) and (3) $\Rightarrow$ (1) are clear.

Lemma 3.4. Let $A$ be a square matrix of the form
$$A = \begin{bmatrix} A_1 & O & \cdots & O & B_1 \\ B_2 & A_2 & O & \cdots & O \\ O & B_3 & \ddots & & \vdots \\ \vdots & & \ddots & A_{k-1} & O \\ O & \cdots & O & B_k & A_k \end{bmatrix},$$
where $A_r$ is fully indecomposable and $B_r \neq O$ for each $r = 1, 2, \ldots, k$. Then $D(A)$ has a 1-factor.

Proof. If, for some distinct $p, q \in \{1, \ldots, k\}$, there is an arc in $D(A)$ from a vertex of $D(A_p)$ to a vertex of $D(A_q)$, we simply say that there is an arc from $D(A_p)$ to $D(A_q)$. By the structure of $A$, there is an arc $e_r$ from $D(A_r)$ to $D(A_{r-1})$ for each $r = 1, 2, \ldots, k$, where $A_0$ is taken to be $A_k$. Since $D(A_r)$ is strongly connected for each $r = 1, \ldots, k$, there occurs a cycle in $D(A)$ containing all the $e_r$'s as its part, and the lemma is proved.

Now we are ready to prove our last main theorem.

Theorem 3.5. Let $A$ be an SNS-matrix of the form (1.1) with negative main diagonal entries. Then, for each $r = 1, \ldots, k$, either none of the components of $x_{\alpha_r}$ is a fixed zero solution of $Ax = b$ or all of the components of $x_{\alpha_r}$ are fixed zero solutions of $Ax = b$. Moreover, in the latter case, $b_{\alpha_r} = 0$.

Proof. We write $x^{(r)}$ and $b^{(r)}$ instead of $x_{\alpha_r}$ and $b_{\alpha_r}$ for the sake of simplicity, and prove the theorem by induction on $k$. The case $k = 1$ follows directly from Lemma 3.3.
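Since a 1-factor of $D(A)$ corresponds to a nonzero diagonal of $A$ (a nonzero term of $\det A$), the conclusion of Lemma 3.4 can be verified by brute force on small instances. The following is a minimal sketch in plain Python; the function name `has_one_factor` and the $4 \times 4$ matrix (our own $k = 2$ instance of the block form above, not taken from the paper) are illustrative only:

```python
from itertools import permutations

def has_one_factor(A):
    # D(A) has a 1-factor iff some permutation p picks a nonzero entry
    # A[i][p[i]] from every row i, i.e. det A has a nonzero term
    n = len(A)
    return any(all(A[i][p[i]] != 0 for i in range(n))
               for p in permutations(range(n)))

# k = 2 instance of the block form in Lemma 3.4: fully indecomposable
# diagonal blocks A1, A2 with nonzero coupling blocks B1 (top right)
# and B2 (below the diagonal)
A = [
    [1, 1, 0, 1],  # [ A1 | B1 ]
    [1, 1, 0, 0],
    [0, 1, 1, 1],  # [ B2 | A2 ]
    [0, 0, 1, 1],
]
```

As Lemma 3.4 predicts, `has_one_factor(A)` is `True` for this instance.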

Let $k \geq 2$. In the proof of the theorem, we call a diagonal block $A_r$ an isolated block if all the blocks in block row $r$ are $O$ except for $A_r$ and if $b^{(r)} = 0$. Let $x_j = 0$ be a fixed zero solution of $Ax = b$, where $j \in \alpha_p$, and suppose that none of $A_1, \ldots, A_p$ is an isolated block. Then, first of all, $b^{(1)} \neq 0$ because $A_1$ is not an isolated block.

If $b^{(p)} \neq 0$, then there is an $i \in \alpha_p$ such that $b_i \neq 0$. Note that $A(j \to b)_{\alpha_p} = A_p(j \to b^{(p)})$ and that $b_i \neq 0$ is the $(i, j)$-entry of $A_p(j \to b^{(p)})$, so that there is an arc $e$ from $i$ to $j$ in $D(A_p(j \to b^{(p)}))$. (Remember that $A_p(j \to b^{(p)})$ uses the same row numbers and column numbers as $A_p$.) Since $D(A_p)$ is strongly connected, there is a path $\gamma$ from $j$ to $i$ in $D(A_p)$ and hence in $D(A_p(j \to b^{(p)}))$. Now the path $\gamma$ together with the arc $e$ gives rise to a 1-factor of $A_p(j \to b^{(p)})$, which yields a 1-factor of $A(j \to b)$, contradicting that $x_j = 0$ is a fixed zero solution of $Ax = b$. Thus $b^{(p)} = 0$, and since $b^{(1)} \neq 0$, it must be that $p \geq 2$.

We claim that there exists a sequence $r_1, \ldots, r_m$ such that $p = r_1 > r_2 > \cdots > r_m \geq 1$, $A_{r_1 r_2} \neq O$, $A_{r_2 r_3} \neq O$, $\ldots$, $A_{r_{m-1} r_m} \neq O$, and $b^{(r_m)} \neq 0$. Since $A_p$ is not isolated and $b^{(p)} = 0$, we have $[A_{p1}, \ldots, A_{p,p-1}] \neq O$, and hence there is an $r_2 < p$ such that $A_{p r_2} \neq O$. If $b^{(r_2)} \neq 0$, then we end up with the sequence $p, r_2$. If $b^{(r_2)} = 0$, then, since $[A_{r_2 1}, \ldots, A_{r_2, r_2-1}] \neq O$, there is an $r_3 < r_2$ such that $A_{r_2 r_3} \neq O$. If $b^{(r_3)} \neq 0$, then we are done; otherwise we repeat this process. Eventually we end up with a sequence with the required property because $b^{(1)} \neq 0$. Thus there occurs a 1-factor of $A(j \to b)$ by Lemma 3.4, contradicting that $x_j = 0$ is a fixed zero solution of $Ax = b$.

Therefore $A_q$ is an isolated block for some $q \in \{1, \ldots, p\}$. Now the system $Ax = b$ can be written as
$$\begin{cases} A_q x^{(q)} = 0, \\ A_\beta x_\beta = b_\beta, \end{cases}$$
where $\beta = \{1, \ldots, n\} \setminus \alpha_q$, and it follows, by induction, that every component of $x^{(p)}$ is a fixed zero solution of $Ax = b$.

## References

[1] L. Bassett, The scope of qualitative economics, Rev. Econ. Studies 29 (1962).
[2] L. Bassett, J. Maybee, and J. Quirk, Qualitative economics and the scope of the correspondence principle, Econometrica 36 (1968).
[3] R. A. Brualdi and B. L. Shader, Matrices of sign-solvable linear systems, Cambridge University Press, New York.
[4] H. Minc, Nonnegative matrices, Wiley, New York, 1988.
[5] P. A. Samuelson, Foundations of Economic Analysis, Harvard University Press, Cambridge, 1947; Atheneum, New York.

Department of Mathematics Education, Kyungpook National University, Taegu, Korea


### Split Nonthreshold Laplacian Integral Graphs

Split Nonthreshold Laplacian Integral Graphs Stephen Kirkland University of Regina, Canada kirkland@math.uregina.ca Maria Aguieiras Alvarez de Freitas Federal University of Rio de Janeiro, Brazil maguieiras@im.ufrj.br

### 6.2 Permutations continued

6.2 Permutations continued Theorem A permutation on a finite set A is either a cycle or can be expressed as a product (composition of disjoint cycles. Proof is by (strong induction on the number, r, of

### 1 Solving LPs: The Simplex Algorithm of George Dantzig

Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 4. Gaussian Elimination In this part, our focus will be on the most basic method for solving linear algebraic systems, known as Gaussian Elimination in honor

### 2.5 Gaussian Elimination

page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

### 1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form

1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and

### 1.5 Elementary Matrices and a Method for Finding the Inverse

.5 Elementary Matrices and a Method for Finding the Inverse Definition A n n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation Reminder:

### Direct Methods for Solving Linear Systems. Matrix Factorization

Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

### Chapter 7. Matrices. Definition. An m n matrix is an array of numbers set out in m rows and n columns. Examples. ( 1 1 5 2 0 6

Chapter 7 Matrices Definition An m n matrix is an array of numbers set out in m rows and n columns Examples (i ( 1 1 5 2 0 6 has 2 rows and 3 columns and so it is a 2 3 matrix (ii 1 0 7 1 2 3 3 1 is a

### Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).

MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix

### NOTES on LINEAR ALGEBRA 1

School of Economics, Management and Statistics University of Bologna Academic Year 205/6 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

### IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible

### AN ITERATIVE METHOD FOR THE HERMITIAN-GENERALIZED HAMILTONIAN SOLUTIONS TO THE INVERSE PROBLEM AX = B WITH A SUBMATRIX CONSTRAINT

Bulletin of the Iranian Mathematical Society Vol. 39 No. 6 (2013), pp 129-1260. AN ITERATIVE METHOD FOR THE HERMITIAN-GENERALIZED HAMILTONIAN SOLUTIONS TO THE INVERSE PROBLEM AX = B WITH A SUBMATRIX CONSTRAINT

### SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH

31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,

### Min-cost flow problems and network simplex algorithm

Min-cost flow problems and network simplex algorithm The particular structure of some LP problems can be sometimes used for the design of solution techniques more efficient than the simplex algorithm.

### Vector and Matrix Norms

Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

### Classification of Cartan matrices

Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

### Week 5 Integral Polyhedra

Week 5 Integral Polyhedra We have seen some examples 1 of linear programming formulation that are integral, meaning that every basic feasible solution is an integral vector. This week we develop a theory

### Markov Chains, Stochastic Processes, and Advanced Matrix Decomposition

Markov Chains, Stochastic Processes, and Advanced Matrix Decomposition Jack Gilbert Copyright (c) 2014 Jack Gilbert. Permission is granted to copy, distribute and/or modify this document under the terms

### Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 57 Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses Peter J. Hammond email: p.j.hammond@warwick.ac.uk Autumn 2012,

### 4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:

### 2.3 Convex Constrained Optimization Problems

42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions

### Lecture 21: The Inverse of a Matrix

Lecture 21: The Inverse of a Matrix Winfried Just, Ohio University October 16, 2015 Review: Our chemical reaction system Recall our chemical reaction system A + 2B 2C A + B D A + 2C 2D B + D 2C If we write